AI-ing the Practices in Investments Unlimited

Avatar for Helen Beal Helen Beal
November 13, 2025

Helen Beal, co-author of the novel *Investments Unlimited*, explores the intersection of accelerated software delivery, regulatory pressure, and the transformative power of AI.

The presentation uses the narrative of Investments Unlimited to discuss how organizations can thrive in the digital age by addressing DevOps, Security, Audit, and Compliance. The book's inciting incident involves a potential regulatory action (MRIA) against the fictional firm IUI.

Key Takeaways:

- Beyond DevOps: Learn why simply adopting DevOps principles isn't enough. The challenge is systematically including all parties—Security, Compliance, and Risk (GRC)—in a shift-left mentality, leveraging the three ways of flow, feedback, and continuous learning.
- Automated Governance: Discover how AI and ML are critical for achieving continuous compliance:
  - AI/ML automates the identification and aggregation of evidence to prove control effectiveness ("Continuous Evidence").
  - AI-powered tools enforce "policy as code" and automate cross-referencing of activity against regulatory frameworks (like SOX or GDPR).
  - In DevSecOps, AI enhances SAST/DAST tools for proactive vulnerability detection and reduces false positives by learning from past remediation actions.
- The AI Outlook: The path of Generative AI, AI Agents, and AI Engineering through the Trough of Disillusionment and onto the Plateau of Productivity by 2029.
- Practical advice for adopting AI, including starting with proven process automation use-cases and leveraging Knowledge Graphs for compliance and audit readiness.


Transcript

  1. Internal. Helen Beal: Helen leads the ambassador program at PeopleCert for DEVOPS INSTITUTE, ITIL, PRINCE2 and LanguageCert. She is the founder of Flowtopia, the global community for value stream practitioners. She is the lead author of the State of VSM Report and the State of Availability Report, and an adjunct researcher at IDC. She is a co-author of the book about DevOps and governance, Investments Unlimited, published by IT Revolution. Bringing joy to work.
  2. About PeopleCert: We are in the business of dream making, turning dreams into reality and fuelling the dream economy. PeopleCert is the global leader in the development of best practice frameworks and certifications that improve organizational efficiency and enhance the lives and careers of people. Our vision | To empower organizations and people to achieve what they are capable of. Millions of candidates and individuals; 50,000 leading companies (82% of the Fortune 500); 800 government departments in 45 countries. Our values | Quality | Innovation | Passion | Integrity | Clarity | Velocity. Our guiding principles.
  3. Talk map, our flow today: GENESIS: why Investments Unlimited was written. PRACTICES: GRC and DevSecOps. REALITY: where’s our AI at? In conclusion: the outlook.
  4. To all those change agents in every organization who dare to challenge the status quo, build bridges instead of walls, and propel us into the unlimited future.
  5. The Characters: BOARD Bernard Collins; CEO Susan Jones; SVP Digital Jason Colbert; CRCO Jada King; Andrea Regan; AUDIT FIRM Laura Perez; CISO Tim Jones; CIO Jennifer Limus; Security Barry David; VP Product Bill Lucas; Lucy; VP Engineering Carol Smith; Sr. Staff Engineer Michelle Dundin; Engineer Omar; SRE Dillon; FRB Officer Greg Dorshaw.
  6. The Inciting Incident. To: Greg Dorshaw. Subject: IUI Preliminary Examination Results. Greg, looks like history is repeating itself. Seems like another fintech firm is going to require a formal action. The team is quite concerned… “How did you find out?” “I met with Bernard this evening at our regular two-finger Scotch session. He let me know that the MRIA will be issued to IUI. You know, it may feel like regulators are out to get us, but they’re really there to help us and protect our customers.” “You could have fooled me.” “It’s not uncommon for an MRIA to be informally notified through back channels. Bernard has a good relationship with the director of the regulatory agency…”
  7. Learning. “Take The DevOps Handbook. It points out three key aspects of DevOps: flow, feedback, and continuous learning.” “Yeah, the three ways.” “You betcha. Well, those same concepts can be applied to Security, Compliance, Risk and any other stakeholder along a value stream. These days, I’d argue that Development versus Operations is mostly solved. Now it’s all about systematically looking at all other parties that ensure the quality of software and including them in our shift-left mentality.”
  8. Learning. Dear Auditor, We realize that we have been changing our practices from Agile and DevOps to cloud and containers. Yes, we have been busy, and we are having great success delivering faster than ever with better quality, responding competitively to market pressure. However, this approach isn’t just icing on the cake. The only sustainable advantage in our industry is the ability to meet customer demands faster and more reliably than our competitors. But with all this growth, we made a tragic mistake: we forgot to bring you along for the ride. That is totally our fault and we want to make it right. We are going to make some new commitments.
  9. Learning. “Addressing the symptoms as they exposed themselves was the catalyst for an ever-slowing software delivery process. It was always in the name of security and risk. More and more processes were created, more complexity was added to the systems and more time-wasting meetings were required. It was like organizational scar tissue.”
  10. The DevOps Risks and Controls Matrix (RCM):
    - Unauthorized changes to production: PAM
    - Production breaks due to human error: IaC
    - Material misstatement of financial data: SoD
    - Intellectual property and licensing violation: SBOM
    - Data breach from unauthorized access: DAR
    - Unwanted customer impact: Progressive delivery
    - Business continuity: BCM/DR
    - Divergence of audit evidence from developer evidence
    - Data in appropriate jurisdiction: GDPR or PII
    - Compromise/unknown breach of infrastructure: Red Team
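One way to make a matrix like this actionable is to encode it as data rather than a spreadsheet, so a pipeline can check that every tracked risk has a mapped control. A minimal Python sketch (the risk and control names follow the slide; the coverage check itself is illustrative, not part of the book):

```python
# The DevOps Risks and Controls Matrix (RCM), encoded as data so tooling
# (not a document) is the system of record for control coverage.
RCM = {
    "Unauthorized changes to production": "PAM",
    "Production breaks due to human error": "IaC",
    "Material misstatement of financial data": "SoD",
    "Intellectual property and licensing violation": "SBOM",
    "Data breach from unauthorized access": "DAR",
    "Unwanted customer impact": "Progressive delivery",
    "Business continuity": "BCM/DR",
    "Divergence of audit evidence from developer evidence": None,  # no control named on the slide
    "Data in appropriate jurisdiction": "GDPR or PII",
    "Compromise/unknown breach of infrastructure": "Red Team",
}

def unmitigated(rcm):
    """Return the risks that have no mapped control yet."""
    return [risk for risk, control in rcm.items() if control is None]

missing = unmitigated(RCM)  # risks still needing a control
```

A check like this can run in CI, failing the build when a new risk is added without a corresponding control.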
  11. “1. We have to talk about AIs, plural. 2. Their ability to answer questions is probably the least important thing about them. 3. You are not late.” (Kevin Kelly)
  12. Key practices and how AI helps:
    - Proactive Vulnerability Detection: AI-enhanced Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) tools go beyond static rules. They use machine learning to predict potential vulnerabilities based on code patterns and historical data, offering developers immediate, contextual feedback right in their code editor.
    - Software Composition Analysis (SBOM): AI tools automatically and continuously generate a Software Bill of Materials (SBOM) and then use ML to monitor for new vulnerabilities (CVEs) in those third-party components, flagging risks that arise after the initial build.
    - Reducing False Positives: By learning from past alerts and developer remediation actions, ML models can significantly reduce the volume of false positive security alerts, allowing developers and security teams to focus only on genuine, high-risk issues.
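The false-positive point can be made concrete with the smallest possible model: score each alert rule by its historical true-positive rate from past triage outcomes, and suppress rules developers have consistently dismissed. A toy sketch (the rule names and the 0.2 threshold are invented for illustration; production tools learn much richer features than a per-rule rate):

```python
from collections import defaultdict

def true_positive_rates(history):
    """history: iterable of (rule_id, was_real_issue) pairs from past triage."""
    counts = defaultdict(lambda: [0, 0])  # rule_id -> [real, total]
    for rule_id, was_real in history:
        counts[rule_id][0] += int(was_real)
        counts[rule_id][1] += 1
    return {rule: real / total for rule, (real, total) in counts.items()}

def triage(alerts, history, threshold=0.2):
    """Split incoming alerts into (show, suppress) by historical signal."""
    rates = true_positive_rates(history)
    show, suppress = [], []
    for alert in alerts:
        # Rules with no history default to "show": no basis to suppress them.
        if rates.get(alert, 1.0) >= threshold:
            show.append(alert)
        else:
            suppress.append(alert)
    return show, suppress

# Past triage: every "hardcoded-tmp-path" alert turned out to be noise.
history = [("sql-injection", True), ("sql-injection", True),
           ("hardcoded-tmp-path", False), ("hardcoded-tmp-path", False),
           ("hardcoded-tmp-path", False), ("hardcoded-tmp-path", False)]
show, suppress = triage(["sql-injection", "hardcoded-tmp-path", "new-rule"], history)
```

The design choice worth noting is the default for unseen rules: suppressing only what the model has evidence against keeps the sketch fail-safe.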
  13. Key practices and how AI helps:
    - Complete Automation for Evidence: AI/ML algorithms analyze massive log files, configuration data, and network traffic in real time. They automatically identify and aggregate the required data points (evidence) to prove a control is effective (e.g., “all code changes were peer-reviewed” or “vulnerability scans ran successfully”).
    - GRC and Compliance Monitoring: AI-powered Governance, Risk, and Compliance (GRC) tools automate the tedious tasks of cross-referencing activity against regulatory frameworks (like SOX or GDPR), flagging any deviations immediately, and generating audit-ready reports on demand.
    - Policy as Code Enforcement: AI agents are deployed to continuously scan and enforce “policy as code” throughout the Continuous Integration/Continuous Delivery (CI/CD) pipeline, acting as an automated gatekeeper that prevents non-compliant changes from ever reaching production.
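“Continuous evidence” boils down to deriving control attestations from the delivery system’s own records rather than screenshots. A minimal sketch of the peer-review control mentioned above, assuming hypothetical change records with `author` and `reviewed_by` fields (the field names and change IDs are invented for illustration):

```python
def peer_review_evidence(changes):
    """Evaluate the "all code changes were peer-reviewed" control.

    changes: list of dicts with 'id', 'author', 'reviewed_by' (None if unreviewed).
    Returns (control_passed, exception_ids); the exception list itself is the
    audit evidence, generated on demand instead of collected manually.
    """
    exceptions = [
        c["id"] for c in changes
        if not c["reviewed_by"] or c["reviewed_by"] == c["author"]  # self-review fails SoD
    ]
    return (len(exceptions) == 0, exceptions)

changes = [
    {"id": "CHG-101", "author": "michelle", "reviewed_by": "omar"},
    {"id": "CHG-102", "author": "omar", "reviewed_by": "omar"},   # self-approved
    {"id": "CHG-103", "author": "carol", "reviewed_by": None},    # never reviewed
]
passed, exceptions = peer_review_evidence(changes)
```

Run continuously over the change log, a query like this turns an annual audit sample into a standing, always-current attestation.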
  14. Automated Governance: traditional versus continuous compliance, by need:
    - Process conformance: checklists become risk controls as code
    - Change management: change tickets become self-documenting change
    - Governance: audits become compliance monitoring
  15. Key practices and how AI helps:
    - Predictive Risk Modeling: AI can analyze flow metrics, security incidents, and configuration drift to predict when a system is approaching its “risk budget” or is likely to fail. This allows the team to proactively pause feature work and focus on remediation before an MRIA-level event occurs.
    - Intelligent Incident Response: In the event of a security breach or outage, AI can rapidly analyze logs, traces, and metrics across disparate systems to perform automated root cause analysis, correlate related alerts, and suggest specific, pre-approved remediation steps to accelerate recovery time.
    - System Behavior Analysis: AI is used for anomaly detection, learning the “normal” behavior of an application in production. Any deviation, which might indicate a security threat or operational issue, is flagged immediately, helping to protect the integrity of the value stream.
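The “learn normal, flag deviation” idea behind system behavior analysis can be illustrated with the simplest possible detector: a z-score against a learned baseline. Real systems use far richer models (seasonality, multivariate correlation); the latency numbers below are invented:

```python
import statistics

def zscore_anomalies(baseline, observations, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from the baseline."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observations if abs(x - mean) / stdev > threshold]

# Baseline: "normal" request latencies in ms, learned from production.
baseline = [102, 98, 101, 99, 103, 97, 100, 100, 102, 98]
anomalies = zscore_anomalies(baseline, [101, 99, 250, 100])  # -> [250]
```

Even this toy version captures the operational payoff: the threshold is learned from the system’s own history, not hand-set per service.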
  16. “You know what we did today? We applied the same concepts of infrastructure as code to our governance. I’m going to go out on a limb with this one. Omar, what you showed with Rego, you showed policy as code. Our policies can be source controlled, just like our software and some of our infrastructure.” “Policy as code? Does this mean that Audit and Risk need to hire developers and learn to write code? Your demo seemed great, but if we have to write code, I’m not sure this will work.” “Um, I guess we didn’t think about that.” “No, I don’t think so, Andrea. This is where we can collaborate. Based on how things are being built, someone will need to understand how to write the policies into the Rego, but it doesn’t have to be Risk. We can have a policy team. Andrea, or Barry, when we need to implement a control with this approach, an engineer can be there to help.”
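The Rego demo from the scene isn’t reproduced in the deck, so as a stand-in, here is a hedged Python sketch of the same shape: policy expressed as named, source-controlled rules plus a small evaluator acting as a pipeline gate. The rule names and change fields (`approved`, `has_sbom`, `freeze_active`) are invented for illustration; OPA/Rego is the actual tool the book uses:

```python
# Policy as code: each rule is a named predicate over a proposed change.
# In the book this lives in OPA's Rego; plain Python shows the structure.
POLICY = {
    "change must be approved": lambda change: change["approved"],
    "artifact must carry an SBOM": lambda change: change["has_sbom"],
    "no deploys during change freeze": lambda change: not change["freeze_active"],
}

def evaluate(change, policy=POLICY):
    """Return the names of violated rules; an empty list means the gate opens."""
    return [name for name, rule in policy.items() if not rule(change)]

change = {"approved": True, "has_sbom": False, "freeze_active": False}
violations = evaluate(change)
# A non-empty list blocks the change before it reaches production.
```

Because the policy file is just source code, it gets the same review, versioning, and audit trail as everything else in the repository, which is the point of the scene.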
  17. “She was convinced that now, more than ever, every business was truly a technology business and every business leader was a technology leader.”
  18. Summary: advice from Pragmatic Coders.
    - 1. You’re probably being sold the wrong thing first. Every vendor is pitching “AI Agents” or “AI-ready data lakes” right now. Start with a use-case that already works (e.g., a process automation enhanced with AI).
    - 2. Knowledge Graphs = cheap insurance. If you have any compliance, audit or “explain-this-to-the-board” requirement, bolt a lightweight knowledge graph onto your existing database in a couple of sprints. It’s not sexy, but it de-risks hallucinations and cuts future re-work.
    - 3. AI Engineering is the bill you’ll pay anyway. Whether you pick GenAI or Agents, you’ll still need CI/CD for models, drift detection and rollback. Build that pipeline once, so the next model swap is a one-day job instead of a six-week fire-drill.
    - 4. Responsible AI = faster go-live. Instead of treating “AI TRiSM” as a separate line item, embed guardrails directly into the product: red-team prompts, audit logs and bias tests ship with v1. Result: regulators sign off faster, your legal team sleeps at night, and you still hit the market before the competition.
    - 5. Edge AI is still a waiting game. Unless you’re running robots in a warehouse, park Edge AI for 2027.
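The knowledge-graph advice can be illustrated with a handful of edges: link controls to the regulations they satisfy and the evidence that supports them, then answer audit questions by traversal. A toy sketch with invented node names (a real deployment would sit on a graph database, not Python lists):

```python
from collections import defaultdict

# Edges as (subject, relation, object) triples. All names are illustrative.
TRIPLES = [
    ("peer-review-control", "satisfies", "SOX-change-management"),
    ("peer-review-control", "evidenced_by", "CHG-101-review-log"),
    ("sbom-control", "satisfies", "licensing-policy"),
    ("sbom-control", "evidenced_by", "build-7842-sbom.json"),
]

def build_index(triples):
    """Index triples by (subject, relation) for cheap traversal."""
    index = defaultdict(list)
    for s, rel, o in triples:
        index[(s, rel)].append(o)
    return index

def evidence_for(regulation, triples=TRIPLES):
    """Which evidence artifacts back the controls that satisfy a regulation?"""
    index = build_index(triples)
    controls = [s for s, rel, o in triples if rel == "satisfies" and o == regulation]
    return {c: index[(c, "evidenced_by")] for c in controls}

answer = evidence_for("SOX-change-management")
```

This is the “explain-this-to-the-board” property: the answer to an audit question is a traceable path through explicit edges, which is also what grounds an LLM’s answers against hallucination.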