This talk was presented at AusCERT 2025.
It’s 2028. Your organization runs on AI. You’ve automated most of your workflows, everything is faster, and profits are higher than ever.
But one day, it happens.
One of your AI systems gets compromised.
It’s the first time your AI pipeline has been breached, and you have no playbook for investigating these systems: no idea how to trace the attack or where to start.
This is more likely than you think.
With the widespread adoption of generative AI, attackers are finding new ways to exploit these systems. They are probing for weaknesses, weaponizing the same technology you rely on, and turning it against your organization.
In this talk, I will discuss:
- How breaches involving generative AI can happen, with real-world examples.
- What attackers look for and how they exploit AI systems, with demonstrations of common vulnerabilities and weaknesses.
- A step-by-step methodology for handling AI-specific security incidents, from mapping the assets involved to defining an incident response process for GenAI systems.
- How to apply threat intelligence techniques to investigate and counter these threats, including how we can classify malicious prompts to define new kinds of operating methods.
Businesses are evolving to integrate AI systems, but threat actors are evolving too. Generative AI is becoming a new weapon in their arsenal.
I will show you how threat actors are using GenAI in their attacks and how to detect these activities. We’ll also go through a practical AI incident response framework so you’re ready when—not if—it happens.
This topic is still relatively new and not widely discussed. To truly evolve and thrive, you need the right tools and strategies to face these threats head-on. Let’s tackle this challenge now, before it becomes your next "learning opportunity"!