Slide 1

Slide 1 text

AI and Security: A Double-Edged Sword?
Montréal
Stacy Véronneau - SE Cloud Experts
Yan Bellerose - Google

Slide 2

Slide 2 text

Stacy Véronneau Passionate Cloud Evangelist and Technical Lead, blending my strong business acumen with a deep dive into cloud technologies. My journey in tech is fueled by a love for learning and sharing knowledge, especially in the realms of cloud integration and operations. Whether you're a cloud newbie or a seasoned professional, I'm here to engage in thoughtful discussions, collaborate on exciting projects, and share insights from my journey.

Slide 3

Slide 3 text

Yan Bellerose As a results-oriented technology enthusiast, I excel at bridging the gap between technical expertise, business objectives, and innovation. Known for delivering high-quality work and inspiring others, I possess a natural aptitude for adaptation and continuous learning. This enables me to quickly become effective in dynamic and fast-paced environments such as Google. As a technology passionate, I'm dedicated to staying at the forefront of emerging trends. My experience, ranging from customer engineering to security leadership roles, has equipped me with a deep understanding of the security industry.

Slide 4

Slide 4 text

Montréal
Our Topics For Today
● AI 101
● AI as a Security Threat
● Security in AI
● AI in Security
● The Google Ecosystem
● Conclusion
● Q&A

Slide 5

Slide 5 text

Let’s level set the convo a bit

Slide 6

Slide 6 text

AI 101
AI is powerful, with both positive and negative implications (double-edged), especially in the security realm. But first, let's level set, frame this conversation, and take a few steps back.

Slide 7

Slide 7 text

AI as a Security Threat: The dark side

Slide 8

Slide 8 text

AI as a Security Threat: The Dark Side
Potential risks and challenges posed by AI in security:
● Adversarial attacks on AI systems
● AI-powered cyberattacks
● Deepfakes and misinformation
● Privacy and surveillance issues
● Ethical considerations
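The first risk above, adversarial attacks on AI systems, can be sketched in a few lines: the fast gradient sign method (FGSM) perturbs an input in the direction that increases the model's loss until the decision flips. The toy logistic "malware score" model, its weights, and the sample below are all invented for illustration.

```python
import numpy as np

# Toy logistic-regression "malware score" model (weights are made up).
w = np.array([2.0, -1.0, 0.5, 1.5])
b = -0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Score in (0, 1); values near 1 mean 'malicious'."""
    return sigmoid(w @ x + b)

x = np.array([0.2, 0.4, 0.1, 0.3])  # a sample sitting near the decision boundary
y = 0.0                             # its true label: benign

# FGSM: for the logistic loss, the gradient w.r.t. the input is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w
eps = 0.3
x_adv = x + eps * np.sign(grad_x)   # small step that maximally increases the loss

print(f"clean score: {predict(x):.3f}, adversarial score: {predict(x_adv):.3f}")
# → clean score: 0.500, adversarial score: 0.818
```

A per-feature perturbation of only 0.3 pushes the score well across the decision boundary; against image classifiers the same idea works with perturbations too small to notice.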

Slide 9

Slide 9 text

Security in AI: Protecting the Protectors

Slide 10

Slide 10 text

Security in AI: Protecting the Protectors
Vulnerabilities Galore:
● Data Poisoning
● Model Theft
● Backdoors
● Adversarial Attacks
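A minimal sketch of the first vulnerability, data poisoning, under invented assumptions: a toy 1-nearest-neighbor detector trained on synthetic two-cluster data. An attacker who can flip a fraction of the training labels measurably degrades detection without touching the model itself.

```python
import numpy as np

# Hypothetical demo of data poisoning against a 1-nearest-neighbor detector.
# All data here is synthetic.
rng = np.random.default_rng(42)

# Two well-separated classes: "benign" (0) around -2, "malicious" (1) around +2.
X_train = np.concatenate([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
y_train = np.concatenate([np.zeros(100), np.ones(100)])
X_test = np.concatenate([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y_test = np.concatenate([np.zeros(50), np.ones(50)])

def nn_accuracy(Xtr, ytr):
    # 1-NN: each test point takes the label of its nearest training point.
    d = np.linalg.norm(X_test[:, None, :] - Xtr[None, :, :], axis=2)
    return (ytr[d.argmin(axis=1)] == y_test).mean()

clean_acc = nn_accuracy(X_train, y_train)

# Attacker relabels 40 of the 100 malicious training samples as benign.
y_poisoned = y_train.copy()
flip = rng.choice(np.where(y_train == 1)[0], size=40, replace=False)
y_poisoned[flip] = 0

poisoned_acc = nn_accuracy(X_train, y_poisoned)
print(f"clean accuracy: {clean_acc:.2f}, poisoned accuracy: {poisoned_acc:.2f}")
```

With 40% of the malicious labels flipped, roughly that fraction of malicious test samples now inherit a "benign" label from their nearest neighbor, so accuracy drops from near-perfect to around 80%.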

Slide 11

Slide 11 text

Google's Secure AI Framework (SAIF)
The potential of AI, especially generative AI, is immense. As innovation moves forward, the industry needs security standards for building and deploying AI responsibly. That's why we introduced the Secure AI Framework (SAIF), a conceptual framework to secure AI systems. SAIF is designed to address top-of-mind concerns for security professionals, such as AI/ML model risk management, security, and privacy, helping to ensure that when AI models are implemented, they're secure by default.

Slide 12

Slide 12 text

Risks in stages of the GenAI supply chain
Pipeline stages (diagram): Dataset, Pretrained model(s), ML Framework, Train model, Model hub, Production model, Inference in production

Slide 13

Slide 13 text

Risks in stages of the GenAI supply chain
Pipeline stages (diagram): Dataset, Pretrained model(s), ML Framework, Train model, Model hub, Production model, Inference in production
Risks across these stages: Source tampering, Inject vulnerability, Data Poisoning, Unauthorized Training Data, Compromise model, Model poisoning, Bad model usage, Insecure Integrated Component, Model Reverse Engineering, Denial of ML Service, Rogue actions

Slide 14

Slide 14 text

Risks in stages of the GenAI supply chain, with mitigations
Pipeline stages (diagram): Dataset, Pretrained model(s), ML Framework, Train model, Model hub, Production model, Inference in production
Risks across these stages: Source tampering, Inject vulnerability, Data Poisoning, Unauthorized Training Data, Compromise model, Model poisoning, Bad model usage, Insecure Integrated Component, Model Reverse Engineering, Denial of ML Service, Rogue actions
Mitigations: Trusted source, Library vulnerability analysis, Sensitive Data protection, Network Isolation, WAF & DDoS Protection
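The "trusted source" mitigation in the diagram can be sketched as an integrity check: before loading a model artifact pulled from a hub, verify its SHA-256 digest against a value pinned out-of-band. The file name and pinned digest below are made up for the demo (the pinned value is simply sha256 of the bytes b"test").

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digests, distributed out-of-band (e.g. with the release).
PINNED_DIGESTS = {
    "prod-classifier-v3.bin": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def load_model_bytes(path: Path) -> bytes:
    """Refuse to load any artifact whose digest doesn't match the pinned value."""
    digest = sha256_of(path)
    if digest != PINNED_DIGESTS.get(path.name):
        raise ValueError(f"integrity check failed for {path.name}: {digest}")
    return path.read_bytes()

# Demo: the pinned digest above is sha256(b"test"), so this artifact passes.
p = Path("prod-classifier-v3.bin")
p.write_bytes(b"test")
print(len(load_model_bytes(p)))  # → 4
```

A tampered artifact (any byte changed) produces a different digest and raises before the model ever reaches the serving stack; production systems typically pair this with signed artifacts rather than bare digests.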

Slide 15

Slide 15 text

AI in Security: Enhancing Protection

Slide 16

Slide 16 text

AI in Security: Enhancing Protection
AI is being used to improve security measures like:
● Threat detection and prevention
● Vulnerability assessment
● Incident response
● Fraud detection
● Cybersecurity automation
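As a minimal illustration of the first item, threat detection, a z-score baseline can flag hosts whose failed-login counts deviate sharply from normal. Real systems use far richer models; the host names, counts, and the 3-sigma threshold here are synthetic choices for the sketch.

```python
import statistics

# Baseline of normal daily failed-login counts for the fleet (synthetic).
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

# Today's counts per host (synthetic); db-1 is being brute-forced.
today = {"web-1": 5, "web-2": 7, "db-1": 48, "vpn-1": 6}

# Alert on anything more than 3 standard deviations above the baseline mean.
alerts = {host: (n - mu) / sigma for host, n in today.items()
          if (n - mu) / sigma > 3}
print(alerts)  # only db-1 exceeds the threshold
```

The same shape scales up: replace the hand-rolled baseline with a learned model of per-entity behavior, and the threshold with a calibrated anomaly score.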

Slide 17

Slide 17 text

AI - key use cases

AI core capability: Summarize
Why a specialized LLM is needed: complex & ever-growing security jargon not well-represented in other training sets; helps reduce the noise and increase the signal.
Example opportunities:
● Concisely explain behavior of suspicious scripts
● Summarize relevant & actionable threat intelligence
● SOC cases | Intel reports | Attack paths

AI core capability: Classify
Why a specialized LLM is needed: corrupt, malicious, & vulnerable data/code not represented in other training sets.
Example opportunities:
● Classify malware (PowerShell, VBA, PHP, JavaScript)
● Identify security vulnerabilities in code
● Threat relevance & risk

AI core capability: Generate
Why a specialized LLM is needed: niche languages and data structures in the security domain not well-represented in other training sets.
Example opportunities:
● Generate YARA-L detection
● Generate SOAR playbook
● Security testing

Slide 18

Slide 18 text

The Google Cloud Ecosystem: AI and Security

Slide 19

Slide 19 text

The Google Cloud Ecosystem: AI and Security
● AI Superpowers: AI-powered security for rapid threat detection and response.
● Security Command Center: Panoramic security monitoring with AI. 🔭
● SecOps: AI threat detection that finds hidden attacks.
● VirusTotal: AI-driven malware analysis and extermination. 🐛💥

Slide 20

Slide 20 text

Google Claims World First As AI Finds 0-Day Security Vulnerability
On November 1st, 2024, an AI agent discovered a previously unknown, exploitable, zero-day memory-safety vulnerability in widely used real-world software. It's the first such find, at least to be made public, according to Google's Project Zero and DeepMind, the forces behind Big Sleep, the large-language-model-assisted vulnerability agent that spotted the vulnerability.

Slide 21

Slide 21 text

Demo time!

Slide 22

Slide 22 text

Conclusion and Q&A

Slide 23

Slide 23 text

To harness the benefits of AI while mitigating the risks, it's crucial to:
● Develop robust architectures and resilient AI models that are resistant to adversarial attacks.
● Prioritize ethical considerations in the design and deployment of AI security systems.
● Invest in research and development to stay informed and ready to react to emerging threats and vulnerabilities.
● Foster collaboration between industry, academia, and government to ensure responsible and secure AI innovation in the security domain.
