
Adversarial Examples: Let's fool modern AI systems with Physical Stickers.

A quick and dirty introduction to Adversarial Examples in Deep Learning.

Video: https://youtu.be/BpNET8N4dzQ

Anant Jain

April 29, 2019

Transcript

  1. Adversarial Examples: Let’s fool modern A.I. systems! Anant Jain, Co-founder, CommonLounge.com (Compose Labs). https://commonlounge.com https://index.anantja.in twitter.com/@anant90
  2.–4. Three remarkable things about adversarial examples:
     • Small perturbation: the amount of noise added is imperceptible.
     • High confidence: it is easy to attain high confidence in the incorrect classification.
     • Transferability: the attack doesn’t depend on the specific ConvNet used for the task.
  5. Take a correctly classified image (left image in both columns), and add a tiny distortion (middle) to fool the ConvNet with the resulting image (right).
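The "tiny distortion" above is typically computed from the gradient of the model's loss with respect to the input, as in the fast gradient sign method (FGSM). A minimal NumPy sketch of the idea on a toy logistic-regression classifier; the weights, input, and epsilon below are illustrative assumptions, not values from the talk:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "trained" linear classifier: p(class 1 | x) = sigmoid(w.x + b)
w = np.array([2.0, -3.0])
b = 0.5

# A point the model classifies confidently as class 1.
x = np.array([1.0, 0.2])
p_clean = sigmoid(w @ x + b)          # ~0.87, confident and correct

# FGSM: perturb the input in the direction of the sign of the loss gradient.
# For logistic regression with cross-entropy loss and label y,
# the gradient of the loss w.r.t. the input x is (p - y) * w.
y = 1.0
grad_x = (p_clean - y) * w

eps = 0.4                              # small per-coordinate perturbation budget
x_adv = x + eps * np.sign(grad_x)
p_adv = sigmoid(w @ x_adv + b)         # drops below 0.5: prediction flips

print(p_clean, p_adv)
```

Each coordinate of `x_adv` differs from `x` by at most `eps`, yet the predicted class changes; on high-dimensional image inputs, the same sign trick lets many tiny per-pixel changes add up to a large swing in the logits, which is why the noise can stay imperceptible.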
  6. Adversarial examples can be printed out on normal paper, photographed with a standard-resolution smartphone, and still cause a classifier to, in this case, label a “washer” as a “safe”.
  7.–9. What are the implications of these attacks?
     • Self-driving cars: a patch may make a car think that a Stop sign is a Yield sign.
     • Voice-based personal assistants (e.g. Alexa): transmit sounds that sound like noise to humans but carry specific commands.
     • eBay: sell livestock and other banned items past automated listing filters.
  10. “Hackers Use Little Stickers To Trick Tesla Autopilot Into The Wrong Lane”, Thomas Brewster, Forbes Staff (Cybersecurity), Apr 1, 2019; 125,478 views.
  11. What’s next? “Resistance to adversarial examples and figuring out how to make machine learning secure against an adversary who wants to interfere and control it is one of the most important problems researchers today could solve.” (Ian Goodfellow, inventor of GANs, April 2019)
  12. Thanks for listening to this ⚡talk! Anant Jain, Co-founder, CommonLounge.com (Compose Labs). https://commonlounge.com/pathfinder https://index.anantja.in

CommonLounge.com is an online-learning platform similar to Coursera/Udacity, except our courses are in the form of lists of text-based tutorials, quizzes and step-by-step projects instead of videos.

Check out our Deep Learning Course!