Adversarial examples
• Small perturbation: the amount of noise added is imperceptible.
• High confidence: it was easy to attain high confidence in the incorrect classification.
• Transferability: the attack didn't depend on the specific ConvNet used for the task (see the FGSM sketch below).
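These three properties fall out of a simple gradient-based attack. As a concrete illustration, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) from Goodfellow et al., the classic recipe for crafting such perturbations; the slides show no code, and the model, image, label, and epsilon below are hypothetical placeholders:

    # Minimal FGSM sketch (PyTorch). Assumes `image` is a (1, 3, H, W)
    # tensor with pixel values in [0, 1] and `label` is its true class.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.007):
        # Track gradients with respect to the input pixels, not the weights.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step each pixel by at most epsilon in the direction that
        # increases the loss; the change is imperceptible, yet the model
        # often flips to a wrong class with high confidence.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

Taking only the sign of the gradient caps every pixel's change at epsilon (an L-infinity constraint), which is why the noise stays invisible; and because different ConvNets trained on similar data learn similar decision boundaries, the same perturbation frequently transfers between models.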
Adversarial examples can even be printed out on normal paper and photographed with a standard-resolution smartphone, and still cause a classifier to, in this case, label a "washer" as a "safe".
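Why does the attack survive printing and re-photographing? Because the perturbation can be optimized to hold up under the transformations a camera pipeline introduces. Below is a hedged sketch of that idea, in the spirit of Expectation Over Transformation (Athalye et al.): average the loss over random transformations before taking the FGSM step. The transform choices and names here are illustrative assumptions, not the original authors' exact method:

    # Sketch: a transformation-robust variant of the FGSM step above.
    # Averaging gradients over random crops and lighting shifts favors
    # perturbations that keep working after printing and re-photographing.
    import torch
    import torch.nn.functional as F
    import torchvision.transforms as T

    random_transform = T.Compose([
        T.RandomResizedCrop(224, scale=(0.8, 1.0)),  # viewpoint/scale changes
        T.ColorJitter(brightness=0.2),               # lighting changes
    ])

    def robust_fgsm(model, image, label, epsilon=0.03, samples=8):
        image = image.clone().detach().requires_grad_(True)
        loss = sum(F.cross_entropy(model(random_transform(image)), label)
                   for _ in range(samples)) / samples
        loss.backward()
        return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()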
Why should we care about these attacks?
• Self-driving cars: a patch may make a car think that a Stop sign is a Yield sign.
• Voice-based personal assistants (e.g. Alexa): transmit sounds that register as mere noise to humans but carry specific commands.
• E-commerce (e.g. eBay): evade listing filters to sell livestock and other banned goods.
[Forbes headline, 2019] "Hackers Use Little Stickers To Trick Tesla Autopilot Into The Wrong Lane", by Thomas Brewster, Forbes Staff
"…figuring out how to make machine learning secure against an adversary who wants to interfere and control it is one of the most important problems researchers today could solve." — Ian Goodfellow, inventor of GANs, April 2019

What's next?
⚡talk!

Anant Jain
Co-founder, CommonLounge.com (Compose Labs)
https://commonlounge.com/pathfinder
https://index.anantja.in

CommonLounge.com is an online-learning platform similar to Coursera/Udacity, except our courses are in the form of lists of text-based tutorials, quizzes, and step-by-step projects instead of videos. Check out our Deep Learning Course!