
Mutually Assured Destruction and the Impending AI Apocalypse


USENIX Workshop on Offensive Technologies 2018
Opening Keynote
Baltimore, Maryland
13 August 2018

The history of security includes a long series of arms races, where a new technology emerges and is subsequently developed and exploited by both defenders and attackers. Over the past few years, "Artificial Intelligence" has re-emerged as a potentially transformative technology, and deep learning in particular has produced a barrage of amazing results. We are in the very early stages of understanding the potential of this technology in security, but more worryingly, seeing how it may be exploited by malicious individuals and powerful organizations. In this talk, I'll look at what lessons might be learned from previous security arms races, consider how asymmetries in AI may be exploited by attackers and defenders, touch on some recent work in adversarial machine learning, and hopefully help progress-loving Luddites figure out how to survive in a world overrun by AI doppelgängers, GAN gangs, and gibbon-impersonating pandas.

David Evans is a Professor of Computer Science at the University of Virginia where he leads the Security Research Group. He is the author of an open computer science textbook and a children's book on combinatorics and computability. He won the Outstanding Faculty Award from the State Council of Higher Education for Virginia, and was Program Co-Chair for the 24th ACM Conference on Computer and Communications Security (CCS 2017) and the 30th (2009) and 31st (2010) IEEE Symposia on Security and Privacy. He has SB, SM and PhD degrees in Computer Science from MIT and has been a faculty member at the University of Virginia since 1999.

David Evans

August 13, 2018


Transcript

  1. Mutually Assured Destruction and the Impending AI Apocalypse David Evans

    University of Virginia evadeML.org USENIX Workshop on Offensive Technologies 13 August 2018 Baltimore, MD
  2. AI Arms Races and How to End Them David Evans

    University of Virginia evadeML.org USENIX Workshop on Offensive Technologies 13 August 2018 Baltimore, MD
  3. Plan for Talk 1. What is AI? Definitions 2. What

    should (and shouldn’t) we be afraid of? Harmful use of AI 3. What can we learn from previous arms races? Evasive malware 4. What (if anything) can we do? 3
  4. Cognitive Task (Human vs. Machine, 2018): Adding 4-digit numbers ✓, Adding 5-digit numbers ✓, ... Adding 8923-digit numbers ✓, Spelling ✓, Sorting alphabetically ✓, Sorting numerically ✓, Factoring big numbers ✓, Playing chess ✓, Playing poker ✓, Playing Go ✓, Face recognition ✓
  5. Cognitive Task (Human vs. Machine, 2018): Adding 4-digit numbers ✓, Adding 5-digit numbers ✓, ... Adding 8923-digit numbers ✓, Spelling ✓, Sorting alphabetically ✓, Sorting numerically ✓, Factoring big numbers ✓, Playing chess ✓, Playing poker ✓, Playing Go ✓, Face recognition ✓, Giving talks at WOOT ?
  6. 16

  7. 17

  8. More Ambition 18 “The human race will have a new

    kind of instrument which will increase the power of the mind much more than optical lenses strengthen the eyes and which will be as far superior to microscopes or telescopes as reason is superior to sight.”
  9. More Ambition 19 “The human race will have a new

    kind of instrument which will increase the power of the mind much more than optical lenses strengthen the eyes and which will be as far superior to microscopes or telescopes as reason is superior to sight.” Gottfried Wilhelm Leibniz (1679)
  10. Gottfried Wilhelm Leibniz (Universität Altdorf, 1666) who advised: Jacob Bernoulli (Universität Basel, 1684) who advised: Johann Bernoulli (Universität Basel, 1694) who advised: Leonhard Euler (Universität Basel, 1726) who advised: Joseph Louis Lagrange who advised: Siméon Denis Poisson who advised: Michel Chasles (École Polytechnique, 1814) who advised: H. A. (Hubert Anson) Newton (Yale, 1850) who advised: E. H. Moore (Yale, 1885) who advised: Oswald Veblen (U. of Chicago, 1903) who advised: Philip Franklin (Princeton, 1921) who advised: Alan Perlis (MIT Math PhD 1950) who advised: Jerry Feldman (CMU Math 1966) who advised: Jim Horning (Stanford CS PhD 1969) who advised: John Guttag (U. of Toronto CS PhD 1975) who advised: David Evans (MIT CS PhD 2000): my academic great-great-great-great-great-great-great-great-great-great-great-great-great-great-great-grandparent!
  11. More Precision 21 “The human race will have a new

    kind of instrument which will increase the power of the mind much more than optical lenses strengthen the eyes and which will be as far superior to microscopes or telescopes as reason is superior to sight.” Gottfried Wilhelm Leibniz (1679) Normal computing amplifies (quadrillions of times faster) and aggregates (enables millions of humans to work together) human cognitive abilities; AI goes beyond what humans can do.
  12. The history of computer chess is the history of artificial intelligence. After their disappointments in trying to reverse-engineer the brain, computer scientists narrowed their sights. Abandoning their pursuit of human-like intelligence, they began to concentrate on accomplishing sophisticated, but limited, analytical tasks by capitalizing on the inhuman speed of the modern computer’s calculations. This less ambitious but more pragmatic approach has paid off in areas ranging from medical diagnosis to self-driving cars. Computers are replicating the results of human thought without replicating thought itself. Nicholas Carr, “A Brutal Intelligence: AI, Chess, and the Human Mind”, 2017
  13. 24

  14. Operational Definition: “Artificial Intelligence” means making computers do things their programmers don’t understand well enough to program explicitly. If it is explainable, it’s not AI!
  15. Plan for Talk 1. What is AI? Definitions 2. What

    should (and shouldn’t) we be afraid of? Harmful use of AI 3. What can we learn from previous arms races? Evasive malware 4. What (if anything) can we do? 27
  16. Harmful AI Benign developers and operators AI out of control

    AI inadvertently causes harm Malicious operators Build AI to do harm 30
  17. Harmful AI Benign developers and operators AI out of control

    AI inadvertently causes harm Malicious operators Build AI to do harm 31
  18. Harmful AI Benign developers and operators AI out of control

    AI inadvertently causes harm to humanity Malicious operators Build AI to do harm 34
  19. “Human Jobs of the Future: On Robots”, Joe Berger and Pascal Wyse (The Guardian, 21 July 2018)
  20. Harmful AI Benign developers AI out of control AI causes

    harm (without creators objecting) Malicious developers Using AI to do harm 40 Malice is (often) in the eye of the beholder (e.g., mass surveillance, pop-up ads, etc.)
  21. 41 “The future has arrived — it’s just not evenly

    distributed yet.” (William Gibson, 1990s) Photo: Christopher J. Morris/Corbis
  22. “The future has arrived — it’s just not evenly distributed yet.” (William Gibson, 1990s) Expanding victims: attacks that are only cost-effective against high-value, easy-to-compromise targets become cost-effective against everyone. Expanding adversaries: attacks only available to nation-state-level adversaries become accessible to everyone.
  23. Malicious Uses of AI 43 Malware Automated Vulnerability Finding, Exploit

    Generation Social Engineering Mass-market Spear Phishing Fake content generation Virtual-physical attacks
  24. 45

  25. Strategy 2: Build Less Vulnerable Systems (Rust, Project Everest). We actually know how to build much less vulnerable software; it just costs too much for everyday use.
  26. Malicious Uses of AI 48 Malware Automated Vulnerability Finding, Exploit

    Generation Social Engineering Mass-market Spear Phishing Fake content generation Virtual-physical attacks
  27. WEIS 2012. Automated, low cost: sending out the initial scam email. Human, high effort: conversing with potential victims. What happens when the conversing-with-potential-victims part is automated too?
  28. Automated Spear Phishing 50 “It’s slightly less effective [than manually

    generated] but it’s dramatically more efficient” (John Seymour)
  29. Asymmetry of Automated Spear Phishing 51 AI Classifier “99.9% accurate”

    AI Spear Phishing Generator + Botnet ... Victim
  30. Malicious Uses of AI 53 Malware Automated Vulnerability Finding, Exploit

    Generation Social Engineering Mass-market Spear Phishing Fake content generation Virtual-physical attacks
  31. Detection-Generation Arms Race: Forgery Technique → Detection Classifier → Forgery Technique → Detection Classifier → ... If you know the forgery technique, detection (by machines) has the advantage.
  32. Plan for Talk 1. What is AI? Definitions 2. What

    should (and shouldn’t) we be afraid of? Harmful use of AI 3. What can we learn from previous arms races? Evasive malware 4. What (if anything) can we do? 57
  33. Trojan Horse Arms Race: “Or do you think any Greek gift’s free of treachery? Is that Ulysses’s reputation? Either there are Greeks in hiding, concealed by the wood, or it’s been built as a machine to use against our walls, or spy on our homes, or fall on the city from above, or it hides some other trick: Trojans, don’t trust this horse. Whatever it is, I’m afraid of Greeks, even those bearing gifts.” Virgil, The Aeneid (Book II)
  34. Training (supervised learning): Labelled Training Data → Feature Extraction → Vectors → ML Algorithm → Trained Classifier. Deployment: Operational Data → Trained Classifier → Malicious / Benign. Assumption: the training data is representative.
  35. PDF Malware Classifiers. PDFrate [ACSAC 2012]: Random Forest, manual features (object counts, lengths, positions, ...). Hidost13 [NDSS 2013]: Support Vector Machine; Hidost16 [JIS 2016]: Random Forest; automated features (object structural paths). Very robust against the “strongest conceivable mimicry attack”.
  36. Adversarial Examples across Domains:
      Domain | Classifier Space | “Reality” Space
      Trojan Wars | Judgment of Trojans: f(x) = “gift” | Physical reality: f*(x) = invading army
      Malware | Malware detector: f(x) = “benign” | Victim’s execution: f*(x) = malicious behavior
      Image classification, detection (next 2 talks!) | DNN classifier: f(x) = y | Human perception: f*(x) = y′
  37. “Oracle” Definition. Given a seed sample x, x′ is an adversarial example iff: f(x′) = t (the class is t; for malware, t = “benign”) and ℬ(x′) = ℬ(x) (the behavior we care about is the same). Malware: the evasive variant preserves the malicious behavior of the seed, but is classified as benign. No requirement that x ~ x′ except through ℬ.
  38. Finding Evasive Malware. Given a seed sample x, x′ is an adversarial example iff: f(x′) = t (the class is t; for malware, t = “benign”) and ℬ(x′) = ℬ(x) (the behavior we care about is the same). Generic attack: heuristically explore the input space for an x′ that satisfies the definition.
  39. Evolutionary Search (Weilin Xu, Yanjun Qi). [Diagram: clone the malicious PDF, mutate (mutant generation), score the variants against the oracle and classifier (fitness selection), select promising variants, and repeat until an evasive variant is found]
  40. Generating Variants. [Evolutionary search diagram: clone the malicious PDF, mutate, select promising variants, repeat until evasive]
  41. Generating Variants. [Evolutionary search diagram repeated]
  42. Generating Variants. Select a random node of the PDF’s object tree; randomly transform it: delete, insert, replace. [Diagram: object tree /Root → /Catalog → /Pages, with a /JavaScript eval(‘…’) payload]
  43. Generating Variants. Select a random node; randomly transform it: delete, insert, or replace, drawing replacement nodes from benign PDFs. [Diagram: object tree with nodes harvested from benign PDFs]
  44. Selecting Promising Variants. [Evolutionary search diagram; focus on the fitness selection step]
  45. Selecting Promising Variants. Fitness function: each candidate variant is scored by the oracle and the target classifier, f(s_oracle, s_classifier) → score. [Diagram: candidate variant’s object tree fed to both oracle and target classifier]
  46. Oracle: ℬ(x′) = ℬ(x)? Execute the candidate in a vulnerable Adobe Reader in a virtual environment (Cuckoo Sandbox, https://github.com/cuckoosandbox; simulated network: INetSim). Behavioral signature: malicious if the signature matches (HTTP_URL + HOST extracted from API traces).
  47. Fitness Function (assumes lost malicious behavior will not be recovered):
      fitness(x′) = 1 − classifier_score(x′)   if ℬ(x′) = ℬ(x)
      fitness(x′) = −∞                         otherwise
  48. [Chart: seeds evaded (out of 500) vs. number of mutations, for PDFrate and Hidost]
  49. [Chart: seeds evaded (out of 500) vs. number of mutations, for PDFrate and Hidost] Simple transformations often worked.
  50. [Chart: seeds evaded (out of 500) vs. number of mutations, for PDFrate and Hidost] (insert, /Root/Pages/Kids, 3:/Root/Pages/Kids/4/Kids/5/) works on 162/500 seeds.
  51. [Chart: seeds evaded (out of 500) vs. number of mutations, for PDFrate and Hidost] Some seeds required complex transformations.
  52. Evading PDFrate. [Chart: classification score for each malware seed (sorted by original score): original malicious seeds vs. discovered evasive variants, with the malicious-label threshold marked]
  53. Adjust threshold? [Chart: classification score for each malware seed: original malicious seeds vs. discovered evasive variants, malicious-label threshold marked] Charles Smutz, Angelos Stavrou. When a Tree Falls: Using Diversity in Ensemble Classifiers to Identify Evasion in Malware Detectors. NDSS 2016.
  54. Adjust threshold? [Chart: classification score for each malware seed: variants found with threshold = 0.25 and with threshold = 0.50]
  55. Hide the Classifier Score? [Evolutionary search diagram: the fitness function uses the target classifier’s score]
  56. Binary Classifier Output is Enough. [Evolutionary search diagram: fitness uses only the malicious/benign label] (ACM CCS 2017)
  57. Training (supervised learning): Labelled Training Data → Feature Extraction → Vectors → ML Algorithm → Trained Classifier. Deployment: Operational Data → Trained Classifier → Malicious / Benign. Retrain classifier.
  58. [Chart: seeds evaded (out of 500) vs. generations, Hidost16] Original classifier: takes 614 generations to evade all seeds.
  59. [Chart: seeds evaded (out of 500) vs. generations: Hidost16 and retrained HidostR1]
  60. [Chart: seeds evaded (out of 500) vs. generations: Hidost16 and retrained HidostR1]
  61. [Chart: seeds evaded (out of 500) vs. generations: Hidost16, HidostR1, HidostR2]
  62. [Chart: seeds evaded (out of 500) vs. generations: Hidost16, HidostR1, HidostR2]
  63. [Chart: seeds evaded (out of 500) vs. generations: Hidost16, HidostR1, HidostR2] False positive rates (Genome / Contagio Benign): Hidost16 0.00 / 0.00; HidostR1 0.78 / 0.30; HidostR2 0.85 / 0.53.
  64. Only 8/6987 features are robust (Hidost): /Names, /Names/JavaScript, /Names/JavaScript/Names, /Names/JavaScript/JS, /OpenAction, /OpenAction/JS, /OpenAction/S, /Pages. A classifier restricted to robust features has high false positives.
  65. AI Arms Races. AI-based defenses are at best temporary. “Artificial Intelligence” means making computers do things their programmers don’t understand well enough to program explicitly. They can be effective against current adversaries, but asymmetries benefit attackers: a motivated adversary with any access to the defense can learn to thwart it.
  66. AI Arms Races. AI-based defenses are at best temporary. “Artificial Intelligence” means making computers do things their programmers don’t understand well enough to program explicitly. They can be effective against current adversaries, but asymmetries benefit attackers: a motivated adversary with any access to the defense can learn to thwart it. Defenses can only work reliably if we use robust features that are strong signals; but then, we don’t need AI!
  67. Plan for Talk 1. What is AI? Definitions 2. What

    should (and shouldn’t) we be afraid of? Harmful use of AI 3. What can we learn from previous arms races? Evasive malware 4. What (if anything) can we do? 101
  68. 107