
Big Pinocchio - Subversion and Degradation of Big Data Systems

Daniel Jacob Bilar

February 04, 2015

Transcript

  1. Big Pinocchio: Subversion and Degradation of ‘Big Data’ Systems. Daniel Bilar, [email protected]. Suits & Spooks DC, The Ritz-Carlton, Arlington, VA, 4 February 2015.
  2. Talk Roadmap

Status Quo Analytics: Data-driven systems are woven silently into everyday life, with data living in high-dimensional spaces (hundreds to thousands of dimensions). Examples: ads, Netflix, pre-trial discovery, teaching, pre-crime, trading, copy-writing, unmanned systems, corporate, IC, traffic, the electricity grid. Security issues are largely ignored and barely studied.

Decision System Security Issues:
- Many pitfalls in high dimensions: data lives in the ‘corners’ of feature space (hypercube), not the center (hypersphere); distances → 0 and dissimilarity measures become meaningless.
- Opaqueness: what exactly algorithms learn and ‘understand’ from the data, as well as their decision ‘junctures’, remains quite mysterious.
- Fragility: the algorithmic underbelly under the hood is susceptible to attacks.

Big Pinocchio Definition: leverage vulnerabilities in/of/through ‘Big Data’ systems. The vulnerabilities run through the whole pipeline: input data → learning (features, algorithms) → dependent decisions. A minimal sketch of the distance-concentration pitfall follows below.
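A minimal numpy sketch (an illustration, not from the deck) of the distance-concentration pitfall: in a unit hypercube, as dimension grows, the gap between the nearest and farthest neighbor shrinks relative to the nearest distance, so distance-based dissimilarity loses discriminating power.

```python
import numpy as np

rng = np.random.default_rng(0)

# Relative contrast (d_max - d_min) / d_min between a query point and the
# rest of a uniform sample; it collapses toward 0 as dimension d grows.
for d in (2, 10, 100, 1000, 10000):
    points = rng.uniform(size=(500, d))            # data in the unit hypercube
    dists = np.linalg.norm(points[1:] - points[0], axis=1)
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:6d}  relative contrast = {contrast:.3f}")
```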
  3. Main Ideas of ‘Big Pinocchio’

Figure: Deep neural networks (DNNs) learn hierarchical layers of representation from data for pattern recognition. State-of-the-art DNNs recognize natural images (left panel) but are easily fooled into declaring with near-certainty that nonsense images are familiar objects (right panel).

Take away:
- Pitfalls of high dimensions: data in ‘corners’; distances → 0, dissimilarity meaningless. Human intuition and experience fail; adding very small effects produces strange results.
- Inherent high-dimensional vulnerability due to the linear behavior of the employed models.
- Inherent tradeoff: “Models easy to design are easy to perturb.”
  4. Anti-Virus as Decision Systems

Bad: Empirical AV Results

Report Date | AV Signature Update | MW Corpus Date | False Negative (%)
2011/05 | Feb. 22nd | Feb. 23rd - Mar. 3rd | [39-77]
2011/02 | Feb. 22nd | Feb. 10th | [0.2-15.6]
2010/11 | Aug. 16th | Aug. 17th - 24th | [38-63]
2010/08 | Aug. 16th | Aug. 6th | [0.2-19.1]
2010/05 | Feb. 10th | Feb. 11th - 18th | [37-89]
2010/02 | Feb. 10th | Feb. 3rd | [0.4-19.2]
2009/11 | Aug. 10th | Aug. 11th - 17th | [26-68]
2009/08 | Aug. 10th | Aug. 10th | [0.2-15.2]
2009/05 | Feb. 9th | Feb. 9th - 16th | [31-86]
2009/02 | Feb. 9th | Feb. 1st | [0.2-15.1]
2008/11 | Aug. 4th | Aug. 4th - 11th | [29-81]
2008/08 | Aug. 4th | Aug. 1st | [0.4-13.5]
2008/05 | Feb. 4th | Feb. 5th - 12th | [26-94]
2008/02 | Feb. 4th | Feb. 2nd | [0.2-12.3]

Table: Empirical miss rates for 9-16 well-known AV products. After signature updates were frozen for one week, the best AV missed 30-40% of new malware; the worst missed 65-77%.

Worse: Theoretical Findings. Detection of interactive malware is at least in the oracle complexity class NP^(NP^NP) [EF05, JF08]. Blacklisting is a dead end: modeling polymorphic shellcode is infeasible [YSS07].
  5. Callgraph: Control Flow of Malware

Decision System of Programs: Callgraph (+ data). The call graph is the relationship graph of function calls; control flow in a program is represented by a ‘path’ through the call graph, with many decision points during execution.

Goal: compare the ‘graph structure’ of unknown binaries across non-malicious software and malware classes.

Main Result (2007) [Bil07]: malware tends to have a lower basic block count, implying simpler functionality: limited goals and interaction → fewer branches. Idea: leverage malware’s simpler decision structure to ‘outplay’ it (a toy illustration of such a structural comparison follows below).

R&D 2008-current: “Autonomous Baiting, Control and Deception of Adversarial Cyberspace Participants” [SB11]
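A hypothetical networkx sketch of the kind of structural comparison [Bil07] performs; the graphs, names, and metrics here are illustrative assumptions, not the paper's corpus or code.

```python
import networkx as nx

def structure_stats(callgraph: nx.DiGraph) -> dict:
    """Coarse structural statistics of a program's call graph."""
    n, e = callgraph.number_of_nodes(), callgraph.number_of_edges()
    return {"functions": n, "calls": e, "avg_out_degree": e / max(n, 1)}

# Toy stand-ins: 'goodware' with rich branching versus malware with a
# sparse, more linear call structure (illustrative data, not real binaries).
goodware = nx.gnp_random_graph(200, 0.05, seed=1, directed=True)
malware = nx.path_graph(40, create_using=nx.DiGraph)

print("goodware:", structure_stats(goodware))
print("malware: ", structure_stats(malware))
```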
  6. Example: Hacking Traffic Decision Systems with Bad Data

Figure: Sensys Networks wireless sensors, repeaters, and access points. No encryption: wireless comms are in the clear, and firmware updates are neither signed nor encrypted. Picture [Cer14]
  7. Fake Input Data, Affect Decisions: Traffic Decision System Subversion

Figure: Traffic analytics/control software based on these sensors (50,000+ deployed world-wide). Attacks include bricking, denial of service, and fake traffic data. Picture [Cer14]
  8. Human vs DNN Interpretation [SZS+13]

Figure: Inducing imperceptibly small perturbations to a correctly classified input image so that it is no longer classified correctly. The result highlights the difference between how DNNs and humans recognize objects. Picture [SZS+13]
  9. DNN ‘Snowcrash’ [NYC14]

Figure: Unrecognizable directly encoded images that state-of-the-art DNNs believe with 99.6% certainty to be familiar objects. DNNs are used in applications such as self-driving cars. Picture [NYC14]
  10. Peeking Under the DNN Kimono [NYC14]

Figure: DNN ‘archetypes’. Unrecognizable indirectly encoded images that state-of-the-art DNNs believe with 99.6% certainty to be familiar objects. The result again highlights differences between how DNNs and humans recognize objects. Picture [NYC14]
  11. Big Pinocchio via Linear Perturbation

Figure: Classification change from ‘panda’ to ‘gibbon’ by adding an imperceptibly small vector whose elements equal the sign of the elements of the gradient of the cost function with respect to the input. Picture [GSS14]

“Accidental steganography” modus operandi: the linear model is forced to attend to the signal most closely aligned with its weights, even in the presence of other signals with higher amplitude. A toy version of this fast-gradient-sign construction is sketched below.
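A minimal numpy sketch of the fast gradient sign method from [GSS14], on a toy logistic model so the input gradient is analytic; the model, data, and ε are assumptions for illustration, not the paper's ImageNet setup.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier: p(y=1|x) = sigmoid(w . x)
d = 1000
w = rng.normal(size=d) / np.sqrt(d)

x = rng.normal(size=d)
if w @ x < 0:
    x = -x                    # ensure the clean input is correctly classified
y = 1.0                       # its true label

# Gradient of the logistic loss -log p(y|x) with respect to the input x
grad_x = (sigmoid(w @ x) - y) * w

# Fast gradient sign method: max-norm step of size eps in the sign direction
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

print("clean       p(y=1):", sigmoid(w @ x))
print("adversarial p(y=1):", sigmoid(w @ x_adv))
# Each coordinate moves by at most eps, yet the logit shifts by eps*||w||_1,
# which grows with dimension d: tiny effects adding up, as the slide says.
```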
  12. Perturbation Effects in (Mis-)Classification Space

Figure: (1) Applying an imperceptible perturbation to a correctly classified natural image (blue dot) yields an image (square) that a DNN classifies as an entirely different class (“crossing the decision boundary”). (2) It is possible to generate high-confidence images (pentagon, I0) starting from a random or blank image; they do not look like images in the training set. (3) It is possible to generate high-confidence, regular images (triangles, G0) with discriminative features for a class, yet still far from the training set. Picture [NYC14]. A toy sketch of effect (2) follows below.
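A toy numpy sketch of effect (2): gradient ascent on the input, starting from near-blank noise, until a class score is nearly saturated. This uses a linear surrogate model of my own; [NYC14] do this against deep networks with evolutionary and gradient methods.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

d = 1000
w = rng.normal(size=d) / np.sqrt(d)     # toy 'class score' direction

x = 0.01 * rng.normal(size=d)           # start from a near-blank 'image'
for _ in range(500):
    p = sigmoid(w @ x)
    x += (1.0 - p) * w                  # ascend log p(class | x)

# High confidence for the class, yet x is structured noise far from any
# natural training example (the pentagon I0 in the figure).
print("confidence:", sigmoid(w @ x))
```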
  13. Effects of Decision System Subversions

“Degradation and subversion through subsystem attacks” [Bil10]

Power Grid: Load balancing in electricity grids relies on accurate state estimation. Data integrity attacks on a chosen subset of sensors make these estimates unreliable, which can push such feedback systems into an unstable state (Enron did this in 2000 to manipulate spot prices). A sketch of such an attack follows below.

Democracy: Voting systems assume honest participants who vote their actual preferences, and voters expect the system to reflect those preferences as faithfully as possible. In elections with more than two candidates, the ranking decision system can be subverted by strategic voting; given the preferences, it is possible to design a seemingly democratic voting procedure that ensures a desired candidate wins (voting theorist Donald Saari).

Financial Exchange: High-frequency trading algorithms subvert the analytic decision systems of other participants by faking data and destabilizing the pricing environment, profiting from artificially created volatility (Nanex, Bodek).
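A minimal numpy sketch (my illustration, not from [Bil10]) of why sensor integrity is critical to state estimation: an injection crafted as a = H·c shifts the least-squares state estimate by exactly c while leaving the residual, the usual bad-data alarm, unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)

# DC state estimation: measurements z = H x + noise, estimated by least squares
n_states, n_meas = 3, 8
H = rng.normal(size=(n_meas, n_states))
x_true = rng.normal(size=n_states)
z = H @ x_true + 0.01 * rng.normal(size=n_meas)

def estimate(z):
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return x_hat, np.linalg.norm(z - H @ x_hat)   # estimate and residual

# Stealthy false data injection: adding H @ c to the measurements biases
# the estimate by exactly c, but the residual (bad-data check) is unchanged.
c = np.array([1.0, 0.0, 0.0])                     # attacker-chosen state bias
x_clean, r_clean = estimate(z)
x_attacked, r_attacked = estimate(z + H @ c)

print("clean estimate   :", x_clean, " residual:", round(r_clean, 4))
print("attacked estimate:", x_attacked, " residual:", round(r_attacked, 4))
```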
  14. Interaction Effects of Individual Decisions

Collective Behavior of Interacting Agents. Beginnings: Bell Labs ‘Core War’ (1960s), Conway’s ‘Game of Life’ (1970s). Yesterday: Flash Crash 2010, billions of USD evaporated in a fraction of a second. Today: thousands of mini flash crashes every week; HFT shenanigans and collusion schemes finally being investigated by the NY AG.

“Rise of the Machines” [Joh13]: phenomenological ‘signatures’ of automated black-box algorithmic trading. The all-machine time regime is characterized by frequent ‘black swan’ events with ultrafast durations.

Aggregate Decision Systems with Reflexivity: collective behavior is unpredictable; no useful security guarantees about the dynamics are possible.

Figure: HFT “painting the tape”, the illegal practice of creating fictitious activity in a stock: 70,000+ meaningless bids/offers blasted in 47 seconds. Picture from Nanex
  15. Big Data Quandary: Learning/Deciding in High-Dimensional Space

Figure: Two-player adversarial non-zero-sum game with reinforcement learning strategies. α is memory loss (0 ≈ all steps remembered, 1 ≈ no memory). Γ is deviation from a zero-sum game (-1 ≈ zero-sum, 0 ≈ uncorrelated payoffs, 1 ≈ identical payoffs). β is intensity of choice (0 ≈ all moves equally likely, large ≈ strongly preferential moves). α ≈ 0 corresponds to replicator dynamics.

Evaluating the ‘Best’ Decision in High-Dimensional Space is Different: the dimensional vastness of the solution space overwhelms “rational learning” algorithms, making them effectively no better than random meanderings. A sketch of these learning dynamics follows below.
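A hedged numpy reconstruction of the learning dynamics from the caption's definitions (the payoff matrices, update rule, and parameter values are my assumptions, not the original model's code): two players with payoff correlation Γ choose among N moves via a logit rule with intensity β, and update exponentially discounted attractions with memory-loss rate α.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 20                                   # moves per player
alpha, beta, gamma = 0.1, 5.0, -0.5      # memory loss, choice intensity, payoff corr.

# Payoff matrices with corr(A[i, j], B[j, i]) = gamma
# (gamma = -1 ~ zero-sum, 0 ~ uncorrelated, 1 ~ identical payoffs)
A = rng.normal(size=(N, N))
B = gamma * A.T + np.sqrt(1.0 - gamma**2) * rng.normal(size=(N, N))

def logit(Q):
    """Choice probabilities with intensity of choice beta."""
    e = np.exp(beta * (Q - Q.max()))
    return e / e.sum()

Qa, Qb = np.zeros(N), np.zeros(N)        # attractions of each move
for _ in range(5000):
    pa, pb = logit(Qa), logit(Qb)
    i, j = rng.choice(N, p=pa), rng.choice(N, p=pb)
    # Exponentially discounted update: small alpha ~ long memory over all steps
    Qa *= 1.0 - alpha; Qa[i] += alpha * A[i, j]
    Qb *= 1.0 - alpha; Qb[j] += alpha * B[j, i]

# Entropy of the final mixed strategy: low = locked-in play,
# high = wandering, near-random play in the vast move space.
pa = logit(Qa)
print("strategy entropy:", float(-(pa * np.log(pa + 1e-12)).sum()))
```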
  16. Machine Learning as Potemkin Village

Quote from [GSS14]: “[C]lassifiers based on modern machine learning techniques, even those that obtain excellent performance on the test set, are not learning the true underlying concepts that determine the correct output label. Instead, these algorithms have built a Potemkin village that works well on naturally occurring data, but is exposed as a fake when one visits points in space that do not have high probability in the data distribution.”

Figure: The example on the left is recognized correctly as a car; the image in the middle is not recognized. The rightmost image is the magnified absolute value of the difference between the two images. Picture from [SZS+13]
  17. Epilogue

Take away:
- Pitfalls of high dimensions: data in ‘corners’; distances → 0, dissimilarity meaningless. Human intuition and experience fail; adding very small effects produces strange results.
- Inherent high-dimensional vulnerability due to the linear behavior of the employed models.
- Inherent tradeoff: “Models easy to design are easy to perturb.”

Short-term fixes and longer-term solutions (a sketch of fix 2 follows below):
1. ‘What-If’ & Simulations: generate adversarial examples for systematic robustness evaluation. Not a fix, but a trustworthiness score.
2. AV equivalent: inoculation with generated examples. Barely a fix.
3. Decision System Audit: white-box, GAAP-like audit. Legal CYA.
4. Robust-by-design: the desired solution. Fundamental tension between easy-to-train linear models and nonlinear models that are more resistant to adversarial perturbation.
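A minimal sketch of fix 2, inoculation (adversarial training), reusing the toy logistic setup from the earlier FGSM sketch; the data, ε, and learning rate are illustrative assumptions, not a production defense.

```python
import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy two-class data in d dimensions
d, n = 200, 1000
y = rng.integers(0, 2, size=n).astype(float)
X = rng.normal(size=(n, d)) + 0.2 * y[:, None]

w = np.zeros(d)
eps, lr = 0.05, 0.5
for _ in range(100):
    # Inoculation: craft FGSM copies against the current model ...
    p = sigmoid(X @ w)
    X_adv = X + eps * np.sign((p - y)[:, None] * w)
    # ... and take logistic-loss gradient steps on clean + adversarial batches
    for Xb in (X, X_adv):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / n

# Robustness check: fresh FGSM perturbations against the trained model
# (evaluated on the training data, for brevity)
p = sigmoid(X @ w)
X_test = X + eps * np.sign((p - y)[:, None] * w)
print("accuracy under attack:", float(((sigmoid(X_test @ w) > 0.5) == y).mean()))
```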
  18. Thank You

How Scientists Relax: infrared spectroscopy on a vexing problem of our times, truly comparing apples and oranges.

Thank you for your time and the consideration of these ideas. I appreciate being at Suits & Spooks at the Ritz-Carlton in Arlington.

Figure: A spectrographic analysis of ground, desiccated samples of a Granny Smith apple and a Sunkist navel orange. Picture from [San95]
  19. References I

[BD06] Mike Bond and George Danezis, A pact with the devil, NSPW, ACM, 2006, pp. 77-82.
[Bil07] Daniel Bilar, On callgraphs and generative mechanisms, Journal in Computer Virology 3 (2007), no. 4.
[Bil09] Daniel Bilar, On nth-order attacks, The Virtual Battlefield: Perspectives on Cyber Warfare (Christian Czosseck and Kenneth Geers, eds.), IOS Press, 2009, pp. 262-281.
[Bil10] Daniel Bilar, Degradation and subversion through subsystem attacks, IEEE Security & Privacy 8 (2010), no. 4, pp. 70-73.
[CD00] Jean Carlson and John Doyle, Highly Optimized Tolerance: Robustness and design in complex systems, Physical Review Letters 84 (2000), no. 11, 2529+.
[Cer14] Cesar Cerrudo, Hacking US traffic control systems, DEF CON 22, 2014.
[CSN07] Aaron Clauset, Cosma R. Shalizi, and Mark Newman, Power-law distributions in empirical data, SIAM Review (2007).
[EF05] Éric Filiol, Computer Viruses: From Theory to Applications, Springer, 2005.
[GSS14] I. J. Goodfellow, J. Shlens, and C. Szegedy, Explaining and harnessing adversarial examples, arXiv e-prints (2014).
[JF08] Grégoire Jacob and Éric Filiol, Malware as interaction machines, Journal in Computer Virology 4 (2008), no. 2.
  20. References II

[Joh13] Neil Johnson, Abrupt rise of new machine ecology beyond human response time, Scientific Reports 3 (2013).
[MCD05] Lisa Manning, Jean Carlson, and John Doyle, Highly Optimized Tolerance and power laws in dense and sparse resource regimes, Physical Review E 72 (2005), no. 1, 16108+.
[NYC14] Anh Nguyen, Jason Yosinski, and Jeff Clune, Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, arXiv preprint arXiv:1412.1897 (2014).
[San95] Scott Sandford, Apples and oranges: a comparison, Annals of Improbable Research 1 (1995), no. 3.
[SB11] Brendan Saltaformaggio and Daniel Bilar, Using a novel behavioral stimuli-response framework to defend against adversarial cyberspace participants, 3rd International Conference on Cyber Conflict (ICCC), IEEE, June 2011, pp. 170-186.
[SB14] Sergey Bratus and Felix Lindner, Information security war room, USENIX, 2014.
[SBH13] Sergey Bratus, Meredith Patterson, and Dan Hirsch, From “shotgun parsers” to more secure stacks, ShmooCon, 2013.
[SZS+13] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus, Intriguing properties of neural networks, arXiv preprint arXiv:1312.6199 (2013).
[YSS07] Yingbo Song, Michael E. Locasto, and Salvatore J. Stolfo, On the infeasibility of modeling polymorphic shellcode, ACM CCS, 2007, pp. 541-551.
  21. Systems, Attacks and Assumption Violation [Bil09]

Assumptions: Fundamentally, attacks work because they violate assumptions. Finite (i.e., real-life engineered or evolved) systems incorporate implicit and explicit assumptions into their structure, functionality, and language. The system is geared towards ‘expected’, ‘typical’ cases; its assumptions reflect those ‘designed-for’ cases.

Intuitive Examples of Attacks and Assumption Violations:
- Man-in-the-Middle attacks: identity assumption violated.
- BGP routing attacks: trust assumption violated.
- Decision system attacks: feature choice, data expectations, and algorithm assumptions violated.

Generative Mechanism and Assumptions: an optimization process incorporating tradeoffs between objective functions and resource constraints under uncertainty; some assumptions are generated by the optimization process itself.
  22. Optimization Process: Highly Optimized Tolerance

HOT Background: A generative first-principles approach proposed to account for power laws $P(m) \sim m^{-\alpha} e^{-m/k_c}$ in natural and engineered systems [CSN07, CD00]. Optimization trades off objective functions against resource constraints in a probabilistic environment. Applied to the Internet, power and immune systems, and computer security (me).

Pertinent Trait: robust towards common perturbations, but fragile towards rare events; ‘rare events’ ≈ the low-probability subspace in a learning system’s ‘framing’. A decision system’s ‘framing’: its categories of features, algorithms and data.

Probability, Loss, Resource Optimization Problem [MCD05] (see the sketch below):

$$\min J \quad \text{subject to} \quad \sum_{i=1}^{M} r_i \le R, \qquad \text{where} \quad J = \sum_{i=1}^{M} p_i\, l_i, \quad l_i = f(r_i) = -\log r_i, \quad 1 \le i \le M.$$

M events occur i.i.d. with probability $p_i$, each incurring loss $l_i$. The sum-product $J$ is the objective function to be minimized; resources $r_i$ are hedged against losses $l_i$, with normalizing $f(r_i) = -\log r_i$, subject to the resource bound $R$.
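A short numpy sketch (my illustration of [MCD05]'s PLR problem, not their code): the Lagrangian optimum allocates $r_i = p_i R / \sum_j p_j$, so common events get resources and small losses while rare events are starved and incur large losses, producing a heavy-tailed loss distribution.

```python
import numpy as np

rng = np.random.default_rng(5)

# PLR toy: M events with probabilities p_i, losses l_i = -log(r_i),
# resource budget sum(r_i) <= R.
M, R = 10_000, 1.0
p = rng.exponential(size=M)
p /= p.sum()

# Lagrangian optimum of min sum(p_i * -log r_i) s.t. sum(r_i) = R:
# -p_i / r_i + lam = 0  =>  r_i = p_i / lam, and the budget gives lam = 1/R.
r = p * R
loss = -np.log(r)

# Heavy tail: the bulk of events have moderate loss, while starved rare
# events push the upper quantiles far out [CD00].
for q in (0.5, 0.9, 0.99, 0.999):
    print(f"loss quantile {q}: {np.quantile(loss, q):.2f}")
```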
  23. Human Decision Subversion

Background: a Gedankenspiel of conceptual malware [BD06], a technically relatively simple Trojan.

Pertinent Modus Operandi: Faust’s pact with Mephistopheles. W sends a program to Z, promising powers: remotely browse X’s hard disk, read emails between X and Y. The program delivers, while surreptitiously keeping a log of Z’s activities and rummaging through Z’s files. The Reckoning: after incriminating evidence is gathered, the program uses threats and bribes to get Z to propagate it to the next person. The human decision system is used, subverted and exploited: curiosity, risk, greed, power, shame, fear, cowardice and cognitive dissonance.

Astounding Innovation, Symbiotic Human-Machine ‘Code’: the malware induces ‘production’ of propagation ‘code’ dynamically, invoking generative ‘factory routines’ that are evolutionary and social.
  24. Big Pinocchio as a Subset of LANGSEC Space

Figure: Every piece of software that takes inputs contains a de facto recognizer for accepting valid or expected inputs and rejecting invalid or malicious ones. This recognizer code is often ad hoc, spread throughout the program, and interspersed with processing logic (a “shotgun parser”). This exposes the processing logic to exploitation and lulls programmers into false assumptions of data safety [SB14]. Picture from [SBH13]. A minimal recognizer-versus-shotgun sketch follows below.
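A minimal sketch (my illustration of the LANGSEC recommendation, not code from [SB14] or [SBH13]) contrasting a shotgun parser with a recognizer that fully validates input before any processing logic runs.

```python
import re

# Shotgun style: validation interleaved with processing, easy to get wrong.
def shotgun_handle(msg: str) -> int:
    name, _, count = msg.partition(":")   # silently 'succeeds' on bad input
    return len(name) * int(count)         # may blow up deep inside the logic

# LANGSEC style: a full recognizer for the input language runs first;
# the processing logic only ever sees inputs proven to be in that language.
MESSAGE = re.compile(r"[a-z]{1,16}:[0-9]{1,4}")

def recognizing_handle(msg: str) -> int:
    if not MESSAGE.fullmatch(msg):
        raise ValueError("input rejected by recognizer")
    name, _, count = msg.partition(":")
    return len(name) * int(count)         # safe: shape already proven

print(recognizing_handle("abc:12"))       # 36
```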