Epilogue Extra

Talk Roadmap

Status Quo Analytics
Data-driven systems are woven silently into everyday life. Data lives in high-dimensional (100s, 1000s of dimensions) space.
Examples: Ads, Netflix, pre-trial discovery, teaching, pre-crime, trading, copy-writing, unmanned systems, corporate, IC, traffic, electricity grid.
Security issues largely ignored, barely studied.

Decision Systems Security Issues
Many Pitfalls: In high dimensions, data lives in 'corners' of feature space (hypercube), not the center (hypersphere). Distances → 0; dissimilarity measures become meaningless.
Opaqueness: What exactly algorithms learn and 'understand' from the data, as well as their decision 'junctures', remains quite mysterious.
Fragile: The algorithmic underbelly under the hood is susceptible to attacks.

Big Pinocchio
Definition: Leverage vulnerabilities in/of/through 'Big Data' systems.
Vulnerabilities: Input data → learning (features, algorithms) → dependent decisions
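The "distances → 0" pitfall can be made concrete with a minimal sketch (point counts, dimensions, and the unit-hypercube setup are illustrative assumptions, not from the talk): the relative contrast between the nearest and farthest neighbor of a query point collapses as dimension grows, so distance-based dissimilarity stops discriminating.

```python
# Sketch of the high-dimensional pitfall: for random points in the unit
# hypercube, the gap between nearest and farthest neighbor (relative to
# the nearest) shrinks toward 0 as dimension grows, so "dissimilarity"
# measures based on distance become meaningless.
import math
import random

def relative_contrast(dim, n_points=200, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    q = [rng.random() for _ in range(dim)]          # query point
    d = [math.dist(q, p) for p in pts]              # Euclidean distances
    d_min, d_max = min(d), max(d)
    return (d_max - d_min) / d_min   # how distinguishable "near" vs "far" is

for dim in (2, 10, 100, 1000):
    print(dim, relative_contrast(dim))
# The contrast shrinks as dim grows: in high dimensions every point is
# almost equally far from the query.
```

The same effect is why nearest-neighbor-style reasoning, and human geometric intuition, degrade badly in the regimes these decision systems operate in.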
Main Ideas of 'Big Pinocchio'
Figure: Deep neural networks (DNNs) learn hierarchical layers of representation from data for pattern recognition. State-of-the-art DNNs recognize natural images (left panel), but are easily fooled into declaring with near-certainty that nonsense images are familiar objects (right panel).
Take-away:
- Pitfalls of high dimensions: data in 'corners'; distances → 0; dissimilarity measures meaningless. Human intuition/experience fails. Adding very small effects → strange results.
- Inherent high-dimensional vulnerability due to the linear behavior of the employed models.
- Inherent trade-off: "Models easy to design are easy to perturb."
Anti-Virus as Decision Systems

Bad: Empirical AV Results

Report Date | AV Signature Update | MW Corpus Date     | False Negative (%)
2011/05     | Feb. 22nd           | Feb. 23rd–Mar. 3rd | 39–77
2011/02     | Feb. 22nd           | Feb. 10th          | 0.2–15.6
2010/11     | Aug. 16th           | Aug. 17th–24th     | 38–63
2010/08     | Aug. 16th           | Aug. 6th           | 0.2–19.1
2010/05     | Feb. 10th           | Feb. 11th–18th     | 37–89
2010/02     | Feb. 10th           | Feb. 3rd           | 0.4–19.2
2009/11     | Aug. 10th           | Aug. 11th–17th     | 26–68
2009/08     | Aug. 10th           | Aug. 10th          | 0.2–15.2
2009/05     | Feb. 9th            | Feb. 9th–16th      | 31–86
2009/02     | Feb. 9th            | Feb. 1st           | 0.2–15.1
2008/11     | Aug. 4th            | Aug. 4th–11th      | 29–81
2008/08     | Aug. 4th            | Aug. 1st           | 0.4–13.5
2008/05     | Feb. 4th            | Feb. 5th–12th      | 26–94
2008/02     | Feb. 4th            | Feb. 2nd           | 0.2–12.3

Table: Empirical miss rates for 9–16 well-known AV products. After freezing update signatures for one week, the best AV missed 30–40% of new malware; the worst missed 65–77%.

Worse: Theoretical Findings
- Detection of interactive malware is at least in the oracle complexity class NP^(NP^NP) [EF05, JF08]
- Blacklisting dead end: infeasibility of modeling polymorphic shellcode [YSS07]
Callgraph: Control Flow of Malware

Decision System of Programs: Callgraph (+ data)
- The call-graph is the relationship graph of function calls.
- Control flow in a program is represented by a call-graph 'path'.
- Many decision points during execution.

Goal: Compare the 'graph structure' of unknown binaries across non-malicious software and malware classes.
Main Result (2007) [Bil07]: Malware tends to have a lower basic block count, implying simpler functionality: limited goals and interaction → fewer branches.
Idea: Leverage malware's simpler decision structure to 'outplay' it.
R&D 2008–current: "Autonomous Baiting, Control and Deception of Adversarial Cyberspace Participants" [SB11]
Example: Hacking Traffic Decision Systems with Bad Data
Figure: Sensys systems: wireless sensors, repeaters, access points. No encryption; wireless comms in the clear. Firmware updates neither signed nor encrypted. Picture [Cer14]
Human vs DNN Interpretation [SZS+13]
Figure: Inducing imperceptibly small perturbations to a correctly classified input image so that it is no longer classified correctly. The result highlights the difference between how DNNs and humans recognize objects. Picture [SZS+13]
DNN 'Snowcrash' [NYC14]
Figure: Unrecognizable, directly encoded images that state-of-the-art DNNs believe with 99.6% certainty to be a familiar object. DNNs are used in applications such as self-driving cars. Picture [NYC14]
Peeking under the DNN Kimono [NYC14]
Figure: DNN 'archetypes'. Unrecognizable, indirectly encoded images that state-of-the-art DNNs believe with 99.6% certainty to be a familiar object. The result also highlights differences between how DNNs and humans recognize objects. Picture [NYC14]
Big Pinocchio via Linear Perturbation
Figure: Classification change from 'panda' to 'gibbon' by adding an imperceptibly small vector whose elements equal the sign of the elements of the gradient of the cost function with respect to the input. Picture [GSS14]
"Accidental steganography" m.o.: A linear model is forced to attend to the signal most closely aligned with its weights, even in the presence of other signals with higher amplitude.
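The mechanism from [GSS14] can be sketched on a toy linear classifier (the weights, inputs, and logistic model here are illustrative assumptions, not the panda/gibbon network): each coordinate of the input moves by only ε in the direction of the loss gradient's sign, yet in high dimensions these tiny, weight-aligned nudges accumulate into a large change of the dot product and flip the decision.

```python
# Sketch of the fast gradient sign method on a toy logistic "classifier".
# The high-dimensional linear mechanism: 1000 per-feature changes of size
# eps, each imperceptible, add up to a large shift in w.x.
import math
import random

random.seed(0)
d = 1000                                    # high-dimensional input
w = [random.gauss(0, 1) for _ in range(d)]  # model weights (illustrative)
x = [random.gauss(0, 1) for _ in range(d)]  # a "correctly classified" input

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(v):
    return sigmoid(sum(wi * vi for wi, vi in zip(w, v)))

# Ensure x starts on the positive side of the decision boundary.
if predict(x) < 0.5:
    w = [-wi for wi in w]

# For logistic loss with true label y = 1, the gradient of the loss w.r.t.
# the input is (p - 1) * w, so sign(grad) = -sign(w). FGSM perturbs
# x' = x + eps * sign(grad): each coordinate moves by only eps.
eps = 0.15
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

print(predict(x))      # on the "class 1" side
print(predict(x_adv))  # pushed across the boundary
```

This is exactly the "accidental steganography" point: no single feature changes perceptibly, but the perturbation is maximally aligned with the weight vector.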
Perturbation Effects in (Mis-)Classification Space
Figure: (1) Applying an imperceptible perturbation to a correctly classified natural image (blue dot) results in an image (square) that a DNN classifies as an entirely different class ("crossing the decision boundary"). (2) It is possible to generate high-confidence images (pentagon, I0) starting from a random or blank image; they do not look like images in the training set. (3) It is possible to generate high-confidence, regular images (triangles, G0) with discriminative features for a class, yet still far from the training set. Picture [NYC14]
Effects of Decision System Subversions
"Degradation and Subversion through Sub-system Attacks" [Bil10]
Power Grid: Load balancing in electricity grids relies on accurate state estimation. Data-integrity attacks on a chosen subset of sensors make these estimates unreliable, which could push such feedback systems into an unstable state. (Enron did this in 2000 to manipulate spot prices.)
Democracy: Voting systems assume honest participants vote their actual preferences, and voters expect the systems to reflect those preferences as much as possible. In elections with more than two candidates, the ranking decision system can be subverted by strategic voting. Given the preferences, it is possible to design a seemingly democratic voting procedure that ensures the desired candidate wins (voting theorist Donald Saari).
Financial Exchange: High-frequency trading algorithms subvert the analytic decision systems of other participants by faking data and destabilizing the pricing environment, profiting from artificially created volatility (Nanex, Bodek).
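The power-grid point can be sketched with a well-known construction against linear least-squares state estimators (the 3-sensor/1-state model and all numbers are illustrative assumptions, not from the talk): an attack vector that lies in the column space of the measurement matrix shifts the estimate arbitrarily while leaving the bad-data residual check unchanged.

```python
# Sketch of a data-integrity ("false data injection") attack on
# least-squares state estimation that evades the standard residual check.

def estimate(z, H):
    # Least-squares estimate for a single scalar state:
    # x_hat = (H^T H)^-1 H^T z
    num = sum(h * zi for h, zi in zip(H, z))
    den = sum(h * h for h in H)
    return num / den

def residual(z, H, x_hat):
    # Bad-data detector statistic: norm of the residual z - H * x_hat
    return sum((zi - h * x_hat) ** 2 for zi, h in zip(z, H)) ** 0.5

H = [1.0, 1.0, 1.0]        # three sensors, all measuring the state
z = [100.2, 99.7, 100.1]   # honest readings of a true state near 100

x_hat = estimate(z, H)
r = residual(z, H, x_hat)

# Attack vector a = H * c shifts the estimate by exactly c while leaving
# the residual, and hence the detector's alarm, unchanged.
c = 25.0
z_bad = [zi + h * c for zi, h in zip(z, H)]
x_bad = estimate(z_bad, H)
r_bad = residual(z_bad, H, x_bad)

print(x_hat, x_bad)   # estimate shifted by c
print(r, r_bad)       # identical residuals: the attack is invisible
```

The feedback controller downstream then acts on a state estimate that is off by c, which is how a "chosen subset of sensors" becomes a lever on the whole system.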
Interaction Effects of Individual Decisions
Collective Behavior of Interacting Agents
Beginnings: Bell Labs 'Core War' 1960, Conway's 'Game of Life' 1970s
Yesterday: Flash Crash 2010: billions of USD evaporated in a fraction of a second
Today: Thousands of mini flash crashes every week; HFT shenanigans and collusion schemes finally being investigated by the NY AG
"Rise of the Machines" [Joh13]: Phenomenological 'signatures' of automated black-box algorithmic trading. The all-machine time regime is characterized by frequent 'black swan' events with ultrafast durations.
Aggregate Decision Systems with Reflexivity: Collective behavior is unpredictable; no useful security guarantees about the dynamics are possible.
Figure: HFT "painting the tape", the illegal practice of creating fictitious activity in a stock: 70k+ meaningless bids/offers blasted in 47 seconds. Picture from Nanex
Big Data Quandary: Learning/Deciding in High-Dimensional Space
Figure: Two-player adversarial non-zero-sum game with reinforcement-learning strategies. α is memory (0 ≈ all steps remembered, 1 ≈ no memory). Γ is deviation from a zero-sum game (−1 ≈ zero-sum, 0 ≈ uncorrelated payoffs, 1 ≈ payoffs identical). β is intensity of choice (0 ≈ all moves equally likely, large ≈ some moves preferred). α ≈ 0 corresponds to replicator dynamics.
Evaluating the 'Best' Decision in High-Dimensional Space Is Different: The dimensional vastness of the solution space overwhelms "rational learning" algorithms, making them effectively no better than random meanderings.
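The learning dynamics behind the figure can be sketched with experience-weighted-attraction-style updates: each agent discounts old action "attractions" at memory-loss rate α and picks moves via a softmax with intensity of choice β. The 2x2 matching-pennies payoffs below are an illustrative assumption (Γ = −1, zero-sum), not the high-dimensional games from the figure.

```python
# Minimal sketch of two reinforcement learners in a repeated game, with
# memory parameter alpha and intensity-of-choice parameter beta.
import math
import random

random.seed(1)

def softmax(q, beta):
    # beta = 0: all moves equally likely; large beta: greedy preference
    m = max(q)
    w = [math.exp(beta * (qi - m)) for qi in q]
    s = sum(w)
    return [wi / s for wi in w]

def play(payoff_a, payoff_b, alpha, beta, steps=500):
    qa, qb = [0.0, 0.0], [0.0, 0.0]   # action attractions for each agent
    for _ in range(steps):
        pa, pb = softmax(qa, beta), softmax(qb, beta)
        i = 0 if random.random() < pa[0] else 1
        j = 0 if random.random() < pb[0] else 1
        # discount old attractions by (1 - alpha), reinforce chosen moves
        qa = [(1 - alpha) * q for q in qa]
        qb = [(1 - alpha) * q for q in qb]
        qa[i] += payoff_a[i][j]
        qb[j] += payoff_b[i][j]
    return softmax(qa, beta), softmax(qb, beta)

# Matching pennies: zero-sum (the Gamma = -1 corner of the figure)
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
pa, pb = play(A, B, alpha=0.1, beta=0.5)
print(pa, pb)   # each agent's mixed strategy over its two moves
```

In two actions the dynamics are tractable; the slide's point is that as the action space grows, this kind of "rational learning" wanders a solution space too vast to exploit.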
Machine Learning as Potemkin Village
Quote from [GSS14]: "[C]lassifiers based on modern machine learning techniques, even those that obtain excellent performance on the test set, are not learning the true underlying concepts that determine the correct output label. Instead, these algorithms have built a Potemkin village that works well on naturally occurring data, but is exposed as a fake when one visits points in space that do not have high probability in the data distribution."
Figure: The image on the left is recognized correctly as a car; the image in the middle is not recognized. The rightmost image is the magnified absolute value of the difference between the two. Picture from [SZS+13]
Epilogue
Take-away:
- Pitfalls of high dimensions: data in 'corners'; distances → 0; dissimilarity measures meaningless. Human intuition/experience fails. Adding very small effects → strange results.
- Inherent high-dimensional vulnerability due to the linear behavior of the employed models.
- Inherent trade-off: "Models easy to design are easy to perturb."
Short-term fixes and longer-term solutions:
1. 'What-If' & Simulations: Generate adversarial examples for systematic robustness evaluation. Not a fix, a trustworthiness score.
2. AV equivalent: Inoculation with generated examples. Barely a fix.
3. Decision System Audit: White-box, GAAP-like audit. Legal CYA.
4. Robust-by-design: The desired solution. Fundamental tension between easy-to-train linear models and nonlinear models more resistant to adversarial perturbation.
Thank You
How Scientists Relax: Infrared spectroscopy on a vexing problem of our times: truly comparing apples and oranges.
Thank you for your time and the consideration of these ideas. I appreciate being at Suits & Spooks at the Ritz-Carlton in Arlington.
Figure: A spectrographic analysis of ground, desiccated samples of a Granny Smith apple and a Sunkist navel orange. Picture from [San95]
References I
[BD06] Mike Bond and George Danezis, A pact with the devil, NSPW, ACM, 2006, pp. 77–82.
[Bil07] Daniel Bilar, On callgraphs and generative mechanisms, Journal in Computer Virology 3 (2007), no. 4.
[Bil09] Daniel Bilar, On nth-order attacks, The Virtual Battlefield: Perspectives on Cyber Warfare (Christian Czosseck and Kenneth Geers, eds.), IOS Press, 2009, pp. 262–281.
[Bil10] Daniel Bilar, Degradation and subversion through subsystem attacks, IEEE Security & Privacy 8 (2010), no. 4, 70–73.
[CD00] Jean Carlson and John Doyle, Highly optimized tolerance: Robustness and design in complex systems, Physical Review Letters 84 (2000), no. 11, 2529+.
[Cer14] Cesar Cerrudo, Hacking US traffic control systems, DefCon 22, 2014.
[CSN07] Aaron Clauset, Cosma R. Shalizi, and Mark Newman, Power-law distributions in empirical data, SIAM Review (2007).
[EF05] Éric Filiol, Computer viruses: from theory to applications, Springer, 2005.
[GSS14] I. J. Goodfellow, J. Shlens, and C. Szegedy, Explaining and harnessing adversarial examples, arXiv e-prints (2014).
[JF08] Gregoire Jacob and Eric Filiol, Malware as interaction machines, Journal in Computer Virology 4 (2008), no. 2.
References II
[Joh13] Neil Johnson, Abrupt rise of new machine ecology beyond human response time, Nature Scientific Reports 3 (2013).
[MCD05] Lisa Manning, Jean Carlson, and John Doyle, Highly optimized tolerance and power laws in dense and sparse resource regimes, Physical Review E 72 (2005), no. 1, 16108+.
[NYC14] Anh Nguyen, Jason Yosinski, and Jeff Clune, Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, arXiv preprint arXiv:1412.1897 (2014).
[San95] Scott Sandford, Apples and oranges: a comparison, Annals of Improbable Research 1 (1995), no. 3.
[SB11] Brendan Saltaformaggio and Daniel Bilar, Using a novel behavioral stimuli-response framework to defend against adversarial cyberspace participants, 3rd International Conference on Cyber Conflict (ICCC), IEEE, June 2011, pp. 170–186.
[SB14] Sergey Bratus and Felix Lindner, Information security war room, USENIX, 2014.
[SBH13] Sergey Bratus, Meredith Patterson, and Dan Hirsch, From "shotgun parsers" to more secure stacks, ShmooCon, 2013.
[SZS+13] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus, Intriguing properties of neural networks, arXiv preprint arXiv:1312.6199 (2013).
[YSS07] Yingbo Song, Michael E. Locasto, and Salvatore J. Stolfo, On the infeasibility of modeling polymorphic shellcode, ACM CCS, 2007, pp. 541–551.
Systems, Attacks and Assumption Violation [Bil09]
Assumptions: Fundamentally, attacks work because they violate assumptions. Finite (i.e., real-life engineered or evolved) systems incorporate implicit/explicit assumptions into their structure, functionality, and language. Systems are geared towards 'expected', 'typical' cases; assumptions reflect those 'designed-for' cases.
Intuitive Examples of Attacks and Assumption Violations:
- Man-in-the-middle attacks: identity assumption violated
- BGP routing attacks: trust assumption violated
- Decision systems attack: feature choice, data expectations, and algorithm assumptions violated
Generative Mechanism and Assumptions: An optimization process incorporating trade-offs between objective functions and resource constraints under uncertainty. Some assumptions are generated by the optimization process itself.
Optimization Process: Highly Optimized Tolerance
HOT Background: A generative first-principles approach proposed to account for power laws P(m) ∼ m^(−α) e^(−m/k_c) in natural/engineered systems [CSN07, CD00]. Optimization trades off objective functions and resource constraints in a probabilistic environment. Applied to the Internet, power and immune systems, and computer security (me).
Pertinent Trait: Robust towards common perturbations, but fragile towards rare events; 'rare events' ≈ the low-probability subspace in a learning system's 'framing'.
Decision Systems 'Framing': categories of features, algorithms and data.
Probability, Loss, Resource Optimization Problem [MCD05]:
  min J                          (1)
  subject to  Σ_i r_i ≤ R        (2)
  where  J = Σ_i p_i l_i         (3)
         l_i = f(r_i)            (4)
         1 ≤ i ≤ M               (5)
M events (Eq. 5) occur i.i.d. with probability p_i, incurring loss l_i (Eq. 3). The sum-product J is the objective function to be minimized (Eq. 1). Resources r_i are hedged against losses l_i, with normalizing f(r_i) = −log r_i (Eq. 4), subject to the resource bound R (Eq. 2).
Human Decision Subversion
Background: Gedankenspiel: conceptual malware [BD06]; technically a relatively simple Trojan.
Pertinent Modus Operandi: Faust's pact with Mephistopheles. W sends a program to Z, promising powers: remotely browse X's hard disk, read emails between X and Y. The program delivers, and surreptitiously keeps a log of Z's activities and rummages through Z's files.
Reckoning: After incriminating evidence is gathered, the program uses threats and bribes to get Z to propagate it to the next person. The human decision system is used, subverted, and exploited: curiosity, risk, greed, power, shame, fear, cowardice and cognitive dissonance.
Astounding Innovation: Symbiotic human-machine 'code'. The malware induces 'production' of propagation 'code' dynamically, invoking generative 'factory routines' that are evolutionary and social.
Big Pinocchio as a Subset of LANGSEC Space
Figure: Every piece of software that takes inputs contains a de facto recognizer for accepting valid or expected inputs and rejecting invalid or malicious ones. This recognizer code is often ad hoc, spread throughout the program, and interspersed with processing logic (a "shotgun parser"). This exposes the processing logic to exploitation and lulls programmers into false assumptions of data safety [SB14]. Picture from [SBH13]
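The LANGSEC remedy can be sketched in a few lines (the key=value message format and function names are illustrative assumptions, not from the talk): make the recognizer explicit and run it on the whole input before any processing logic executes, instead of scattering ad hoc field checks through the code.

```python
# Sketch of an explicit recognizer vs. a "shotgun parser": the full input
# is accepted or rejected against a grammar up front, so processing logic
# never sees malformed data.
import re

# Explicit grammar for the full input: one or more key=value pairs,
# semicolon-separated, keys alphabetic, values decimal digits.
GRAMMAR = re.compile(r'^[a-z]+=[0-9]+(;[a-z]+=[0-9]+)*$')

def recognize(msg):
    # Accept or reject the complete input before any processing.
    return GRAMMAR.fullmatch(msg) is not None

def process(msg):
    # Processing logic only ever runs on recognized inputs, so it may
    # safely assume well-formedness instead of re-checking piecemeal.
    if not recognize(msg):
        raise ValueError("rejected by recognizer")
    return {k: int(v) for k, v in (pair.split("=") for pair in msg.split(";"))}

print(process("speed=42;lane=3"))          # well-formed: parsed
try:
    process("speed=42;lane=3;x=1 OR 1=1")  # malformed tail: whole input rejected
except ValueError as e:
    print(e)
```

The design choice is the point: the recognizer is one auditable place where the input-language assumption lives, rather than an implicit assumption smeared across the processing code.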