Keynote talk at 1st Deep Learning and Security Workshop
May 24, 2018
co-located with the
39th IEEE Symposium on Security and Privacy
San Francisco, California
Abstract:
Over the past few years, there has been an explosion of research on
the security of machine learning, and on adversarial examples in
particular. Although this is in many ways a new and immature research
area, the general problem of adversarial examples has been a core
problem in information security for thousands of years. In this talk,
I'll look at some of the long-forgotten lessons from that quest and
attempt to understand what, if anything, has changed now that we are in the
era of deep learning classifiers. I will survey the prevailing
definitions for "adversarial examples", argue that those definitions
are unlikely to be the right ones, and raise questions about whether
they are leading us astray.
Bio:
David Evans (https://www.cs.virginia.edu/evans/) is a Professor of
Computer Science at the University of Virginia where he leads the
Security Research Group (https://www.jeffersonswheel.org). He is the
author of an open computer science textbook
(http://www.computingbook.org) and a children's book on combinatorics
and computability (http://www.dori-mic.org). He won the Outstanding
Faculty Award from the State Council of Higher Education for Virginia,
and was Program Co-Chair for the 24th ACM Conference on Computer and
Communications Security (CCS 2017) and the 30th (2009) and 31st (2010)
IEEE Symposia on Security and Privacy. He has SB, SM and PhD degrees
in Computer Science from MIT and has been a faculty member at the
University of Virginia since 1999.