Is "adversarial examples" an Adversarial Example?

Keynote talk at 1st Deep Learning and Security Workshop
May 24, 2018
co-located with the
39th IEEE Symposium on Security and Privacy
San Francisco, California

Over the past few years, there has been an explosion of research in
security of machine learning and on adversarial examples in
particular. Although this is in many ways a new and immature research
area, the general problem of adversarial examples has been a core
problem in information security for thousands of years. In this talk,
I'll look at some of the long-forgotten lessons from that quest and
attempt to understand what, if anything, has changed now that we are
in the era of deep learning classifiers. I will survey the prevailing
definitions for "adversarial examples", argue that those definitions
are unlikely to be the right ones, and raise questions about whether
those definitions are leading us astray.

David Evans is a Professor of Computer Science at the University of
Virginia, where he leads the Security Research Group. He is the author
of an open computer science textbook and a children's book on
combinatorics and computability. He won the Outstanding Faculty Award
from the State Council of Higher Education for Virginia, and was
Program Co-Chair for the 24th ACM Conference on Computer and
Communications Security (CCS 2017) and the 30th (2009) and 31st (2010)
IEEE Symposia on Security and Privacy. He has SB, SM, and PhD degrees
in Computer Science from MIT and has been a faculty member at the
University of Virginia since 1999.
