Presentation of a paper by Tian et al. (2017) at the Mila Robotics RG: https://arxiv.org/abs/1708.08559
When: Monday, 10/08/18 at 2.00pm
Where: PAA 3195
Synthesizing images through deformation or generation has a long history in computer vision and adversarial machine learning. More broadly, this technique is also known as data augmentation [1] [2], feature augmentation [3], or domain randomization [4] (in the non-adversarial setting). Often used to augment a dataset to increase data diversity, to probe a model for hidden biases, or to reduce sensitivity to noise, this family of techniques seeks to generate realistic but synthetic inputs based on our understanding of the data-generating process and, in some cases, the model architecture. Such inputs can be used to discover hidden failure modes and to improve generalization through retraining.
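As a minimal sketch of what such a synthetic deformation might look like (the function name, parameters, and choice of transformations here are illustrative, not taken from any of the cited papers), consider translating and brightening a grayscale 8-bit image:

```python
import numpy as np

def augment(image, shift=2, brightness=30):
    """Apply two simple, realistic deformations to a 2-D uint8 image:
    a horizontal translation and a brightness change.
    Parameters are illustrative placeholders."""
    # translate right by `shift` pixels, padding the left edge with zeros
    shifted = np.zeros_like(image)
    shifted[:, shift:] = image[:, :-shift]
    # brighten, clipping to the valid 8-bit range
    return np.clip(shifted.astype(int) + brightness, 0, 255).astype(np.uint8)

# toy example: a uniform 4x4 image of intensity 100
img = np.full((4, 4), 100, dtype=np.uint8)
out = augment(img)
print(out[0])  # → [ 30  30 130 130]
```

Real augmentation pipelines compose many such transformations (rotation, blur, fog, contrast, etc.) and sample their parameters at random.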
In this work, Tian et al. present a grey-box testing technique for evaluating deep neural networks: realistic image deformations are applied to maximize “neuron coverage” and promote output diversity. The technique is used to discover images that induce unsafe control outputs in a self-driving car, i.e. dangerous steering angles. In this talk, we will introduce the concept of “neuron coverage” and its usefulness as a proxy for capturing output diversity. We will explore the notion of metamorphic testing and related white-box techniques for evaluating neural networks. And we will learn how fake training data can help avoid rapid unplanned deceleration of an autonomous vehicle.
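To give a rough sense of the neuron-coverage metric ahead of the talk, here is a minimal sketch: coverage is the fraction of neurons whose activation exceeds some threshold on at least one test input. The function name, data layout, and threshold below are assumptions for illustration, not the paper’s exact implementation.

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons activated above `threshold` on at least
    one input in the batch.

    activations: list of per-layer arrays, each of shape
    (batch_size, num_neurons). Layout and threshold are illustrative.
    """
    covered = 0
    total = 0
    for layer in activations:
        # per neuron: did it fire on any input in the batch?
        fired = (layer > threshold).any(axis=0)
        covered += int(fired.sum())
        total += fired.size
    return covered / total

# toy example: two layers (2 and 3 neurons), batch of 3 inputs
acts = [np.array([[0.5, -1.0], [0.2, -0.3], [-0.1, -0.2]]),
        np.array([[0.0, 0.0, 1.2], [0.0, 0.0, 0.7], [0.0, 0.0, 0.1]])]
print(neuron_coverage(acts))  # 2 of 5 neurons fired → 0.4
```

A test generator that maximizes this quantity tries to activate neurons that ordinary test inputs leave dormant, on the intuition that unexercised neurons may hide untested behavior.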
[1] Understanding Data Augmentation for Classification: When to Warp? https://arxiv.org/abs/1609.08764
[2] The Effectiveness of Data Augmentation in Image Classification using Deep Learning https://arxiv.org/abs/1712.04621
[3] Dataset augmentation in feature space https://arxiv.org/abs/1702.05538
[4] Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World https://arxiv.org/abs/1703.06907