
Magic Tricks for Self-driving Cars

Weilin Xu
August 11, 2018

A highlighted talk at the DEF CON 2018 CAAD Village. I did this work at Baidu X-Lab in the summer of 2018 as an intern researcher.

Transcript

  1. Magic Tricks for
    Self-driving Cars
    Weilin Xu, Zhenyu Zhong, Yunhan Jia

  2. AUTHORS’ WARNING
    • This is a proof-of-concept.
    • We are NOT targeting any autonomous vehicle vendor.
    • Don’t try to fool your neighbor’s car.

  3. Autonomous Vehicle Framework
    Sensors: Radar, LiDAR, Camera (this talk targets the camera)
    Image: https://medium.com/toyota-ai-ventures/https-medium-com-toyota-ai-ventures-announcingblackmore-7947eacc9e9e

  4. Camera-based Obstacle Detection

  5. [Image-only slide]

  6. Our Target: YOLOv3
    Object detection model: 147 layers, 62M parameters
    Input: 416x416x3 image
    Output: 3549 bounding boxes
    Image: http://media.nj.com/traffic_impact/photo/all-way-stop-sign-that-flashes-in-montclairjpg-30576ab330660eff.jpg
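
    A quick sanity check on the 3549 figure, assuming the standard YOLOv3 strides of 32, 16, and 8 (the slide does not state them explicitly):

    ```python
    # YOLOv3 at a 416x416 input detects at three scales (strides 32, 16, 8),
    # giving 13x13, 26x26, and 52x52 prediction grids.
    grids = [416 // s for s in (32, 16, 8)]   # [13, 26, 52]
    print(sum(g * g for g in grids))          # 169 + 676 + 2704 = 3549
    # Each grid location also predicts 3 anchor boxes (3 * 3549 = 10647 raw boxes).
    ```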

  7. Trained with the COCO Dataset
    • Common Objects in Context
    • 80 classes: person, [car, truck, bus], [bicycle, motorcycle], [stop sign, traffic light], etc.
    Source: http://cocodataset.org/

  8. YOLOv3 Inference
    13 x 13 grid; anchor boxes: 116x90, 156x198, 373x326
    Example grid-cell offset: (c_x, c_y) = (11, 2)

    Center point: b_x = σ(t_x) + c_x,  b_y = σ(t_y) + c_y
    Object size:  b_w = p_w · e^{t_w},  b_h = p_h · e^{t_h}

    Prediction vector: bounding box (t_x, t_y, t_w, t_h), objectness t_obj,
    80 class confidences (c_1, …, c_80)

    [Figure: the stop sign is detected at 99%; low-confidence boxes (car, 0.01%) are discarded]
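
    A minimal NumPy sketch of the decoding equations above; the function and argument names are ours, and the stride factor (to convert grid units back to pixels) is implied rather than shown on the slide:

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def decode_box(t, cell, anchor, stride=32):
        """Decode one raw YOLOv3 prediction (t_x, t_y, t_w, t_h) into a box.

        cell   -- grid-cell offsets (c_x, c_y), e.g. (11, 2) on the 13x13 grid
        anchor -- anchor size (p_w, p_h) in pixels, e.g. (116, 90)
        stride -- pixels per grid cell (416 / 13 = 32 at the coarsest scale)
        """
        t_x, t_y, t_w, t_h = t
        c_x, c_y = cell
        p_w, p_h = anchor
        # Center point: the sigmoid keeps the offset inside its grid cell.
        b_x = (sigmoid(t_x) + c_x) * stride
        b_y = (sigmoid(t_y) + c_y) * stride
        # Object size: the exponential rescales the anchor dimensions.
        b_w = p_w * np.exp(t_w)
        b_h = p_h * np.exp(t_h)
        return b_x, b_y, b_w, b_h
    ```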

  9. Threat Model: Image Patch Attack
    [Figure: an adversarial patch styled as a “Company Logo”, placed in the scene]

  10. Threat Model: Image Patch Attack

  11. Threat Model: Image Patch Attack

  12. Threat Model: Image Patch Attack

  13. Attack Algorithms
    • Input Construction
    • Objectives
    • Optimization

  14. Differentiable Input Construction
    Patch (“Company Logo”) → Resize → Perspective Transform → composited into the scene
    Every step is differentiable, so gradients flow back to the patch pixels.
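
    A sketch of one way to implement this pipeline differentiably in PyTorch; the slide does not name a toolchain, so the use of kornia's warp_perspective and all function names here are our assumptions:

    ```python
    import torch
    import torch.nn.functional as F
    from kornia.geometry.transform import warp_perspective  # assumed library

    def place_patch(scene, patch, H, size):
        """Resize the patch and warp it into the scene, keeping gradients.

        scene -- (1, 3, H, W) background image in [0, 1]
        patch -- (1, 3, h, w) adversarial patch (the "Company Logo")
        H     -- (1, 3, 3) homography mapping patch pixels into the scene
        size  -- (height, width) to resize the patch to
        """
        # Resize: bilinear interpolation is differentiable w.r.t. the patch.
        resized = F.interpolate(patch, size=size, mode="bilinear",
                                align_corners=False)
        out_hw = (scene.shape[-2], scene.shape[-1])
        # Perspective transform: grid sampling also keeps gradients flowing.
        warped = warp_perspective(resized, H, dsize=out_hw)
        # Warp an all-ones mask the same way to know where the patch landed.
        mask = warp_perspective(torch.ones_like(resized), H, dsize=out_hw)
        return scene * (1 - mask) + warped * mask
    ```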

  15. Objectives
    • Object Production
    • Object Vanish
    • Object Transformation

  16. Object Production - Coarse
    We want more objects of a certain class, anywhere in the image.
    • Easy to implement.
    • May be difficult to optimize.

  17. Object Production - Precise
    We want a certain object of a specific size in a specific location.
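
    A hypothetical sketch of what a precise object-production loss could look like; the output layout ([t_x, t_y, t_w, t_h, objectness, 80 class scores]) follows the inference slide, but the exact objective used in the talk is not shown:

    ```python
    def production_loss(pred, cell, anchor_idx, class_idx):
        """Make one anchor at one grid cell report the target class.

        pred -- (num_anchors, grid_h, grid_w, 85) raw logits for one scale,
                laid out as [t_x, t_y, t_w, t_h, objectness, 80 class scores]
        cell -- target grid cell (c_x, c_y)
        """
        v = pred[anchor_idx, cell[1], cell[0]]
        # Raise objectness and the target class logit; minimizing the negative
        # sum pushes both scores up. (Box-size terms could be added similarly.)
        return -(v[4] + v[5 + class_idx])
    ```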

  18. Object Vanish - Coarse
We want a certain object class to vanish from the entire image.

  19. Object Transformation - Coarse
We want a certain object class to transform into another class.
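
    Sketches of the two coarse objectives above, under the same assumed output layout; these are illustrative objectives, not necessarily the exact losses used in the talk:

    ```python
    def vanish_loss(pred, class_idx):
        """Coarse object vanish: suppress the strongest detection of a class.
        pred -- (N, 85) raw logits over all grid locations and anchors."""
        score = pred[:, 4] + pred[:, 5 + class_idx]   # objectness + class logit
        return score.max()                            # minimize the worst case

    def transform_loss(pred, src_class, dst_class):
        """Coarse object transformation: relabel src_class as dst_class."""
        # Lower the source class logit and raise the destination class logit
        # everywhere; minimizing this drives detections from src to dst.
        return (pred[:, 5 + src_class] - pred[:, 5 + dst_class]).sum()
    ```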

  20. Optimization
    • Change of variable
    Convert to tanh() space to encode the [0, 1] interval constraint.
    Friendly to many off-the-shelf optimizers, e.g. Adam.
    • Optimize logits
    Skip sigmoid() to avoid vanishing gradients.
    Carlini, Nicholas, and David Wagner. “Towards Evaluating the Robustness of Neural Networks.” IEEE S&P (Oakland) 2017.
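
    A minimal sketch of the change of variable driving an Adam loop; `model`, `attack_loss`, `place_patch`, and `scene` are hypothetical placeholders for the pieces sketched on earlier slides:

    ```python
    import torch

    # Optimize an unconstrained variable w; tanh maps it into [0, 1], so the
    # patch never leaves the valid pixel range (Carlini & Wagner's trick).
    w = torch.randn(1, 3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([w], lr=0.01)   # off-the-shelf optimizer

    for _ in range(1000):
        patch = (torch.tanh(w) + 1) / 2            # always inside [0, 1]
        pred = model(place_patch(scene, patch))    # hypothetical helpers
        loss = attack_loss(pred)                   # e.g. one of the objectives
        opt.zero_grad()
        loss.backward()
        opt.step()
    ```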

  21. But, Image Sensing is not an Identity Function
    Scene → Digital Image → Model Input [416x416x3]
    • Limited Resolution
    • Distortions
    • Random Noise
    • …

  22. Towards Robust Physical Adversarial Examples
    • [Limited Resolution] Smoother patch via total variation regularization.
    • [Distortions] Color management with the non-printability loss.
    • [Inaccurate Patch] Random transformations during optimization iterations.
    • …
    Sharif, Mahmood, et al. “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition.” ACM CCS 2016.
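
    Sketches of the first and third ingredients, plus the non-printability score of Sharif et al. in a simplified min-distance form; all names here are ours:

    ```python
    import math
    import torch
    import torch.nn.functional as F

    def total_variation(patch):
        """Penalize neighboring-pixel differences so the patch stays smooth
        enough to survive the camera's limited resolution. patch: (3, H, W)."""
        dh = (patch[:, 1:, :] - patch[:, :-1, :]).abs().mean()
        dw = (patch[:, :, 1:] - patch[:, :, :-1]).abs().mean()
        return dh + dw

    def non_printability(patch, printable):
        """Distance from each pixel to its nearest printable color.
        printable: (K, 3) RGB triplets the printer can actually reproduce."""
        px = patch.permute(1, 2, 0).reshape(-1, 1, 3)        # (H*W, 1, 3)
        d = ((px - printable.unsqueeze(0)) ** 2).sum(-1)     # (H*W, K)
        return d.min(dim=1).values.mean()

    def random_transform(patch):
        """Random rotation/scale each iteration, so the optimized patch
        tolerates inaccurate physical placement."""
        angle = math.radians(float(torch.empty(1).uniform_(-10, 10)))
        scale = float(torch.empty(1).uniform_(0.8, 1.2))
        theta = torch.tensor([[[math.cos(angle) / scale, -math.sin(angle) / scale, 0.0],
                               [math.sin(angle) / scale,  math.cos(angle) / scale, 0.0]]])
        grid = F.affine_grid(theta, patch.unsqueeze(0).shape, align_corners=False)
        return F.grid_sample(patch.unsqueeze(0), grid, align_corners=False)[0]
    ```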

  23. Conclusion
    • Magicians can fool object detection models, and so can attackers.
    • We should be cautious with self-driving cars that rely on computer vision.
