CodeFest 2019. Boris Lestsov (Mail.Ru Group) — Detecting People in a Crowd

April 06, 2019


Detecting people in an image or a video stream is a hard computer vision problem. Its main difficulties are the variety of possible detection scenarios, the large intra-class variability of people themselves (clothing, pose), and frequent occlusions between people (crowds are an especially hard case). Many approaches have been devised over the years, but at the moment convolutional neural networks deliver the best quality. The talk covers building an in-house production-ready people detection system that runs on convolutional neural networks in real time, and looks at the specific techniques (architectures, loss functions, training tricks) that substantially improve detection quality.


Transcript

  1. 2.

    Computer Vision Team We solve computer vision problems at Mail.Ru

    Projects: 1) Vision (b2b) 2) Cloud 3) Mail 4) ...
  2. 4.

    Business case 1) Queue optimisation a) Open the elevator (ski
    resort) b) Call the cashier 2) Wait time estimation
  3. 5.
  4. 6.
  5. 13.

    Metrics: AP - Average Precision (single class)
    1. Compute predictions.
    2. Plot the Precision-Recall curve; make it non-increasing.
    3. Compute the Area Under the Curve.
    Multiclass: mean AP (mAP). Different IoU thresholds are used.
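The three steps above can be sketched in plain Python/NumPy (a minimal illustration, not the exact VOC/COCO evaluation code):

```python
import numpy as np

def average_precision(scores, is_true_positive, num_gt):
    """Single-class AP: sort detections by confidence, build the
    precision-recall curve, make precision non-increasing, then
    integrate the area under the curve over the recall steps."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / np.arange(1, len(tp) + 1)
    recall = cum_tp / num_gt
    # Make the precision curve non-increasing (upper envelope).
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    # Area under the PR curve, summed over recall increments.
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap
```

For example, `average_precision([0.9, 0.8], [1, 0], num_gt=2)` gives 0.5: the first recall step is covered at precision 1.0, and the second ground-truth box is never found.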
  6. 14.

    mAP - mean Average Precision (VOC)
    Compute the mean of AP over all classes.
    Problem: at a single IoU threshold, a barely-overlapping detection and
    a tightly-fitting one give the same contribution to mAP.
  7. 15.

    Metrics: mAP@[.5:.95] (COCO)
    1) For each IoU threshold in [.5:.95] = [0.5, 0.55, 0.6, …, 0.9, 0.95], compute mAP.
    2) Average these values to get mAP@[.5:.95].
    Also: log-average miss rate (mMR) is sometimes used.
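Both the VOC and COCO metrics hinge on IoU between a predicted and a ground-truth box. A minimal sketch of the overlap computation and the COCO threshold grid, assuming boxes in (x1, y1, x2, y2) format:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# COCO-style grid: compute mAP at each threshold, then take the mean.
thresholds = [0.5 + 0.05 * i for i in range(10)]  # 0.5, 0.55, ..., 0.95
```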
  8. 22.

    Approaches
    1) Classical CV (HOG, Deformable Part Models, Viola-Jones)
    2) Motion-based detection (background subtraction)
    3) CNN:
       a) Two-stage - Faster R-CNN
       b) Single-stage - SSD, YOLO, RetinaNet
       c) Cascaded - MTCNN
  9. 30.

    Faster R-CNN
    + Accurate
    + Bigger resolution => better result
    - Slow
    - More objects => more proposals => slower detection
  10. 32.
  11. 33.
  12. 34.
  13. 35.
  14. 36.
  15. 47.
  16. 49.

    Focal Loss
    Problem: class imbalance, 99 : 1
    Cross Entropy (CE): CE(p_t) = -log(p_t)
    Focal Loss (FL): FL(p_t) = -(1 - p_t)^γ log(p_t)
    p_t - predicted probability of the g.t. class
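With these definitions the two losses can be compared directly. A minimal sketch, using gamma = 2, the value commonly used in the RetinaNet paper:

```python
import math

def cross_entropy(p_t):
    """CE(p_t) = -log(p_t), where p_t is the predicted probability
    of the ground-truth class."""
    return -math.log(p_t)

def focal_loss(p_t, gamma=2.0):
    """FL(p_t) = -(1 - p_t)^gamma * log(p_t). The (1 - p_t)^gamma
    factor down-weights easy, well-classified examples so the
    abundant background class does not dominate training."""
    return -((1.0 - p_t) ** gamma) * math.log(p_t)
```

At p_t = 0.9 the modulating factor (1 - p_t)^2 = 0.01 shrinks the loss 100x, while at p_t = 0.1 the factor is 0.81, so hard examples keep almost their full cross-entropy weight.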
  17. 50.
  18. 53.
  19. 54.

    Small pedestrians
    Bigger resolution => better result, but slower:
    800x600: 30 fps, ~73.5% AP
    1200x800: 15 fps, ~78.0% AP
  20. 64.

    Appendix: Repulsion Loss
    Three components:
    1) Attraction to the matched g.t. box.
    2) Repulsion from other g.t. boxes.
    3) Repulsion from other predicted boxes.
    Technically, IoU is maximized/minimized.
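A conceptual sketch of those three terms (an illustration only: the actual Repulsion Loss paper uses smooth penalties on intersection-over-ground-truth rather than raw IoU, and the weights alpha and beta here are assumptions):

```python
def _iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def repulsion_loss(pred, matched_gt, other_gts, other_preds,
                   alpha=0.5, beta=0.5):
    """Sum of the three components from the slide."""
    attract = 1.0 - _iou(pred, matched_gt)             # 1) pull toward matched g.t.
    rep_gt = sum(_iou(pred, g) for g in other_gts)     # 2) push away from other g.t.
    rep_box = sum(_iou(pred, p) for p in other_preds)  # 3) push predictions apart
    return attract + alpha * rep_gt + beta * rep_box
```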
  21. 65.

    RetinaMask
    1) RetinaNet adapted to instance segmentation.
    2) Mask prediction improves detection quality (~2.3% mAP on COCO).
    3) Masks are predicted in Mask R-CNN manner.
  22. 66.

    RetinaMask 4) Mask prediction can be discarded during inference to

    speed up the detector. 5) Code and models available!
  23. 67.
  24. 69.

    Tracking use cases
    1) Tracking itself
    2) Fewer false positives on a video stream
    3) Dealing with “blinking” detections
  25. 70.

    SORT (Simple Online and Realtime Tracking)
    • Association by IoU
    • Kalman Filters
    • Fast
    We fine-tuned SORT.
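The association step can be sketched with a greedy IoU matcher (a simplified illustration; the SORT paper solves the assignment with the Hungarian algorithm, and the threshold value here is an assumption):

```python
def associate_by_iou(tracks, detections, iou_threshold=0.3):
    """Greedily match track boxes to detection boxes by IoU.
    Boxes are (x1, y1, x2, y2); returns (track_idx, det_idx) pairs."""
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    # All candidate pairs, best overlap first.
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_threshold:
            break  # remaining pairs overlap too little to match
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches
```

Unmatched detections would start new tracks, and tracks left unmatched for several frames would be dropped.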
  26. 71.

    Intuition about Kalman Filter in SORT
    A box is represented with a vector:
    • u, v - coordinates of the center
    • s - box scale
    • r - box aspect ratio
    • dotted u, v, s - the corresponding derivatives
  27. 72.

    Intuition about Kalman Filter in SORT
    Notes:
    1. Linear prediction with correction from detector output.
    2. Speed and aspect ratio are constant.
    3. Can model many dynamic systems (fluid amount in a tank, the temperature of a car engine).
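The constant-velocity intuition can be written down directly. A minimal sketch of the predict step only; a real SORT tracker also maintains covariances and applies the full Kalman correction from each detection:

```python
import numpy as np

# SORT state vector: x = [u, v, s, r, du, dv, ds]
# (box center, scale, aspect ratio, plus velocities of u, v, s;
#  r is modeled as constant, so it has no velocity component).
F = np.eye(7)                      # state transition matrix
F[0, 4] = F[1, 5] = F[2, 6] = 1.0  # u += du, v += dv, s += ds

# Measurement model: the detector observes only (u, v, s, r).
H = np.eye(4, 7)

def predict(x):
    """Linear prediction of the next box state; a detection would
    then correct this prediction via the Kalman update step."""
    return F @ x
```

For x = [10, 20, 100, 0.5, 2, -1, 0] the predicted center moves to (12, 19) while scale and aspect ratio stay fixed.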
  28. 76.

    Conclusion
    1) Two-stage detectors are more accurate, but slower.
    2) Bigger resolution => better accuracy, but slower.
    3) ResNet, FPN, Focal Loss => better results.
  29. 77.
  30. 78.
  31. 79.
  32. 80.

    Resolution
    1) SGD training instead of Adam.
    2) Replacing SSD with RetinaNet arch.
    3) Focal Loss.
    4) Bigger resolution (current models: 800x600 and 1200x800).
    5) scale_by_aspect instead of simple resize.
    6) Anchor box tuning.
    7) Crop augmentations.
    8) Joint training with head detection.
    9) Removing strides from convolutions in last stages of RetinaNet.
    10) Synchronized BatchNorm (big resolution => small batch size).
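The slide only names scale_by_aspect. A hypothetical sketch of what aspect-preserving resizing typically computes; the function name, signature, and padding behavior here are assumptions, not the talk's actual code:

```python
def scale_by_aspect(width, height, target_w, target_h):
    """Fit an image into the target resolution while preserving its
    aspect ratio, instead of a simple resize that would distort
    people's proportions. Returns the scaled size and the padding
    needed to reach the exact target size."""
    scale = min(target_w / width, target_h / height)
    new_w, new_h = round(width * scale), round(height * scale)
    pad_w, pad_h = target_w - new_w, target_h - new_h
    return new_w, new_h, pad_w, pad_h
```

For a 1600x900 frame and an 800x600 model input this gives an 800x450 image with 150 pixels of vertical padding, rather than squashing the frame to 800x600.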
  33. 81.

    Things that did NOT work out
    1) MTCNN for detecting small people.
    2) Prediction of the full bounding box instead of the visible one.