Towards Realistic Predictors - EN

Leszek Rybicki
November 29, 2018

Later version of the "Towards Realistic Predictors" paper review, in English.

  1. Towards Realistic Predictors Pei Wang and Nuno Vasconcelos Statistical and

    Visual Computing Lab, UC San Diego mlKitchen X 2018.11.29 @_lunardog_
  2. Self-Introduction • Call me Leszek • Born in Poland •

    Living in Japan since 2010 • Cookpad R&D since 2016 • I consume too much science fiction • I’m bad at selfies
  7. Towards Realistic Predictors Pei Wang and Nuno Vasconcelos Statistical and

    Visual Computing Lab, UC San Diego
  8. Define “Realistic” optimistic pessimistic realistic




  12. A more benign example...

  13. Classifier Food / non-food classifier food plant person pet …

  20. Let’s start with what we can easily classify

  24. What is “hard” anyway?

  25. HP-Net = Hardness Predictor

  26. Classifier HP-Net Adversarial training with a hardness predictor

  27. Hardness Predictor Loss: binary cross-entropy between the HP-Net's score s and the target hardness. This minimizes a Kullback-Leibler divergence between the predicted and target distributions, with target hardness = 1 − p_c (one minus the classifier's probability for the correct class).
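The loss on this slide can be sketched in a few lines of NumPy. This is a minimal illustration of the idea as described here, not the paper's implementation; the function names and the clipping epsilon are mine:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def hardness_target(logits, labels):
    # Target hardness = 1 - p_c: one minus the classifier's softmax
    # probability for the ground-truth class of each example.
    p_c = softmax(logits)[np.arange(len(labels)), labels]
    return 1.0 - p_c

def hp_loss(s_pred, logits, labels, eps=1e-7):
    # Binary cross-entropy between the HP-Net's predicted hardness s
    # and the target 1 - p_c, averaged over the batch.
    t = hardness_target(logits, labels)
    s = np.clip(s_pred, eps, 1.0 - eps)
    return float(-np.mean(t * np.log(s) + (1.0 - t) * np.log(1.0 - s)))
```

When the classifier is confident and correct, p_c is close to 1, so the target hardness drops toward 0 and the HP-Net is pushed to predict "easy".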
  28. Classifier Loss weighted by hardness: weighted cross-entropy — harder examples (larger s) count for more, while easy samples (low s) are essentially ignored.
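A sketch of that weighting, assuming (as this slide suggests) that each example's cross-entropy is scaled by the HP-Net's hardness score; the normalization by the sum of weights is my choice, not necessarily the paper's:

```python
import numpy as np

def weighted_ce(logits, labels, hardness, eps=1e-12):
    # Per-example cross-entropy scaled by the HP-Net's hardness score s:
    # harder examples (larger s) contribute more, easy ones (low s) less.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(labels)), labels] + eps)
    return float(np.sum(hardness * ce) / (np.sum(hardness) + eps))
```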
  29. Classifier HP-Net Training
    1. train classifier F and HP-Net S jointly on training set D
    2. run S on D and eliminate hard examples, to create realistic training set D′
    3. learn realistic classifier F′ on D′, with S fixed
    4. output pair S, F′
    5. GOTO 1
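Step 2 above — carving the realistic training set D′ out of D — can be sketched as a ranking-and-filtering step. The `keep_fraction` knob is illustrative; the paper's actual rejection rule may differ:

```python
import numpy as np

def realistic_subset(hardness_scores, keep_fraction=0.9):
    # Run S on D, rank examples by predicted hardness, and drop the
    # hardest ones to form the realistic training set D'.
    n_keep = int(len(hardness_scores) * keep_fraction)
    order = np.argsort(hardness_scores)   # easiest first
    return np.sort(order[:n_keep])        # indices into D, in order
```

For example, `realistic_subset(np.array([0.9, 0.1, 0.5, 0.2]), keep_fraction=0.5)` keeps the two easiest examples, indices 1 and 3.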
  30. Can’t we just use confidence scores?
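The obvious baseline this slide asks about is to use the classifier's own maximum softmax probability as confidence, and treat low confidence as hardness — no second network needed. A minimal sketch of that baseline (not the paper's method):

```python
import numpy as np

def confidence_hardness(logits):
    # Baseline hardness signal: 1 minus the max softmax probability.
    # The paper argues that a separately trained HP-Net beats this.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return 1.0 - p.max(axis=1)
```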

  32. Hardness progression during training

  33. Do we need two separate models?

  34. Classifier + HP-Net

  36. Do we need to fine tune?

  37. C - normal classifier
    F - realistic predictor without fine-tuning (just rejection)
    F′ - realistic predictor, fine-tuned on samples accepted by HP-Net
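At test time, the (S, F′) pair behaves roughly like this sketch; the threshold `tau` is illustrative, not a value from the paper:

```python
import numpy as np

def predict_with_rejection(class_probs, hardness, tau=0.5):
    # The realistic predictor: if the HP-Net scores an input as too
    # hard (s > tau), reject it (return None) instead of guessing;
    # otherwise return the fine-tuned classifier's argmax prediction.
    return [None if s > tau else int(np.argmax(p))
            for p, s in zip(class_probs, hardness)]
```

For example, with probabilities `[[0.9, 0.1], [0.2, 0.8]]` and hardness scores `[0.2, 0.9]`, the first input is classified as class 0 and the second is rejected.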
  39. Conclusions
    • There are times when it's OK to skip hard samples
    • ...and times when it's BEST to reject hard samples
    • The paper introduces a GAN-like architecture to train any classifier with its own hardness predictor
    • Training with a hardness predictor improves accuracy
    • HP-Net should be trained jointly with the classifier, but in an alternating order
    • HP-Net solves a different problem from the classifier, so it should be a separate model
    • ...but the best results come when the two architectures are the same
  40. (The slide shows the Japanese Wikipedia entry for 2001: A Space Odyssey.)

    2001: A Space Odyssey is a 1968 epic science fiction film produced and directed by Stanley Kubrick. The screenplay was written by Kubrick and Arthur C. Clarke, and was inspired by Clarke's short story "The Sentinel". A novel also called 2001: A Space Odyssey, written concurrently with the screenplay, was published soon after the film was released.
  41. It’s the movie with the mysterious black block, and classical

    music in space.
  42. Spaceships don’t make a “Whoosh!” sound, there’s classical music instead.

  43. Frank and Dave don’t trust HAL. HAL is a realistic predictor and doesn’t always follow orders.
  44. Open the pod bay doors, HAL!

  45. I’m sorry, Dave. I’m afraid I can’t do that.

    HAL kills Frank.
  46. Dave has no choice but to deactivate HAL. Daisy… Daisy…

  47. Dedicated to Douglas Rain (March 13, 1928 – November 11,

    2018) known as the Voice of HAL