20180215_MLKitchen7_YoheiKIKUTA

yoppe
February 15, 2018

Transcript

  1. NIPS 2017 report. Yohei KIKUTA. resume (in Japanese): https://github.com/yoheikikuta/resume, twitter: @yohei_kikuta. 20180215 ML Kitchen #7

  2. What is NIPS? Neural Information Processing Systems (NIPS), which originally started as a meeting on neural networks, is one of the top conferences for theoretical ML.

  3. Statistics of NIPS 2017

  4. Plot: # of registrations by year

  5. Plot: log10(# of registrations) by year

  6. # of submitted papers: 3,240

  7. Plot: research topics of submissions (e.g. Algorithms, Deep Learning)

  8. # of accepted papers: 679

  9. acceptance rate: 21% (papers not posted online: 15%; papers posted online: 29%)

  10. JSAI top conference reporters. I attended NIPS as one of the reporters.

  11. JSAI top conference reporters. I attended NIPS as one of the reporters. It’s me.

  12. Views of the conference

  13. Invited talks

  14. Sponsors

  15. Exhibitions

  16. Poster sessions

  17. Interesting topics

  18. Trends: GANs; theoretical understanding of DL optimization; new directions of DL; interpretability of ML; incomplete-information games; deep reinforcement learning; Bayesian deep learning; …

  19. GAN: convergence around equilibrium points. Convergence analysis of GANs: analysis near equilibrium points; ODE analysis of gradient flows; relation to the eigenvalues of the Jacobian; double backprop regularization (see the sketch below). ref: https://arxiv.org/abs/1705.10461, https://arxiv.org/abs/1706.04156

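    A minimal numerical sketch of the eigenvalue argument on a toy bilinear game (min_x max_y xy), not the models from the cited papers: the Jacobian of the gradient vector field at the equilibrium has purely imaginary eigenvalues, so simultaneous gradient steps rotate around the equilibrium and spiral outward.

        import numpy as np

        # Toy zero-sum game f(x, y) = x * y. Simultaneous gradient
        # descent/ascent follows the vector field v = (-y, +x), whose
        # Jacobian at the equilibrium (0, 0) is a pure rotation.
        J = np.array([[0.0, -1.0],
                      [1.0,  0.0]])
        print("eigenvalues:", np.linalg.eigvals(J))  # ±1j, zero real part

        # Discrete updates with step eps spiral outward, since
        # |1 ± 1j * eps| = sqrt(1 + eps**2) > 1.
        x, y, eps = 1.0, 1.0, 0.1
        for _ in range(100):
            x, y = x - eps * y, y + eps * x
        print("after 100 steps:", (x, y))  # farther from (0, 0) than at start
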
  20. GAN: an example of application. Toward practical applications: combinations of models and a conditional “constraint” (see the sketch below). ref: https://arxiv.org/abs/1705.09368

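    A generic sketch of the conditioning idea, not the architecture of the cited paper: the generator receives the condition alongside the noise, so its outputs can be steered by a “constraint” such as a class label or a pose. All weights and sizes below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(size=(16 + 4, 32))  # hypothetical generator weights

        def generator(z: np.ndarray, c: np.ndarray) -> np.ndarray:
            # Concatenate noise z with condition c; the condition acts
            # as the constraint that steers what gets generated.
            return np.tanh(np.concatenate([z, c]) @ W)

        z = rng.normal(size=16)       # latent noise
        c = np.eye(4)[2]              # one-hot condition (e.g. a class)
        print(generator(z, c).shape)  # (32,) generated sample
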
  21. Understandings of DL: generalization. SGD optimization is controlled by a “noise scale” set by ε (learning rate), N (training set size), and B (batch size); see the note below. (Plot: training error.) ref: https://arxiv.org/abs/1705.08741, https://arxiv.org/abs/1710.06451, https://arxiv.org/abs/1706.02677

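    For reference, the noise scale in arXiv:1710.06451 is g = ε(N/B − 1) ≈ εN/B for B ≪ N; a trivial sketch of how the linear-scaling rule of arXiv:1706.02677 keeps it fixed:

        # Noise scale g = eps * (N / B - 1) ~ eps * N / B for B << N
        # (arXiv:1710.06451). Scaling the learning rate linearly with the
        # batch size leaves g, and empirically generalization, unchanged.
        def noise_scale(eps: float, N: int, B: int) -> float:
            return eps * (N / B - 1)

        print(noise_scale(0.1, 50_000, 128))   # ≈ 39.0
        print(noise_scale(0.8, 50_000, 1024))  # ≈ 38.3, same scale at 8x batch
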
  22. New DL architecture: CapsNet. A kind of vector generalization of neurons: a new look at object recognition; a routing mechanism to capture relations between capsules; robust to overlapping and affine transformations (see the sketch below). (Plot: capsule dim.) ref: https://arxiv.org/abs/1710.09829, https://www.oreilly.com/ideas/introducing-capsule-networks

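    A minimal sketch of the squash nonlinearity from arXiv:1710.09829 (routing-by-agreement omitted): a capsule outputs a vector whose direction encodes pose and whose length, squashed below 1, encodes the probability that an entity exists.

        import numpy as np

        def squash(s: np.ndarray, axis: int = -1, eps: float = 1e-9) -> np.ndarray:
            # Shrink vector s to length < 1 while keeping its direction.
            sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
            return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

        v = squash(np.array([3.0, 4.0]))  # input norm 5
        print(v, np.linalg.norm(v))       # same direction, norm 25/26 ≈ 0.96
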
  23. New DL architecture: Deep Sets. A model that is invariant under permutations of its inputs (text input, image input); digit-summation experiments (see the sketch below). ref: https://arxiv.org/abs/1703.06114

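    A minimal sketch of the Deep Sets structure f(X) = ρ(Σ_x φ(x)) with hypothetical toy φ and ρ (the paper uses learned networks): sum pooling makes the output independent of input order.

        import numpy as np

        rng = np.random.default_rng(0)
        W_phi = rng.normal(size=(2, 8))   # per-element encoder phi (toy)
        W_rho = rng.normal(size=(8, 1))   # readout rho on the pooled vector

        def f(X: np.ndarray) -> np.ndarray:
            pooled = np.tanh(X @ W_phi).sum(axis=0)  # order-free pooling
            return pooled @ W_rho

        X = rng.normal(size=(5, 2))       # a set of 5 two-dim elements
        print(f(X), f(X[::-1]))           # identical for any permutation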