Slide 1

NIPS 2017 report
Yohei KIKUTA
resume: https://github.com/yoheikikuta/resume
twitter: @yohei_kikuta (in Japanese)
2018-02-15, ML Kitchen #7

Slide 2

What is NIPS?
Neural Information Processing Systems (NIPS)
- originally started as a meeting on neural networks
- is one of the top conferences for theoretical ML

Slide 3

Statistics of NIPS 2017

Slide 4

Figure: # of registrations vs. year

Slide 5

Figure: log10(# of registrations) vs. year

Slide 6

# of submitted papers: 3,240

Slide 7

Figure: research topics of submissions (labeled categories include Algorithms and Deep Learning)

Slide 8

# of accepted papers: 679

Slide 9

Acceptance rate: 21%
- papers not posted online: 15%
- papers posted online: 29%
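(Consistency check, my own arithmetic rather than the slide's: 679 accepted / 3,240 submitted ≈ 0.21, matching the 21% overall rate above.)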

Slide 10

JSAI top conference reporters
I attended NIPS as one of the reporters.

Slide 11

JSAI top conference reporters
I attended NIPS as one of the reporters.
It's me.

Slide 12

Views of the conference

Slide 13

Invited talks

Slide 14

Sponsors

Slide 15

Exhibitions

Slide 16

Poster sessions

Slide 17

Interesting topics

Slide 18

Trends
- GANs
- Theoretical understanding of DL optimization
- New directions for DL
- Interpretability of ML
- Incomplete-information games
- Deep reinforcement learning
- Bayesian deep learning
- …

Slide 19

GAN: convergence around equilibrium points
Convergence analysis of GANs
- analysis near equilibrium points
- ODE analysis of gradient flows
- relation to eigenvalues of the Jacobian
- double-backprop regularization
ref: https://arxiv.org/abs/1705.10461, https://arxiv.org/abs/1706.04156
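A short note on the "ODE analysis" bullet, as I read the first reference (arXiv:1705.10461); the notation below is mine, not the slide's. Writing the simultaneous generator/discriminator updates as a gradient flow dθ/dt = v(θ), where θ stacks both players' parameters and v their (signed) loss gradients, an equilibrium θ* with v(θ*) = 0 is locally attractive when every eigenvalue of the Jacobian ∂v/∂θ at θ* has negative real part; eigenvalues with large imaginary parts produce the oscillations that a double-backprop (gradient-penalty style) regularizer is meant to damp.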

Slide 20

GAN: an example of application
Toward practical applications: combining models with a conditional "constraint" (see the sketch below).
ref: https://arxiv.org/abs/1705.09368
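Since the slide only gestures at the idea, here is a minimal hypothetical sketch of what a conditional "constraint" can look like in code. This is generic conditional-GAN-style conditioning, not the method of the referenced paper, and all weights, names, and shapes are illustrative assumptions (the model is untrained).

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, untrained conditional generator: the class label is embedded and
# concatenated with the noise vector before being mapped to an image.
W_embed = rng.standard_normal((10, 16))        # label embedding (10 classes -> dim 16)
W_gen = rng.standard_normal((64 + 16, 784))    # noise + label -> flattened 28x28 image

def generate(z, label):
    cond = W_embed[label]                      # embed the conditioning label
    h = np.concatenate([z, cond])              # combine noise and condition
    return np.tanh(h @ W_gen)                  # fake image with pixels in (-1, 1)

img = generate(rng.standard_normal(64), label=7)
print(img.shape)                               # (784,)

The point is only that the label enters the generator as an extra input, constraining which kind of sample it produces.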

Slide 21

Understanding of DL: generalization
SGD optimization is controlled by the "noise scale" (ε: learning rate, N: training set size, B: batch size); see the note below.
Figure: training error
ref: https://arxiv.org/abs/1705.08741, https://arxiv.org/abs/1710.06451, https://arxiv.org/abs/1706.02677
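The "noise scale" here is, as far as I can tell, the quantity defined in the second reference (Smith & Le, arXiv:1710.06451):

g = ε (N/B − 1) ≈ ε N / B   (for B ≪ N)

so increasing the batch size B or decreasing the learning rate ε lowers the SGD noise, which these papers connect to the generalization gap between small-batch and large-batch training.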

Slide 22

New DL architecture: CapsNet
A kind of vector generalization of neurons.
- a new look at object recognition
- routing mechanism to capture relations between capsules
- robust to overlapping and affine transformations
Figure: capsule dimensions
ref: https://arxiv.org/abs/1710.09829, https://www.oreilly.com/ideas/introducing-capsule-networks
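As a concrete illustration of the "vector generalization of neurons", here is a minimal numpy sketch of the squashing nonlinearity from the referenced paper (arXiv:1710.09829); the batch size and capsule dimension below are arbitrary assumptions.

import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # v = (|s|^2 / (1 + |s|^2)) * (s / |s|): short vectors shrink toward zero
    # length, long vectors approach (but never exceed) unit length, so a
    # capsule's length can be read as the probability that its entity exists.
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

# Example: 3 capsules, each an 8-dimensional activity vector (assumed shapes).
s = np.random.randn(3, 8)
v = squash(s)
print(np.linalg.norm(v, axis=-1))  # all lengths lie strictly between 0 and 1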

Slide 23

New DL architecture: Deep Sets
A model that is invariant under permutations of its input set (see the sketch below).
Figure: digit summation experiments (text input and image input)
ref: https://arxiv.org/abs/1703.06114
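Permutation invariance comes from the form f(X) = ρ(Σ_{x∈X} φ(x)) used in the referenced paper; below is a minimal, untrained numpy sketch of that structure for a digit-summation-style input, where all weights, shapes, and the toy set are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Untrained illustrative weights: phi embeds each set element, rho reads out a scalar.
W_phi = rng.standard_normal((10, 32))   # one-hot digit (dim 10) -> embedding (dim 32)
W_rho = rng.standard_normal((32, 1))    # pooled embedding -> predicted "sum"

def deep_set(X):
    # f(X) = rho( sum_x phi(x) ): sum pooling makes the output independent of
    # the order in which the set elements are presented.
    H = np.tanh(X @ W_phi)               # phi applied to every element
    return H.sum(axis=0) @ W_rho         # permutation-invariant pooling, then rho

X = np.eye(10)[[3, 1, 4]]                # the set of digits {3, 1, 4} as one-hot rows
print(np.allclose(deep_set(X), deep_set(X[::-1])))  # True: order does not matter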