
Natural Gradient and Related Topics

Masanari Kimura
November 20, 2020


A summary of topics related to the natural gradient.



Transcript

1. Stochastic Gradient Descent (SGD): $\theta_{t+1} = \theta_t - \eta_t \nabla L(\theta_t)$, where $L : \Theta \to \mathbb{R}$ is a differentiable loss function. (Image: Jeremy Howard / @fast.ai)
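For reference, a minimal NumPy sketch of the SGD update above; the quadratic toy loss and the learning-rate value are assumptions for illustration, not from the deck.

import numpy as np

def sgd(grad_fn, theta0, lr=0.1, n_steps=100):
    """Plain gradient descent: theta <- theta - lr * grad L(theta)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        theta = theta - lr * grad_fn(theta)
    return theta

# Toy quadratic loss L(theta) = 0.5 * ||theta||^2 (assumed for illustration).
grad_fn = lambda theta: theta
print(sgd(grad_fn, [3.0, -2.0]))  # approaches the minimizer at the origin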
2. Natural Gradient Descent (NGD), proposed by (Amari 1998): $\theta_{t+1} = \theta_t - \eta_t G^{-1}(\theta_t)\, \nabla L(\theta_t)$, where $G^{-1}(\theta)$ is the inverse of the Riemannian metric $G = (g_{ij})$. The natural gradient is proven to be the steepest-descent direction on a Riemannian manifold.
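A minimal sketch of one natural-gradient step, assuming the metric $G(\theta)$ is available as an explicit positive-definite matrix; solving a linear system avoids forming $G^{-1}$ explicitly. The metric and gradient below are illustrative assumptions.

import numpy as np

def natural_gradient_step(theta, grad, metric_fn, lr=0.1):
    """One NGD step: theta - lr * G(theta)^{-1} grad L(theta)."""
    G = metric_fn(theta)
    nat_grad = np.linalg.solve(G, grad)   # solve G x = grad instead of inverting G
    return theta - lr * nat_grad

# Illustrative positive-definite metric and gradient (assumptions, not from the deck).
metric_fn = lambda theta: np.diag(1.0 + theta**2)
theta = np.array([1.0, -2.0])
grad = theta                              # gradient of 0.5 * ||theta||^2
print(natural_gradient_step(theta, grad, metric_fn))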
3. Derivation of the natural gradient I. On a Riemannian manifold, the squared distance between two sufficiently close points $\theta$ and $\theta + d\theta$ is $|d\theta|^2 = \sum_{ij} g_{ij}\, d\theta_i\, d\theta_j$, where $G = (g_{ij})$ is the Riemannian metric tensor. The goal is to find the direction of $d\theta$ that changes $L(\theta)$ the most, with the step size held fixed regardless of direction: $\sum_{ij} g_{ij}\, d\theta_i\, d\theta_j = \varepsilon^2$, i.e. $d\theta = \varepsilon a$ with $\sum_{ij} g_{ij}\, a_i a_j = 1$.
4. Derivation of the natural gradient II. The steepest direction of $L$ is obtained by solving the maximization problem $\max_a\; L(\theta + \varepsilon a) - L(\theta) = \varepsilon\, \nabla L(\theta) \cdot a$ subject to $\sum_{ij} g_{ij}\, a_i a_j = 1$. By the method of Lagrange multipliers, $\tfrac{\partial}{\partial a_i}\big(\nabla L(\theta) \cdot a - \lambda \sum_{ij} g_{ij}\, a_i a_j\big) = 0$ gives $a \propto G^{-1} \nabla L(\theta)$. The right-hand side $G^{-1} \nabla L(\theta)$ is called the natural gradient of $L$, and it is the steepest direction of $L$.
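A small numerical sanity check of this derivation (the metric, gradient, and sample count are arbitrary assumptions): among directions $a$ with $a^\top G a = 1$, the rescaled $G^{-1}\nabla L$ should achieve the largest value of $\nabla L \cdot a$.

import numpy as np

rng = np.random.default_rng(0)

# Random positive-definite metric G and gradient (purely illustrative).
A = rng.normal(size=(3, 3))
G = A @ A.T + 3.0 * np.eye(3)
grad = rng.normal(size=3)

# Candidate direction: G^{-1} grad, rescaled so that a^T G a = 1.
a_star = np.linalg.solve(G, grad)
a_star /= np.sqrt(a_star @ G @ a_star)

# Compare against many random directions satisfying the same constraint.
dirs = rng.normal(size=(100_000, 3))
dirs /= np.sqrt(np.einsum("ij,jk,ik->i", dirs, G, dirs))[:, None]
print(grad @ a_star, ">=", (dirs @ grad).max())  # a_star attains the maximum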
5. Relation between the natural gradient and the ordinary gradient. The natural gradient is a contravariant vector, $\tilde{\nabla}^i L = \sum_j g^{ij}\, \partial_j L(\theta)$, whereas the ordinary gradient is a covariant vector, $\nabla_j L = \partial_j L(\theta)$. The two coincide if and only if $g_{ij} = \delta_{ij}$, which is the Euclidean case.
6. Relation between Newton's method and natural gradient descent. Newton's method uses the Hessian of $f(\theta)$ to solve $\nabla f(\theta) = 0$ and thereby find the minimum of $f(\theta)$: $\theta_{t+1} = \theta_t - \eta_t H^{-1}(\theta_t)\, \nabla f(\theta_t)$, where $H = \nabla\nabla f(\theta)$. Natural gradient descent replaces $H(\theta)$ with $G(\theta)$: $\theta_{t+1} = \theta_t - \eta_t G^{-1}(\theta_t)\, \nabla f(\theta_t)$.
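To make the comparison concrete, a hedged sketch under an assumed toy setup (logistic regression with the log loss, where the Hessian and the Fisher information happen to coincide, so the Newton step and the NGD step agree exactly):

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
theta_true = np.array([0.5, -1.0, 2.0])
y = (rng.uniform(size=200) < 1.0 / (1.0 + np.exp(-X @ theta_true))).astype(float)

w = np.zeros(3)                               # current iterate
p = 1.0 / (1.0 + np.exp(-X @ w))              # model probabilities
grad = X.T @ (p - y)                          # gradient of the negative log-likelihood
H = X.T @ (X * (p * (1 - p))[:, None])        # Hessian of the log loss
F = X.T @ (X * (p * (1 - p))[:, None])        # Fisher information: identical here

print(np.allclose(np.linalg.solve(H, grad),   # Newton step
                  np.linalg.solve(F, grad)))  # natural-gradient step -> True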
7. Natural gradient descent and the vanishing-gradient problem. Theorem: even when the derivative of the nonlinear activation function is small, the natural gradient does not vanish. The squared magnitude of the natural gradient is given by $\|\tilde{\nabla}\ell\|_G^2 = \mathbb{E}_{x,y;\theta}\big[\nabla\ell^\top G^{-1} \nabla\ell\big]$, and setting $\theta = \theta_0$, it can be approximated around the optimum $\theta_0$ as $\|\tilde{\nabla}\ell\|_G^2 \approx n$, where $n$ is the dimension of $\theta$.
8. Mirror Descent, proposed by (Nemirovski and Yudin 1985). The idea is to rewrite gradient descent as an $\ell_2$-penalized optimization problem and then induce a non-Euclidean geometry by choosing a proximity function other than the squared $\ell_2$ norm: $\theta_{t+1} = \arg\min_{\theta \in \Theta}\big\{ \langle \theta, \nabla L(\theta_t) \rangle + \tfrac{1}{\eta_t}\, \Phi(\theta, \theta_t) \big\}$. Choosing the proximity function $\Phi(\theta, \theta') = \tfrac{1}{2} \|\theta - \theta'\|_2^2$ recovers ordinary gradient descent.
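A minimal sketch of the mirror-descent update in its mirror-map form, with two illustrative choices of mirror map; the function names and the negative-entropy example are assumptions, not from the deck.

import numpy as np

def mirror_descent_step(theta, grad, mirror, mirror_inv, lr=0.1):
    """MD step in mirror form: grad psi(theta_new) = grad psi(theta) - lr * grad L(theta)."""
    return mirror_inv(mirror(theta) - lr * grad)

grad = np.array([0.5, -1.0])

# Squared-l2 mirror map: grad psi is the identity, so the step is plain gradient descent.
identity = lambda x: x
print(mirror_descent_step(np.array([1.0, 2.0]), grad, identity, identity))

# Negative-entropy mirror map on the simplex (illustrative): grad psi(p) = log p,
# inverted with a renormalized exponential -> an exponentiated-gradient update.
mirror, mirror_inv = np.log, lambda z: np.exp(z) / np.exp(z).sum()
print(mirror_descent_step(np.array([0.3, 0.7]), grad, mirror, mirror_inv))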
9. Mirror descent and the Bregman divergence. The Bregman divergence induced by a twice-differentiable, strictly convex function $\psi : \Theta \to \mathbb{R}$ is $B_\psi[\theta \,\|\, \theta'] = \psi(\theta) - \psi(\theta') - \langle \nabla\psi(\theta'),\, \theta - \theta' \rangle$. Choosing the negative entropy $\psi(\theta) = \sum_i \theta_i \log \theta_i$ yields the KL divergence. With the proximity function $\Phi(\cdot,\cdot) = B_\psi[\cdot \,\|\, \cdot]$, the mirror-descent update becomes $\theta_{t+1} = \arg\min_{\theta \in \Theta}\big\{ \langle \theta, \nabla L(\theta_t) \rangle + \tfrac{1}{\eta_t}\, B_\psi[\theta \,\|\, \theta_t] \big\}$.
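A quick numerical check (with illustrative helper names) that the Bregman divergence of the negative entropy on the probability simplex reproduces the KL divergence:

import numpy as np

def bregman(psi, grad_psi, x, y):
    """B_psi(x || y) = psi(x) - psi(y) - <grad psi(y), x - y>."""
    return psi(x) - psi(y) - grad_psi(y) @ (x - y)

neg_entropy = lambda p: np.sum(p * np.log(p))
grad_neg_entropy = lambda p: np.log(p) + 1.0

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])
print(bregman(neg_entropy, grad_neg_entropy, p, q),  # Bregman divergence
      np.sum(p * np.log(p / q)))                     # KL(p || q): same value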
10. Equivalence of natural gradient descent and mirror descent. Mirror descent with a Bregman divergence as the proximity function is equivalent to natural gradient descent (Raskutti and Mukherjee 2015):
$\theta_{t+1} = \arg\min_{\theta \in \Theta}\big\{ \langle \theta, \nabla L(\theta_t) \rangle + \tfrac{1}{\eta_t}\, B_\psi[\theta \,\|\, \theta_t] \big\}$
$\Leftrightarrow\; \nabla\psi(\theta_{t+1}) = \nabla\psi(\theta_t) - \eta_t\, \nabla_\theta L(\theta_t)$
$\Leftrightarrow\; \mu_{t+1} = \mu_t - \eta_t\, \nabla_\theta L(\theta_t)$, with the dual variable $\mu = \nabla\psi(\theta)$
$\Leftrightarrow\; \mu_{t+1} = \mu_t - \eta_t\, \big(\nabla^2\psi^*(\mu_t)\big)^{-1} \nabla_\mu L(\mu_t)$, i.e. a natural-gradient step in the dual coordinates.
11. Adaptive computation of the natural gradient. The update is $\theta_{t+1} = \theta_t - \eta_t\, \hat{G}_t^{-1} \nabla\ell$. Computing the inverse via a Taylor expansion yields the following recursion: $\hat{G}_{t+1}^{-1} = (1 + \varepsilon_t)\, \hat{G}_t^{-1} - \varepsilon_t\, \hat{G}_t^{-1}\, \nabla\ell(x_t, y_t; \theta_t)\, \nabla\ell(x_t, y_t; \theta_t)^\top \hat{G}_t^{-1}$, where $\varepsilon_t$ is a step-dependent constant.
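A sketch of this recursion (following the adaptive scheme of Amari, Park, and Fukumizu 2000; the constant $\varepsilon$, the synthetic gradient stream, and the identity initialization are assumptions):

import numpy as np

def adaptive_inverse_fisher(grads, eps=0.01):
    """Recursive estimate of G^{-1} from per-sample gradients:
    G_{t+1}^{-1} = (1 + eps) G_t^{-1} - eps * G_t^{-1} g g^T G_t^{-1}."""
    G_inv = np.eye(grads.shape[1])
    for g in grads:
        Gg = G_inv @ g
        G_inv = (1.0 + eps) * G_inv - eps * np.outer(Gg, Gg)
    return G_inv

rng = np.random.default_rng(0)
grads = rng.normal(size=(5000, 3)) * np.array([1.0, 2.0, 0.5])
print(adaptive_inverse_fisher(grads))     # roughly inv(E[g g^T]) = diag(1, 0.25, 4)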
12. Computational cost of the natural gradient. Natural gradient descent, $\theta_{t+1} = \theta_t - \eta_t G^{-1}(\theta_t)\, \nabla L(\theta_t)$, requires inverting the Fisher information matrix, which is expensive: in general it costs $O(n^3)$, and this cost is incurred at every iteration, which is prohibitive.
13. Approximating the Fisher information matrix.
Fisher information: $F(\theta) = \sum_n \mathbb{E}_{p_\theta(y \mid x_n)}\big[\nabla\log p_\theta(y \mid x_n)\, \nabla\log p_\theta(y \mid x_n)^\top\big]$.
Empirical Fisher information: $\tilde{F}(\theta) = \sum_n \nabla\log p_\theta(y_n \mid x_n)\, \nabla\log p_\theta(y_n \mid x_n)^\top$.
• KFAC (Martens and Grosse 2015): a computationally efficient structured approximation.
• Adam (Kingma and Ba 2015): approximates the diagonal of the Fisher information matrix with a moving average of squared gradients.
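A hedged sketch of the Adam-style diagonal idea only (not the full Adam algorithm; momentum is omitted, and the toy loss and hyperparameters are assumptions): keep an exponential moving average of squared gradients as a stand-in for the diagonal of the empirical Fisher, and precondition the update with it.

import numpy as np

def diagonal_fisher_sgd(grad_fn, theta0, lr=0.05, beta=0.999, eps=1e-8, n_steps=500):
    """Precondition the update with a bias-corrected moving average of squared
    gradients, a diagonal stand-in for the empirical Fisher (Adam's second moment)."""
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)
    for t in range(1, n_steps + 1):
        g = grad_fn(theta)
        v = beta * v + (1.0 - beta) * g**2
        v_hat = v / (1.0 - beta**t)               # bias correction
        theta = theta - lr * g / (np.sqrt(v_hat) + eps)
    return theta

# Badly scaled quadratic (assumed): L(x, y) = 0.5 * (100 x^2 + y^2).
grad_fn = lambda th: np.array([100.0 * th[0], th[1]])
print(diagonal_fisher_sgd(grad_fn, [1.0, 1.0]))   # both coordinates shrink at similar rates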
14. Limits of approximating the Fisher information matrix (Kunstner et al. 2019):
• the empirical Fisher does not capture second-order information;
• the conditions under which the empirical Fisher is a sufficiently good approximation rarely hold in practice.
15. States, actions, and rewards. Consider a state space $\mathcal{S}$ and an action space $\mathcal{A}$. At each discrete time step, an action is selected according to the policy $\pi(a \mid s_t)$. Expected reward for a state: $V^\pi(s) = \mathbb{E}_\pi\big[\sum_{t \ge 0} \gamma^t r_{t+1} \mid s_0 = s\big]$; expected reward for a state-action pair: $Q^\pi(s, a) = \mathbb{E}_\pi\big[\sum_{t \ge 0} \gamma^t r_{t+1} \mid s_0 = s,\, a_0 = a\big]$.
16. Expected reward and the Fisher information matrix. When the policy $\pi(a \mid s; \theta)$ is adopted, the expected reward, with discount rate $\gamma$, is $\eta(\theta) = \sum_{s,a} \rho^\pi(s)\, \pi(a \mid s; \theta)\, R(s, a)$, where $\rho^\pi$ is the (discounted) state distribution. The Fisher information matrix, in turn, is the expectation over states of the per-state Fisher information $F_s(\theta)$: $F(\theta) = \sum_s \rho^\pi(s)\, F_s(\theta)$.
17. Improving efficiency by approximating the state-action value function. Since the natural gradient policy update is computationally expensive, approximate $Q^\pi$ as $Q^\pi(s, a) \approx f^\pi(s, a; w) = w^\top \psi(s, a)$. Taking the basis functions $\psi(s, a) = \nabla_\theta \log \pi(a \mid s; \theta)$, and with $w$ chosen to minimize the squared approximation error, we get $\nabla_\theta \eta(\theta) = \sum_{s,a} \rho^\pi(s)\, \nabla_\theta \pi(a \mid s; \theta)\, Q^\pi(s, a) = F(\theta)\, w$, so the natural gradient reduces to $\tilde{\nabla}_\theta \eta(\theta) = F(\theta)^{-1} \nabla_\theta \eta(\theta) = w$.
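A small numerical check of this statement under an assumed toy setup (a softmax policy over a few discrete actions in a single state): fit $Q$ with the compatible features $\psi = \nabla_\theta \log\pi$ by weighted least squares and compare $F^{-1}\nabla_\theta\eta$ (pseudoinverse, since $F$ is singular for a softmax policy) with the fitted weights $w$.

import numpy as np

rng = np.random.default_rng(0)
K = 4                                   # number of actions (single state, assumed)
theta = rng.normal(size=K)              # softmax policy parameters
pi = np.exp(theta) / np.exp(theta).sum()
Q = rng.normal(size=K)                  # arbitrary action values

# Compatible features psi(a) = grad_theta log pi(a) = e_a - pi (rows of Psi).
Psi = np.eye(K) - pi

# Policy gradient and Fisher matrix under the policy distribution.
grad_eta = Psi.T @ (pi * Q)             # sum_a pi(a) psi(a) Q(a)
F = Psi.T @ (pi[:, None] * Psi)         # sum_a pi(a) psi(a) psi(a)^T

# w: weighted least-squares fit of Q onto the compatible features.
w, *_ = np.linalg.lstsq(np.sqrt(pi)[:, None] * Psi, np.sqrt(pi) * Q, rcond=None)

# Kakade (2002): the natural gradient F^{-1} grad_eta equals w.
print(np.linalg.pinv(F) @ grad_eta, w)  # the two vectors match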
18. Applications of the natural gradient to deep learning:
• Topmoumoute Online Natural Gradient Algorithm (Roux et al. 2007)
• Revisiting Natural Gradient for Deep Networks (Pascanu and Bengio 2013)
• Exact Natural Gradient in Deep Linear Networks and Its Application to the Nonlinear Case (Bernacchia 2018)
• Fisher Information and Natural Gradient Learning in Random Deep Networks (Amari et al. 2019)
19. Applications of the natural gradient to reinforcement learning:
• A Natural Policy Gradient (Kakade 2002)
• Natural Gradient Deep Q-Learning (Knight et al. 2018)
• Compatible Natural Gradient Policy Search (Pajarinen et al. 2019)
20. Natural Gradient Boosting: boosting based on the natural gradient (Duan et al. 2019). https://stanfordmlgroup.github.io/projects/ngboost/
21. References
• Amari, Shun-Ichi. 1998. "Natural Gradient Works Efficiently in Learning." Neural Computation 10 (2): 251–76.
• Blair, Charles. 1985. "Problem Complexity and Method Efficiency in Optimization (A. S. Nemirovsky and D. B. Yudin)." SIAM Review 27 (2): 264.
• Mertikopoulos, Panayotis, and Mathias Staudigl. 2018. "On the Convergence of Gradient-Like Flows with Noisy Gradient Input." SIAM Journal on Optimization 28 (1): 163–97.
• Raskutti, G., and S. Mukherjee. 2015. "The Information Geometry of Mirror Descent." IEEE Transactions on Information Theory 61 (3): 1451–57.
• Martens, James, and Roger Grosse. 2015. "Optimizing Neural Networks with Kronecker-Factored Approximate Curvature." In International Conference on Machine Learning, 2408–17. PMLR.
• Kingma, Diederik P., and Jimmy Ba. 2015. "Adam: A Method for Stochastic Optimization." In 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings.
• Kunstner, Frederik, Philipp Hennig, and Lukas Balles. 2019. "Limitations of the Empirical Fisher Approximation for Natural Gradient Descent." In Advances in Neural Information Processing Systems, edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, 32:4156–67. Curran Associates, Inc.
22. References
• Mishkin, Aaron, Frederik Kunstner, Didrik Nielsen, Mark Schmidt, and Mohammad Emtiyaz Khan. 2018. "SLANG: Fast Structured Covariance Approximations for Bayesian Deep Learning with Natural Gradient." In Advances in Neural Information Processing Systems, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 31:6245–55. Curran Associates, Inc.
• Duan, Tony, Anand Avati, Daisy Yi Ding, Khanh K. Thai, Sanjay Basu, Andrew Y. Ng, and Alejandro Schuler. 2019. "NGBoost: Natural Gradient Boosting for Probabilistic Prediction." arXiv [cs.LG]. http://arxiv.org/abs/1910.03225.
• Karakida, Ryo, and Kazuki Osawa. 2020. "Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks." arXiv [stat.ML]. http://arxiv.org/abs/2010.00879.
• Roux, Nicolas, Pierre-Antoine Manzagol, and Yoshua Bengio. 2008. "Topmoumoute Online Natural Gradient Algorithm." In Advances in Neural Information Processing Systems, edited by J. Platt, D. Koller, Y. Singer, and S. Roweis, 20:849–56. Curran Associates, Inc.
• Pascanu, Razvan, and Yoshua Bengio. 2013. "Revisiting Natural Gradient for Deep Networks." arXiv [cs.LG]. http://arxiv.org/abs/1301.3584.
• Bernacchia, Alberto, Mate Lengyel, and Guillaume Hennequin. 2018. "Exact Natural Gradient in Deep Linear Networks and Its Application to the Nonlinear Case." In Advances in Neural Information Processing Systems, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 31:5941–50. Curran Associates, Inc.
• Amari, Shun-Ichi, Ryo Karakida, and Masafumi Oizumi. 2019. "Fisher Information and Natural Gradient Learning in Random Deep Networks." In Proceedings of Machine Learning Research, edited by Kamalika Chaudhuri and Masashi Sugiyama, 89:694–702. PMLR.
• Kakade, Sham M. 2002. "A Natural Policy Gradient." In Advances in Neural Information Processing Systems, edited by T. Dietterich, S. Becker, and Z. Ghahramani, 14:1531–38. MIT Press.
• Knight, Ethan, and Osher Lerner. 2018. "Natural Gradient Deep Q-Learning." arXiv [cs.LG]. http://arxiv.org/abs/1803.07482.
• Pajarinen, Joni, Hong Linh Thai, Riad Akrour, Jan Peters, and Gerhard Neumann. 2019. "Compatible Natural Gradient Policy Search." Machine Learning 108 (8): 1443–66.
23. References
• Amari, S., H. Park, and K. Fukumizu. 2000. "Adaptive Method of Realizing Natural Gradient Learning for Multilayer Perceptrons." Neural Computation 12 (6): 1399–1409.
• Park, H., S. I. Amari, and K. Fukumizu. 2000. "Adaptive Natural Gradient Learning Algorithms for Various Stochastic Models." Neural Networks 13 (7): 755–64.
• Zhao, Junsheng, and Xingjiang Yu. 2015. "Adaptive Natural Gradient Learning Algorithms for Mackey–Glass Chaotic Time Prediction." Neurocomputing 157 (June): 41–45.