Slide 25
References
[Hayakawa and Suzuki, 19] Satoshi Hayakawa and Taiji Suzuki. On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces. arXiv preprint arXiv:1905.09195, 2019.
[Imaizumi and Fukumizu, 19] Masaaki Imaizumi and Kenji Fukumizu. Deep neural networks learn non-smooth functions effectively. Proceedings of Machine Learning Research, volume 89, pages 869–878. PMLR, 2019.
[Petersen and Voigtlaender, 19] Philipp Petersen and Felix Voigtlaender. Optimal approximation of piecewise smooth
functions using deep ReLU neural networks. Neural Networks, 108:296–330, 2018.
[Schmidt-Hieber, 17] Johannes Schmidt-Hieber. Nonparametric regression using deep neural networks with ReLU
activation function. arXiv preprint arXiv:1708.06633, 2017.
[Suzuki, 19] Taiji Suzuki. Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. In International Conference on Learning Representations, 2019.
[Tsybakov, 08] Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer Publishing Company,
Incorporated, 1st edition, 2008. ISBN 0387790519, 9780387790510.
[Yarotsky, 18] Dmitry Yarotsky. Universal approximations of invariant maps by neural networks. arXiv preprint
arXiv:1804.10306, 2018.