Slide 97
References (4)
● [Jaderberg et al., 2017] M. Jaderberg, V. Dalibard, S. Osindero, W. M. Czarnecki, J. Donahue, A. Razavi, O. Vinyals, T. Green, I. Dunning, K. Simonyan, et al., “Population based training of neural networks,” arXiv preprint arXiv:1711.09846, 2017.
● [Swersky et al., 2013] K. Swersky, J. Snoek, and R. Adams, “Multi-Task Bayesian Optimization,” in Advances in Neural Information Processing Systems, pp. 2004–2012, 2013.
● [Akiba et al., 2019] T. Akiba, S. Sano, T. Yanase, T. Ohta, and M. Koyama, “Optuna: A next-generation hyperparameter optimization framework,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019.
● [Li and Talwalkar, 2019] L. Li and A. Talwalkar, “Random Search and Reproducibility for Neural Architecture Search,” arXiv preprint arXiv:1902.07638, 2019.
● [Sciuto et al., 2019] C. Sciuto, K. Yu, M. Jaggi, C. Musat, and M. Salzmann, “Evaluating the search phase of neural architecture search,” arXiv preprint arXiv:1902.08142, 2019.
● [Klein and Hutter, 2019] A. Klein and F. Hutter, “Tabular benchmarks for joint architecture and hyperparameter optimization,” arXiv preprint arXiv:1905.04970, 2019.
● [Ying et al., 2019] C. Ying, A. Klein, E. Christiansen, E. Real, K. Murphy, and F. Hutter, “NAS-Bench-101: Towards reproducible neural architecture search,” in Proceedings of the International Conference on Machine Learning (ICML), volume 97 of Proceedings of Machine Learning Research, pp. 7105–7114, 2019.