Slide 12
References
● [鈴木ら 2020] 鈴木正敏, 鈴木潤, 松田耕史, 西田京介, 井之上直也. “JAQKET: Construction of a Japanese QA Dataset Based on Quizzes” (JAQKET:クイズを題材にした日本語QAデータセットの構築). In Proceedings of the 26th Annual Meeting of the Association for Natural Language Processing (NLP2020), 2020.
● [Devlin et al. 2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL, volume 1, pages 4171–4186, 2019.
● [Sun et al. 2019] Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. How to Fine-Tune BERT for Text Classification? arXiv preprint arXiv:1905.05583 [cs.CL], 2019. http://arxiv.org/abs/1905.05583
● [Erickson et al. 2020] Nick Erickson, Jonas Mueller, Alexander Shirkov, Hang Zhang, Pedro Larroy, Mu Li, and Alexander Smola. AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data. arXiv preprint arXiv:2003.06505 [cs.LG], 2020. https://arxiv.org/abs/2003.06505