Slide 35
References
[1] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805, 2018.
[2] Fujii et al.: An Analysis of the Characteristics of Contract Text and a Trial of Its Automatic Classification: Toward Applications in Legal Tech. In Proc. of the 6th Natural Language Processing Symposium, pp. 1-6, 2019 (in Japanese).
[3] Yoshimura, Baba, and Kashima: A Question-Handling Method Using RAkEL for Multi-Label Classification via Crowdsourcing. Proceedings of the Annual Conference of JSAI, Vol. JSAI2016, pp. 1-4, 2016 (in Japanese).
[4] Zhang, M.-L. and Zhou, Z.-H.: Multilabel Neural Networks with Applications to Functional Genomics and Text Categorization. IEEE Transactions on Knowledge and Data Engineering, Vol. 18, No. 10, pp. 1338-1351, 2006.