Slide 86
References v
[14] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[15] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1–67, 2020.
[16] A. Rajkomar, E. Oren, K. Chen, A. Dai, N. Hajaj, P. Liu, X. Liu, M. Sun, P. Sundberg, H. Yee, K. Zhang, G. Duggan, G. Flores, M. Hardt, J. Irvine, Q. Le, K. Litsch, J. Marcus, A. Mossin, and J. Dean. Scalable and accurate deep learning for electronic health records. npj Digital Medicine, 1, 2018.