Slide 85
18. Hyung Won Chung et al.: Scaling Instruction-Finetuned Language Models. CoRR abs/2210.11416 (2022)
19. Srinivasan Iyer et al.: OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization. CoRR abs/2212.12017 (2022)
20. Long Ouyang et al.: Training language models to follow instructions with human feedback. CoRR abs/2203.02155 (2022)
21. Amelia Glaese et al.: Improving alignment of dialogue agents via targeted human judgements. CoRR abs/2209.14375 (2022)
22. Holly Else: Abstracts written by ChatGPT fool scientists. Nature 613, 423 (2023)
23. Qihuang Zhong et al.: Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT. CoRR abs/2302.10198 (2023)
24. Yejin Bang et al.: A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. CoRR abs/2302.04023 (2023)
25. Chengwei Qin et al.: Is ChatGPT a General-Purpose Natural Language Processing Task Solver? CoRR abs/2302.06476 (2023)
26. Terry Yue Zhuo et al.: Exploring AI Ethics of ChatGPT: A Diagnostic Analysis. CoRR abs/2301.12867 (2023)
27. Tom Kocmi and Christian Federmann: Large Language Models Are State-of-the-Art Evaluators of Translation Quality. CoRR abs/2302.14520 (2023)
28. Biyang Guo et al.: How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection. CoRR abs/2301.07597 (2023)
29. William Fedus et al.: Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. JMLR 23: 1-39 (2022)
30. Deepak Narayanan et al.: Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM. SC 2021
32. Timo Schick et al.: Toolformer: Language Models Can Teach Themselves to Use Tools. CoRR abs/2302.04761 (2023)
33. Hugo Touvron et al.: LLaMA: Open and Efficient Foundation Language Models. CoRR abs/2302.13971 (2023)
References