Slide 17
References
1) Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.
2) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9.
3) Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
4) Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., ... & Lowe, R. (2022). Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155.
5) Weizenbaum, J. (1966). ELIZA–a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45.
6) Nye, M., Tessler, M., Tenenbaum, J., & Lake, B. M. (2021). Improving coherence and consistency in neural sequence models with dual-system, neuro-symbolic reasoning. Advances in Neural Information Processing Systems, 34, 25192-25204.
7) Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25-42.
8) Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
9) Wu, S., Irsoy, O., Lu, S., Dabravolski, V., Dredze, M., Gehrmann, S., ... & Mann, G. (2023). BloombergGPT: A large language model for finance. arXiv preprint arXiv:2303.17564.
10) Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E., Le, Q., & Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903.
11) Shen, Y., Song, K., Tan, X., Li, D., Lu, W., & Zhuang, Y. (2023). HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. arXiv preprint arXiv:2303.17580.
12) Qin, Y., Liang, S., Ye, Y., Zhu, K., Yan, L., Lu, Y., ... & Sun, M. (2023). ToolLLM: Facilitating large language models to master 16000+ real-world APIs. arXiv preprint arXiv:2307.16789.
13) Liu, B., Jiang, Y., Zhang, X., Liu, Q., Zhang, S., Biswas, J., & Stone, P. (2023). LLM+P: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477.