Slide 1

Generated by Stable Diffusion XL v1.0. … AI … (WBS), April 2024

Slide 2

https://speakerdeck.com/ks91/collections/prompt-engineering-2024-spring

Slide 3

( 20 )
1. …
2. …
3. Discord & …
4. …
5. …
6. … RPG
7. “September 12th”
8. …
9. …
10. … ∼
11. Linux (Windows: …) (Mac: …)
12. Open Interpreter ∼
13. …
14. AGI (Artificial General Intelligence)
7 (5/6 ) / (2 ) OK /

Slide 4

A language model assigns a probability P(w1, . . . , wm) to a sequence of m words (Wikipedia). Generating text then amounts to repeatedly predicting the next word from the words so far ← this is what generative pre-training (Generative Pre-training) trains the model to do. See Stephen Wolfram's explainer: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
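To make the chain-rule reading of P(w1, . . . , wm) concrete, here is a toy sketch (my illustration, not from the slides): a bigram model estimated from a made-up corpus, computing P(w1, . . . , wm) = P(w1) · Π P(wt | wt−1).

```python
# Toy bigram language model (made-up corpus), computing
# P(w1, ..., wm) = P(w1) * prod_t P(w_t | w_{t-1}) by the chain rule.
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_next(prev, word):
    """Conditional probability P(word | prev) from bigram counts."""
    return bigrams[(prev, word)] / unigrams[prev]

def p_sequence(words):
    p = unigrams[words[0]] / len(corpus)      # P(w1)
    for prev, word in zip(words, words[1:]):
        p *= p_next(prev, word)               # P(w_t | w_{t-1})
    return p

print(p_sequence("the cat sat".split()))      # 1/14, about 0.071
```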

Slide 5

ChatGPT and GPT: ChatGPT is OpenAI's chat service; GPT (GPT-3.5, GPT-4) is the family of language models behind it. GPT stands for Generative Pre-trained Transformer. GPT-3.5 and GPT-4 are further tuned with RLHF (Reinforcement Learning from Human Feedback).
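As a minimal sketch of talking to these models from code (not in the original slides; assumes the openai Python SDK v1+ and an OPENAI_API_KEY in the environment):

```python
# Minimal sketch: asking a GPT chat model a question via OpenAI's Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What does GPT stand for? Answer in one sentence."},
    ],
)
print(response.choices[0].message.content)
```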

Slide 6

Attention: a mechanism that lets the model decide, for each token, how much weight to give every other token when building its representation. GPT stacks Transformer blocks built around this attention mechanism (…).
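A minimal sketch of the underlying computation, scaled dot-product attention, softmax(QKᵀ/√dk)V, on toy data (my illustration, not from the slides):

```python
# Scaled dot-product attention on toy data.
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # softmax over keys
    return w @ V                                      # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 4, 8))                  # three arrays: 4 tokens, dim 8
print(attention(Q, K, V).shape)                       # (4, 8): one output per query token
```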

Slide 7

The GPT papers (…):
GPT: Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. “Improving Language Understanding by Generative Pre-Training”. Available at: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf.
GPT-2: Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. “Language Models are Unsupervised Multitask Learners”. Available at: https://paperswithcode.com/paper/language-models-are-unsupervised-multitask.
GPT-3: Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. “Language Models are Few-Shot Learners”. Available at: https://doi.org/10.48550/arXiv.2005.14165.
GPT-4: OpenAI. 2023. “GPT-4 Technical Report”. Available at: https://doi.org/10.48550/arXiv.2303.08774.

Slide 8

( = ) ( )

Slide 9

(Same GPT reference list as on Slide 7.)

Slide 10

The GPT papers: GPT-1 (2018), “Improving Language Understanding by Generative Pre-Training”. GPT-1 is first pre-trained generatively on unlabeled text (predicting the next word), then fine-tuned on labeled data for each target task.
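A sketch of the generative pre-training objective, next-token prediction (my illustration; the embedding plus linear layer below is a hypothetical stand-in for the real Transformer stack):

```python
# Next-token prediction loss: the generative pre-training objective.
import torch
import torch.nn.functional as F

vocab_size, d_model = 100, 32
emb = torch.nn.Embedding(vocab_size, d_model)    # stand-in for the model body
lm_head = torch.nn.Linear(d_model, vocab_size)   # maps states to token logits

tokens = torch.randint(0, vocab_size, (1, 16))   # (batch, sequence length)
logits = lm_head(emb(tokens))                    # (1, 16, vocab_size)

# Position t is trained to predict token t+1.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),      # predictions at positions 0..14
    tokens[:, 1:].reshape(-1),                   # targets: tokens 1..15
)
loss.backward()                                  # gradients for one pre-training step
```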

Slide 11

The GPT papers: GPT-2 (2019), “Language Models are Unsupervised Multitask Learners”. GPT-2 shows that a sufficiently large language model can perform many tasks with no task-specific training, simply by being prompted with text that implies the task (zero-shot: …).
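Zero-shot prompting in practice: the task is implied by the text alone. The “TL;DR:” summarization trick is reported in the GPT-2 paper; the article text below is made up:

```python
# Zero-shot prompting: no examples, no fine-tuning; the task is implied
# by the text pattern "article ... TL;DR: <summary>".
article = (
    "Researchers trained a very large language model on millions of web "
    "pages and found it could answer questions, translate, and summarize "
    "without any task-specific training."
)
prompt = article + "\nTL;DR:"
print(prompt)  # the model continues this with a summary
```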

Slide 12

The GPT papers: GPT-3 (2020), “Language Models are Few-Shot Learners”. GPT-3 scales the model up to 175 billion parameters. Given just a few examples inside the prompt (Few-Shot), GPT-3 performs new tasks with no parameter updates.
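A few-shot prompt puts example input/output pairs in the context before the new input; the English-to-French pairs below follow the example given in the GPT-3 paper:

```python
# Few-shot (in-context) learning: examples in the prompt, no weight updates.
examples = [
    ("sea otter", "loutre de mer"),
    ("peppermint", "menthe poivrée"),
    ("cheese", "fromage"),
]
prompt = "Translate English to French:\n"
prompt += "\n".join(f"{en} => {fr}" for en, fr in examples)
prompt += "\nplush giraffe =>"
print(prompt)  # the model is expected to continue with the French translation
```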

Slide 13

The GPT papers: GPT-4 (2023), “GPT-4 Technical Report”. GPT-4 reaches human-level performance on many professional and academic exams, e.g. passing a simulated bar exam with a score around the top 10% of test takers. The report states only that GPT-4 is a Transformer-based model; its size and architecture details are not disclosed.

Slide 14

Keywords from the titles: Generative Pre-Training, Language Models (GPT, GPT-2, GPT-3):
Improving Language Understanding (GPT: 1.17億 = 117 million parameters): pre-training (…) plus per-task fine-tuning (…)
Unsupervised Multitask Learners (GPT-2: 15億 = 1.5 billion parameters): many tasks without task-specific training
Few-Shot Learners (GPT-3: 1,750億 = 175 billion parameters): new tasks from examples in the prompt
→ the GPT series advanced largely by scaling up
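A quick calculation (using the parameter counts above) makes the scale-up at each step explicit:

```python
# Scale-up factor between successive GPT versions.
params = {"GPT": 117e6, "GPT-2": 1.5e9, "GPT-3": 175e9}
names = list(params)
for prev, cur in zip(names, names[1:]):
    print(f"{prev} -> {cur}: x{params[cur] / params[prev]:.0f}")
# GPT -> GPT-2: x13
# GPT-2 -> GPT-3: x117
```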

Slide 15

( ) ( ← ChatGPT ) ( ) GPT-4 BibTEX ACM HTML (abstract) ( ) AI

Slide 16

( ) GPT … three steps:
1. … (GPT …)
2. … (…)
3. … (…)
… 3 … (↑ 3. …)
3. … (GPT …) (↑ … 3 …)

Slide 17
