


大規模言語モデルの原理と使いこなしの原則 / Principles of Large Language Models and How to Use Them Effectively

Slides used in Session 7 of "Prompt Engineering ─ Applications of Generative AI", the Spring 2026 on-demand course at the Waseda University Graduate School of Business and Finance (WBS).


Kenji Saito

April 19, 2026


Transcript

  1. Generated by Stable Image Core × Nano Banana 2 —

     … AI … (WBS: …) … 2026 Session 7 — 2026-04 – p.1/22
  2. Course outline (… 20 …): 1 … • 2 … • 3 … (Windows: WSL …) • 4 … (macOS: Lima …) • 5 … (macOS …) • 6 … • 7 … • 8 … • 9 … RPG • 10 “September 12th” • 11 … • 12 … • 13 … ∼ • 14 AGI (Artificial General Intelligence)

     This is Session 7 (… 4/27 …) … (2 …) OK
  3. (…) ChatGPT (…) (…)
  4. As of April 2025 (… 2026 …):

     ChatGPT (… 4.5 preview) ← …
     ChatGPT – Deep Research ← (…) (… 21 …)
     OpenAI Playground (gpt-4o) ← … (… 4.1 …)
     Google … (…) ← …
     Perplexity ← …
     Grok ← …
     Claude ← …
     …
  5. ChatGPT and GPT: ChatGPT … GPT …; OpenAI's models (GPT-5.4, GPT-4o). GPT stands for Generative Pre-trained Transformer — a … built with deep learning.ᵃ … GPT (…) … GPT-3.5 … (a: …)
  6. … — (…) ↓ ELSIE PREPARE TO MEET THY GOD

     (…) Most frequent letters: e, t, h ← Wikipedia. Frequent sequences: “re” (2 chars), “e ” (2 chars), “th” … “the” … “th” as 1 unit … splits such as “th-e”, “art-ific-ial”
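The merging of frequent adjacent pairs shown above ("th" fusing into one unit, splits like "art-ific-ial") is the core move of byte-pair encoding (BPE). A minimal character-level sketch of that idea — not the actual tokenizer GPT models use:

```python
from collections import Counter

def bpe_merges(text: str, num_merges: int) -> list[str]:
    """Greedily fuse the most frequent adjacent symbol pair, BPE-style."""
    tokens = list(text)  # start from single characters
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        # rebuild the token stream with the chosen pair fused into one unit
        out, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        tokens = out
    return tokens

toks = bpe_merges("elsie prepare to meet thy god", 10)
```

Real tokenizers learn the merge table from a large corpus once and then reuse it; this sketch just re-derives merges from the single example string.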
  7. … — … candidate continuations each carry a probability: … (p = 0.xx), … (p = 0.yy), …; GPT … (…) … GPT …
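The (p = 0.xx), (p = 0.yy) picture can be made concrete: given a next-token distribution, generation picks one candidate at random in proportion to its probability, optionally rescaled by a temperature. A sketch with a made-up distribution (the tokens and probabilities are illustrative, not model output):

```python
import math
import random

def sample_next(probs: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token from a next-token distribution, rescaled by temperature."""
    # temperature < 1 sharpens the distribution, > 1 flattens it
    weights = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cum = 0.0
    for tok, w in weights.items():
        cum += w
        if r <= cum:
            return tok
    return tok  # guard against floating-point edge cases

# hypothetical distribution after some prompt
dist = {" sunny": 0.55, " cloudy": 0.30, " purple": 0.15}
choice = sample_next(dist)
```

Repeating this step — append the sampled token, recompute the distribution, sample again — is how the model produces a whole text.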
  8. attention (…) — (… (…)) GPT … / … / … / … / … / …
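A minimal sketch of the attention computation this slide names — scaled dot-product attention as in the Transformer. The embeddings are random illustrative numbers, not anything a real GPT uses:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)   # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax: rows sum to 1
    return weights @ V, weights

# 3 tokens with 4-dimensional illustrative embeddings; Q = K = V gives self-attention
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = attention(X, X, X)
```

Each row of `w` says how strongly one token weighs every other token when building its output representation; GPT stacks many such layers (with learned projections for Q, K, V).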
  9. (… → … → …) The OpenAI API (…; Responses …)

     Assistants / Chat / Completions — seen as a hierarchy of capabilities, and as a widening of what can be done.
     Completions: writes the continuation — “apply the completion capability” (headed for deprecation)
     Chat: can also hold a conversation — “apply the dialogue capability”
     Assistants: what kind of counterpart it is can be programmed in advance (headed for deprecation)
     API: Application Programming Interface (…)
  10. The GPT papers (…):

      GPT: Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. “Improving Language Understanding by Generative Pre-Training”. Available at: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf
      GPT-2: Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. “Language Models are Unsupervised Multitask Learners”. Available at: https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
      GPT-3: Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. “Language Models are Few-Shot Learners”. Available at: https://doi.org/10.48550/arXiv.2005.14165
      GPT-4: OpenAI. 2023. “GPT-4 Technical Report”. Available at: https://doi.org/10.48550/arXiv.2303.08774
  11. (Same GPT paper list as item 10)
  12. Generative Pre-Training, Language Models (GPT, GPT-2, GPT-3) — what the titles say: … “Improving Language Understanding” (GPT: 117M parameters) … (…) (…) “Unsupervised Multitask Learners” (GPT-2: 1.5B parameters) … “Few-Shot Learners” (GPT-3: 175B parameters) → GPT …
  13. (…) (…) (… ← ChatGPT …) (…)

      … BibTeX (…) · HTML (abstract) · GAMER PAT (…) ← (… AI …)
  14. (…) GPT … three …: 1. (GPT …) 2. (…) 3. (…)

      … 3 … (↑ … 3. …) 3. (GPT …) (↑ … 3 …)
  15. (…) Pricing: OpenAI (provider: openai_responses), USD per 100万 (= 1M) tokens:

      model           input   output
      gpt-5.4-pro     30.00   180.00
      gpt-5.4          2.50    15.00
      gpt-5.2          1.75    14.00
      gpt-5.4-mini     0.75     4.50
      gpt-5.4-nano     0.20     1.25
      gpt-5-nano       0.05     0.40
      …
      … (…) (…) (attention …) (…)
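Reading the table as USD per million tokens — the per-million unit is an assumption about the truncated header — a small helper makes the gap between tiers concrete; the prices themselves are copied from the slide:

```python
# (input, output) prices in USD per 1M tokens, as listed on the slide
PRICES = {
    "gpt-5.4-pro":  (30.0, 180.0),
    "gpt-5.4":      (2.5, 15.0),
    "gpt-5.2":      (1.75, 14.0),
    "gpt-5.4-mini": (0.75, 4.5),
    "gpt-5.4-nano": (0.2, 1.25),
    "gpt-5-nano":   (0.05, 0.4),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one call, assuming the table is USD per 1M tokens."""
    p_in, p_out = PRICES[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token reply, costed on each tier
per_tier = {m: cost_usd(m, 2000, 500) for m in PRICES}
```

Under this reading, the same call spans roughly three orders of magnitude in cost between the pro and nano tiers, which is what makes the model-selection guidance on the next slide matter.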
  16. OpenAI … Pro …, (…) …, …, (…); mini …, (…); nano: FAQ …, Discord bot … (…) (cf. …); nano …