Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Liu, Pengfei, et al. “Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing.” ACM Computing Surveys 55.9 (2023): 1-35. https://dl.acm.org/doi/10.1145/3560815
2. Search for the answer ẑ
◼ Use the pre-trained LM P(·; θ) to search over the set of answer candidates z
➢ Answer set: Z = {excellent, good, OK, bad, horrible}
➢ Each candidate z is scored by the LM probability of the prompt filled with z; the best-scoring candidate is selected
3. Map the highest-scoring answer ẑ to the highest-scoring output ŷ
Ex. Input x: "I love this movie." → filled template: "I love this movie. Overall, it was a [Z] movie." → best answer ẑ = good → "Overall, it was a good movie." → output y
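The sketch below illustrates this three-step pipeline with a masked LM. It is a minimal example, assuming Hugging Face transformers and bert-base-uncased; the answer set, the answer-to-label mapping, and the assumption that each candidate is a single vocabulary token are illustrative choices, not taken from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

x = "I love this movie."
template = "{x} Overall, it was a {z} movie."          # Step 1: prompt addition
Z = ["excellent", "good", "ok", "bad", "horrible"]     # answer set (lower-cased for the uncased vocab)
answer_to_label = {"excellent": "positive", "good": "positive", "ok": "neutral",
                   "bad": "negative", "horrible": "negative"}  # Step 3 mapping (assumed, not from the paper)

# Fill the template with the input x and a [MASK] in the [Z] slot.
prompt = template.format(x=x, z=tokenizer.mask_token)
inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]       # LM scores at the [Z] slot
probs = logits.softmax(dim=-1)

# Step 2: answer search — score each candidate z by the LM probability of filling [Z] with it.
# Assumes every candidate is a single token in the model's vocabulary.
scores = {z: probs[tokenizer.convert_tokens_to_ids(z)].item() for z in Z}
z_hat = max(scores, key=scores.get)

# Step 3: map the best answer ẑ to the output ŷ.
print(z_hat, answer_to_label[z_hat])
```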
Template search: two shapes of prompts
◼ Cloze prompts: fill in a blank within the string
➢ Tasks solved with a masked LM
➢ Ex. I love this movie, it is a [Z] movie.
◼ Prefix prompts: continue the string from a prefix
➢ Generation tasks; tasks solved with an autoregressive LM (see the sketch below)
➢ Ex. I love this movie. What's the sentiment of the review? [Z]
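As a counterpart to the cloze example above, the following minimal sketch answers a prefix prompt with an autoregressive LM, assuming the Hugging Face text-generation pipeline and GPT-2 (the model choice is illustrative, not prescribed by the survey).

```python
from transformers import pipeline

# Autoregressive LM for prefix prompts: the continuation plays the role of the [Z] slot.
generator = pipeline("text-generation", model="gpt2")
prefix = "I love this movie. What's the sentiment of the review?"
out = generator(prefix, max_new_tokens=5, do_sample=False)
print(out[0]["generated_text"])
```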
◼ Sample Ordering
➢ Performance changes depending on the order of the added (answered) prompts
➢ Performance can be improved by searching for the best ordering of the training samples (see the sketch after this list)
➢ Ex. China's capital is [Z]. → augment → Great Britain's capital is London. Japan's capital is Tokyo. China's capital is [Z].
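A minimal sketch of this idea: build the augmented prompt by prepending answered prompts (demonstrations) to the query, enumerate their orderings, and keep the ordering that scores best. The `score` function here is a stand-in; in practice one would score each ordering with dev-set accuracy or LM probability, which is not shown.

```python
from itertools import permutations

demos = ["Great Britain's capital is London.", "Japan's capital is Tokyo."]
query = "China's capital is [Z]."

def build_prompt(ordered_demos, query):
    # Prepend the answered prompts (demonstrations) to the query prompt.
    return " ".join(list(ordered_demos) + [query])

def score(prompt):
    # Placeholder scoring function (assumption): replace with dev-set
    # accuracy or LM probability of the correct answer.
    return len(prompt)

# Search over orderings of the training samples and keep the best one.
best_order = max(permutations(demos), key=lambda d: score(build_prompt(d, query)))
print(build_prompt(best_order, query))
```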