
A Quick Overview to Unlock the Potential of LLMs through Prompt Engineering

Ayana Niwa
August 07, 2024
2024/08/07 @Tokyo AI
Transcript

  1. A Quick Overview to Unlock the Potential of LLMs through

    Prompt Engineering 2024/08/07 @Tokyo AI Ayana Niwa
  2. Self-Introduction Ayana Niwa (Ph.D. in Engineering) • Researcher at Tokyo

    Institute of Technology • Research Scientist at Megagon Labs Tokyo, Recruit Co., Ltd. Research Interest: Interpretability and controllability in natural language generation (NLG) and instruction uncertainty 2 @ayaniwa1213
  3. Background Prompts are an effective communication interface between humans and

    LLMs. A prompt is an input to a generative model, used to guide its output. LLMs: ChatGPT, Gemini, Claude, ... Designing prompts is crucial to maximizing the capabilities of LLMs and achieving the desired outputs. Prompt engineering is the strategy of designing and crafting better prompts. 》Translate English to Japanese: 》sea otter => らっこ 》cheese => (Instruction / Example / User input) チーズ
  4. Purpose of This Talk There is an immense and rapidly

    growing body of knowledge on LLM prompting, making it increasingly difficult to keep up with the latest developments. The purpose of this talk is to provide a quick overview of prompting, helping the audience grasp key points when crafting prompts. If you are not very familiar with prompting… This talk will be a good start to systematically learn about prompting If you are already knowledgeable about prompting… Please use this to organize your thoughts 4
  5. Disclaimers 5 To provide as broad an overview as possible,

    detailed explanations are omitted. • For more in-depth surveys, please refer to these papers. • The Prompt Report: A Systematic Survey of Prompting Techniques (2024/07) • A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications (2024/02) • Reasoning with Language Model Prompting: A Survey (2023) • I will also introduce many studies. Please see the links to each paper for more information. The following topic is beyond the scope of this talk. 1. Soft prompt consisting of vectors 2. Tasks involving non-linguistic prompts, such as those for images or audio
  6. Three Categories of Prompting Strategies 6 Prompting strategies that are

    used relatively frequently can be categorized as follows. Incremental Generation Instruction Clarification Prompt Exploration
  7. Three Categories of Prompting Strategies 7 Prompting strategies that are

    used relatively frequently can be categorized as follows. Incremental Generation Instruction Clarification Prompt Exploration
  8. Instruction Clarification 8 It’s challenging to create prompts that LLMs

    can fully understand and execute. • Is the task definition clear enough? • Is there any ambiguity? • Can the LLM answer using its own knowledge? Importance: Transform prompts into ones that are "solvable." Instruction Clarification
  9. 9 The following strategies can be considered: • Few-shot Prompting

    • Providing concrete examples can help clarify instructions • Human-in-the-loop Prompting • Engaging in interactions with users can help clarify instructions • Additional Prompting • Providing extra context or details can help clarify the instructions Instruction Clarification Instruction Clarification
  10. Instruction Clarification Few-shot Prompting 10 Language Models are Few-Shot Learners

    Instead of rewriting the instructions, offering examples of following the instructions can help clarify them. • This can include providing both positive and negative examples
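As a concrete illustration, a few-shot prompt can be assembled mechanically from an instruction, worked examples, and the new query. This is only a minimal sketch: `build_few_shot_prompt` and its `input => output` layout are illustrative choices, not a standard API.

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: the instruction, then worked examples,
    then the query left open for the model to complete.
    `examples` is a list of (input, output) pairs; negative examples can be
    included too, by labeling them as such in the output text."""
    lines = [instruction]
    for inp, out in examples:
        lines.append(f"{inp} => {out}")
    lines.append(f"{query} =>")  # the model completes this final line
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to Japanese:",
    [("sea otter", "らっこ"), ("cheese", "チーズ")],
    "mouse",
)
print(prompt)
```

The resulting string is what gets sent to the LLM; the examples carry the task definition that the bare instruction might leave ambiguous.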
  11. Instruction Clarification Human-in-the-loop Prompting 11 Engaging in dialogue can clarify

    ambiguous instructions. Ambiguities in user preferences (e.g., output length or topic) Interactive-Chain-Prompting: Ambiguity Resolution for Crosslingual Conditional Generation with Interaction Ambiguities in language (e.g., polysemy) … AmbigNLG: Addressing Task Ambiguity in Instruction for NLG
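One minimal way to realize this interaction loop is to ask the model itself to surface a single clarifying question before answering. The sketch below assumes hypothetical `call_llm` and `ask_user` callables supplied by the application; the prompt wording is illustrative only.

```python
def clarify_then_answer(request, call_llm, ask_user):
    """Human-in-the-loop sketch: have the model raise one clarifying
    question (e.g. about output length, topic, or word sense), pose it
    to the user, and fold the user's answer back into the prompt.
    `call_llm` and `ask_user` are hypothetical application callables."""
    question = call_llm(
        f"Request: {request}\n"
        "If anything is ambiguous, ask ONE clarifying question; "
        "otherwise reply NONE."
    )
    if question.strip() != "NONE":
        # Append the clarification so the final prompt is unambiguous.
        request += f"\nClarification: {question} -> {ask_user(question)}"
    return call_llm(request)
```

In practice the loop can repeat until the model reports no remaining ambiguity.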
  12. Instruction Clarification Additional Prompting 12 PromptAgent: Strategic Planning with Language

    Models Enables Expert-level Prompt Optimization Incorporate relevant domain knowledge or other additional information into the instructions to make them more precise and easier to follow for the LLM.
  13. Challenges in Instruction Clarification Instruction-following capability is one

    of the challenges. • Even simple instructions, such as specifying keywords or sequence lengths, are often not followed correctly (left image). • When prompts are lengthy, models may struggle to utilize information from the middle sections (right image). Instruction-Following Evaluation for Large Language Models; Lost in the Middle: How Language Models Use Long Contexts
  14. Three Categories of Prompting Strategies 14 Prompting strategies that are

    used relatively frequently can be categorized as follows. Incremental Generation Prompt Exploration Instruction Clarification
  15. Incremental Generation To facilitate high-level predictions, an

    incremental generation process is effective for reaching a correct answer.
  16. Incremental Generation Strategies for incremental generation: 1. Splitting the reasoning

    process into multiple steps (Thought Generation) 2. Splitting the task into multiple subtasks (Decomposition) 3. Iterating the refinement process (Model-based Criticism) [Figure: input/output flow for each of the three strategies]
  17. Incremental Generation Thought Generation Thought generation involves various methods that

    encourage the LLM to express its thought process when solving a problem. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models 17 Automatic Chain of Thought Prompting in Large Language Models Chain-of-Thought Automatic Chain of Thought
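Both variants shown on the slide amount to prompt construction. The sketch below builds a zero-shot CoT prompt (the "Let's think step by step" trigger) and a few-shot CoT prompt whose demonstrations include reasoning chains; the function names and exact formatting are illustrative assumptions.

```python
def zero_shot_cot(question):
    """Zero-shot CoT: append a trigger phrase that elicits step-by-step
    reasoning before the final answer."""
    return f"Q: {question}\nA: Let's think step by step."

def few_shot_cot(demos, question):
    """Few-shot CoT: demonstrations carry the reasoning chain, not just the
    answer. `demos` is a list of (question, reasoning, answer) triples."""
    parts = [f"Q: {q}\nA: {chain} The answer is {a}." for q, chain, a in demos]
    parts.append(f"Q: {question}\nA:")  # left open for the model's own chain
    return "\n\n".join(parts)
```

Automatic CoT essentially automates building the `demos` list by sampling and filtering model-generated chains.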
  18. Incremental Generation Decomposition This strategy decomposes complex problems into simpler

    sub-questions. • Thought Generation naturally breaks down problems into simpler components. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models
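The least-to-most pattern can be sketched as a small control loop: decompose the question, then solve sub-questions in order, passing earlier (sub-question, answer) pairs as context to later steps. Here `decompose` and `solve` would be LLM-backed in practice; they are hypothetical callables in this sketch.

```python
def least_to_most(question, decompose, solve):
    """Least-to-most sketch: the answer to the last sub-question answers
    the original question. `solve(sub_q, context)` receives all earlier
    (sub-question, answer) pairs so later steps can build on them."""
    context = []
    for sub_q in decompose(question):
        context.append((sub_q, solve(sub_q, context)))
    return context[-1][1]
```

The key design point is that decomposition and solving are separate prompts, so each sub-problem stays simple enough for the model to handle reliably.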
  19. Graph of Thoughts: Solving Elaborate Problems with Large Language Models

    Decomposition strategies are becoming more complex, evolving into structures like trees and graphs, allowing models to build more flexible reasoning processes. Incremental Generation Decomposition • Generate multiple thought candidates • Abandon, refine, and aggregate thoughts
  20. Incremental Generation Model-based Criticism Have the LLM provide feedback on

    its output, and use that to refine the response • Several methods have been proposed, including iterative self-feedback and revision (left image) and generating related questions for verification (right image). Chain-of-Verification Reduces Hallucination in Large Language Models Self-Refine: Iterative Refinement with Self-Feedback 20
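The iterative feedback-and-revision loop (the Self-Refine pattern on the left) can be sketched as follows. `call_llm` is a hypothetical callable, the prompt wording is illustrative, and the stop condition ("OK" from the critic) is an assumption of this sketch.

```python
def self_refine(task, call_llm, max_rounds=3):
    """Model-based criticism sketch: generate a draft, ask the model to
    critique it, and revise until the critic is satisfied or a round
    budget is exhausted."""
    draft = call_llm(f"Task: {task}\nWrite an answer.")
    for _ in range(max_rounds):
        feedback = call_llm(
            f"Task: {task}\nAnswer: {draft}\n"
            "Critique this answer, or reply 'OK' if it needs no changes."
        )
        if feedback.strip() == "OK":
            break  # critic found nothing to fix
        draft = call_llm(
            f"Task: {task}\nAnswer: {draft}\nFeedback: {feedback}\n"
            "Rewrite the answer to address the feedback."
        )
    return draft
```

Verification-style variants (right image) replace the free-form critique step with generated check questions whose answers are compared against the draft.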
  21. Challenge in Incremental Generation Consistency between the input, intermediate

    steps, and output is the main problem • CoT reasoning is possible even with invalid demonstrations (see below) • The LLM might fail to identify the actual problem within the input, leading to wrong feedback and a wrongly refined output (Are Large Language Models Good Prompt Optimizers?) Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters
  22. Three Categories of Prompting Strategies 22 Prompting strategies that are

    used relatively frequently can be categorized as follows. Incremental Generation Prompt Exploration Instruction Clarification
  23. Prompt Exploration 24 The following prompting strategies can be considered:

    • Ensembling • Generate multiple outputs and aggregate them to generate the final output. • Prompt Optimizer • LLM generates multiple possible prompts, scores them, then creates variations of the best ones Prompt Exploration
  24. Prompt Exploration Ensembling Aggregate the results of multiple prompts to

    obtain a better answer. 25 Making Language Models Better Reasoners with Step-Aware Verifier There are also studies on ensembles of multiple chains-of-thought and models for a single prompt. - Multiple chains of thought: Tree of Thoughts: Deliberate Problem Solving with Large Language Models - Multiple models: Getting MoRE out of Mixture of Language Model Reasoning Experts
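One common aggregation scheme for the single-prompt, multiple-samples case is a majority vote over final answers (as in self-consistency). In this sketch, `sample_llm` is a hypothetical callable returning one stochastic `(reasoning, answer)` sample per call.

```python
from collections import Counter

def self_consistency(question, sample_llm, n=5):
    """Ensembling sketch: sample several reasoning paths with nonzero
    temperature and take a majority vote over the final answers,
    discarding the (possibly divergent) reasoning text."""
    answers = [sample_llm(question)[1] for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```

Step-aware verifiers refine this further by scoring each sampled path instead of counting every path equally.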
  25. Prompt Exploration Prompt Optimization LLM-based automatic prompt optimization, which leverages

    LLMs as prompt optimizers to obtain suitable prompts within discrete natural language spaces. 26 Large Language Models Are Human-Level Prompt Engineers
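The score-then-vary loop described above can be sketched as a simple beam search over candidate prompts. `score` (run the LLM on a dev set and measure accuracy) and `mutate` (ask an LLM to paraphrase a prompt) are hypothetical callables here; the beam and round settings are illustrative.

```python
def optimize_prompt(seed_prompts, score, mutate, rounds=3, beam=2):
    """APE-style search sketch: rank candidate prompts by a dev-set score,
    keep the best `beam`, and add one mutation of each survivor per round."""
    candidates = list(seed_prompts)
    for _ in range(rounds):
        ranked = sorted(candidates, key=score, reverse=True)[:beam]
        candidates = ranked + [mutate(p) for p in ranked]
    return max(candidates, key=score)
```

Because the prompt space is discrete, the search quality hinges entirely on how diverse and meaning-preserving the `mutate` step is.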
  26. Challenge in Prompt Exploration It is challenging to optimize

    the large and discrete prompt space • LLMs are highly sensitive to prompts: even when prompts have the same meaning, results can vary significantly depending on the wording, format, and phrasing (see below) • The optimal prompt can be model-specific and task-specific (Large Language Models as Optimizers) Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting; Evaluating the Zero-shot Robustness of Instruction- (panel labels: Sensitivity to wording and format; Sensitivity to phrasing)
  27. Takeaways Important prompting strategies: • Instruction clarification: 》Translate English to Japanese: 》This is the name

    of animals. 》sea otter => らっこ 》mouse => → clarified: 》Translate the following English animal names into Japanese: 》sea otter => らっこ 》mouse => • Incremental generation: 》Let's think step by step: The word "mouse" can be translated into (1) マウス, a computer mouse (2) ねずみ, an animal. Therefore, the answer is: ねずみ • Prompt exploration Combining these strategies effectively can unlock the potential of LLMs. Stay informed and adaptive!