Slide 1

Slide 1 text

It's hard to give clear instructions

Slide 2

Slide 2 text

SMS Goeben August 1914, the Mediterranean

Slide 3

Slide 3 text

TL;DR: Wait until war is declared, and destroy the Goeben. (Winston Churchill, head of the Admiralty, July 31, 1914)

Slide 4

Slide 4 text

SMS Goeben

Slide 5

Slide 5 text

SMS Goeben

Slide 6

Slide 6 text

TL;DR: Wait until war is declared, and destroy the Goeben. "… do not at this stage be brought to action against superior forces." (Winston Churchill, head of the Admiralty, July 31, 1914)

Slide 7

Slide 7 text

SMS Goeben

Slide 8

Slide 8 text

SMS Goeben

Slide 9

Slide 9 text

For Churchill: superior forces = Austrian battleships

Slide 10

Slide 10 text

I am sure that your unclear instructions never caused a major disruption of someone's fleet*, but you definitely have many stories (* the Goeben did that to the Russian fleet in the Black Sea during WW1)

Slide 11

Slide 11 text

Let's talk about giving clear instructions!

Slide 12

Slide 12 text

Prompt Engineering for developers, and other people who love talking to computers

Slide 13

Slide 13 text

Human communication is complicated

Slide 14

Slide 14 text

Luckily, it's a bit clearer how Large Language Models (LLMs) work

Slide 15

Slide 15 text

No content

Slide 16

Slide 16 text

No content

Slide 17

Slide 17 text

No content

Slide 18

Slide 18 text

No content

Slide 19

Slide 19 text

No content

Slide 20

Slide 20 text

No content

Slide 21

Slide 21 text

No content

Slide 22

Slide 22 text

No content

Slide 23

Slide 23 text

Source: https://platform.openai.com/tokenizer

Slide 24

Slide 24 text

Prompts: - Prompts are instructions - You tell an LLM what you want, and it tries to reply based on its training and your instructions - Clearer instructions = better reply - An LLM always answers, but not always based on truth

Slide 25

Slide 25 text

Slobodan Stojanovic cofounder/CTO @ Vacation Tracker AWS Serverless Hero @slobodan_

Slide 26

Slide 26 text

Prompts are just instructions

Slide 27

Slide 27 text

So, how do LLMs work?

Slide 28

Slide 28 text

You give your instructions

Slide 29

Slide 29 text

No content

Slide 30

Slide 30 text

You get some unexpected wisdom or hallucination

Slide 31

Slide 31 text

No content

Slide 32

Slide 32 text

Tokens and vectors

Slide 33

Slide 33 text

@slobodan_ Source: https://cthiriet.com/blog/infinite-memory-llm

Slide 34

Slide 34 text

Anatomy of a prompt

Slide 35

Slide 35 text

The prompt is a set of textual instructions that fits within the LLM's context window and other limitations

Slide 36

Slide 36 text

"Who is faster: Godzi!a or T-Rex?" A valid prompt

Slide 37

Slide 37 text

"Write a 500-word article about the bad influence of Amazon's RTO policy on Lambda cold starts" Also valid prompt

Slide 38

Slide 38 text

No content

Slide 39

Slide 39 text

No content

Slide 40

Slide 40 text

But LLMs are products that, like most other products, evolve with user needs and requests

Slide 41

Slide 41 text

System prompts

Slide 42

Slide 42 text

No content

Slide 43

Slide 43 text

Some parts of your instructions might be more important than others, or you might want to make them repeatable

Slide 44

Slide 44 text

Usage via the API gives you additional superpowers

Slide 45

Slide 45 text

Repeatability

Slide 46

Slide 46 text

(simulated) conversations

Slide 47

Slide 47 text

more control
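To make the API angle concrete, here is a minimal sketch of a call with a system prompt, assuming the Anthropic Messages API via the official TypeScript SDK (the OpenAI Chat Completions API follows the same system/user/assistant pattern); the model name and prompts are illustrative only:

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Reads ANTHROPIC_API_KEY from the environment.
const client = new Anthropic();

// The system prompt carries the repeatable part of the instructions;
// only the user message changes between calls, which is what gives you
// repeatability and more control.
const reply = await client.messages.create({
  model: "claude-3-5-sonnet-latest", // illustrative model name
  max_tokens: 1024,
  system: "You are a support assistant for a vacation tracking app. Be concise.",
  messages: [{ role: "user", content: "How do I request a day off?" }],
});

console.log(reply.content); // an array of content blocks, usually a single text block
```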

Slide 48

Slide 48 text

No content

Slide 49

Slide 49 text

Successful prompts require writing

Slide 50

Slide 50 text

I know you hate writing

Slide 51

Slide 51 text

But you wrote thousands of lines of JavaScript or Python last month

Slide 52

Slide 52 text

That's also just a bunch of text and instructions

Slide 53

Slide 53 text

7 Habits of highly effective prompters

Slide 54

Slide 54 text

Five simple tricks for better prompting results

Slide 55

Slide 55 text

1. Hint the beginning of the answer

Slide 56

Slide 56 text

We often need a response from an LLM to follow the specific structure we defined

Slide 57

Slide 57 text

System: // Your system prompt
User: // Some long instructions. But always reply with valid JSON and nothing else!
Assistant: Here's your JSON: ```json { "some": "JSON",

Slide 58

Slide 58 text

I SAID JSON ONLY!!! Works, sometimes

Slide 59

Slide 59 text

But there's something else you can do!

Slide 60

Slide 60 text

Write the beginning of the reply in the API request!

Slide 61

Slide 61 text

System: // Your system prompt
User: // Your instructions. Answer with valid JSON and nothing else.
Assistant: { "

Slide 62

Slide 62 text

System: // Your system prompt
User: // Your instructions. Answer with valid JSON and nothing else.
Assistant: { "

Slide 63

Slide 63 text

System: // Your system prompt
User: // Your instructions. Answer with valid JSON and nothing else.
Assistant: { "some": "valid", "JSON": true }
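As a rough sketch of that prefill trick, assuming the Anthropic Messages API (where a trailing assistant message is treated as the beginning of the reply the model has to continue); the prompts and model name are illustrative:

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

const reply = await client.messages.create({
  model: "claude-3-5-sonnet-latest",
  max_tokens: 1024,
  system: "You turn product reviews into structured data.",
  messages: [
    {
      role: "user",
      content: 'Classify this review: "Super helpful, worth it". Answer with valid JSON and nothing else.',
    },
    // The prefill: the model continues from here, so it cannot open with
    // "Here's your JSON:" or any other chatter.
    { role: "assistant", content: '{ "' },
  ],
});

// Stitch the prefill and the completion back together before parsing.
const first = reply.content[0];
const parsed = first.type === "text" ? JSON.parse('{ "' + first.text) : null;
```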

Slide 64

Slide 64 text

2. Give examples

Slide 65

Slide 65 text

We can use the same idea for one more trick

Slide 66

Slide 66 text

ChatGPT and Claude often don't do the task perfectly immediately. It takes some back and forth before you get the desired outcome.

Slide 67

Slide 67 text

But we often try to do just one direct command via the API

Slide 68

Slide 68 text

There's a fancy name for a single direct command: Zero-Shot Prompting

Slide 69

Slide 69 text

Because there's another popular fancy name: Few-Shot Prompting

Slide 70

Slide 70 text

"Great product, 10/10": {"label": "positive"} "Didn't work very well": {"label": "negative"} "Super helpful, worth it":

Slide 71

Slide 71 text

"Great product, 10/10": {"label": "positive"} "Didn't work very well": {"label": "negative"} "Super helpful, worth it": {"label": "positive"}

Slide 72

Slide 72 text

Like a mini "in-prompt" training

Slide 73

Slide 73 text

But you can also use this differently (and for more complicated use cases)

Slide 74

Slide 74 text

System: // Your system prompt
User: "Great product, 10/10"
Assistant: {"label": "positive"}
User: "Didn't work very well"
Assistant: {"label": "negative"}
User: "Super helpful, worth it"

Slide 75

Slide 75 text

System: // Your system prompt
User: "Great product, 10/10"
Assistant: {"label": "positive"}
User: "Didn't work very well"
Assistant: {"label": "negative"}
User: "Super helpful, worth it"
Assistant: {"label": "POSITIVE"}
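A minimal sketch of that simulated conversation, assuming the Anthropic Messages API (the example reviews come straight from the slides; the model name is illustrative):

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// A simulated conversation: the earlier user/assistant turns were never
// really exchanged; we wrote them ourselves as examples, and the model
// simply continues the pattern for the last user message.
const reply = await client.messages.create({
  model: "claude-3-5-sonnet-latest",
  max_tokens: 100,
  system: "Classify the sentiment of product reviews. Reply with JSON only.",
  messages: [
    { role: "user", content: '"Great product, 10/10"' },
    { role: "assistant", content: '{"label": "positive"}' },
    { role: "user", content: '"Didn\'t work very well"' },
    { role: "assistant", content: '{"label": "negative"}' },
    { role: "user", content: '"Super helpful, worth it"' },
  ],
});
```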

Slide 76

Slide 76 text

No content

Slide 77

Slide 77 text

3. Think step by step

Slide 78

Slide 78 text

The next trick would work well with humans, too

Slide 79

Slide 79 text

I want you to create this unreasonable feature! And I want it now!

Slide 80

Slide 80 text

Let's think step by step. How would that feature help our users?

Slide 81

Slide 81 text

But in reality, it would probably just give you a new LinkedIn employment status

Slide 82

Slide 82 text

#opentowork

Slide 83

Slide 83 text

Luckily, LLMs have no emotions

Slide 84

Slide 84 text

Adding "Let's think step by step" or a similar phrase uses more tokens for the response, but it often gives you better results

Slide 85

Slide 85 text

OpenAI o1-preview: "We've developed a new series of AI models designed to spend more time thinking before they respond."

Slide 86

Slide 86 text

OpenAI o1-preview: "They can reason through complex tasks and solve harder problems than previous models in science, coding, and math."

Slide 87

Slide 87 text

4. Use tools

Slide 88

Slide 88 text

LLMs suck at math and some other tasks

Slide 89

Slide 89 text

The tricks above might improve this a bit, but often not enough

Slide 90

Slide 90 text

But all major LLMs support tools

Slide 91

Slide 91 text

So, instead of this:

Slide 92

Slide 92 text

System: // Your system prompt
User: "Do some complex calculations"
Assistant: Brand new hallucination

Slide 93

Slide 93 text

You can do this:

Slide 94

Slide 94 text

System: // Your system prompt
User: Do some complex calculations and give me an analysis.
Assistant: I need an answer from the calc tool; here are the arguments to pass to it

Slide 95

Slide 95 text

System: // Your system prompt
User: Do some complex calculations and give me an analysis.
Assistant: I need an answer from the calc tool; here are the arguments to pass to it
User: // Calls the JS function and returns the result
Assistant: Here's the analysis
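A minimal sketch of that exchange using the Anthropic Messages API tool support; the calc tool, its schema, and the prompts are hypothetical, and a real implementation should use a proper math parser instead of Function():

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// The "tool" is just a plain JavaScript function.
function calc(expression: string): number {
  // Demo only: never evaluate untrusted input like this in production.
  return Function(`"use strict"; return (${expression});`)() as number;
}

const tools: Anthropic.Tool[] = [
  {
    name: "calc",
    description: "Evaluates a single arithmetic expression and returns the number",
    input_schema: {
      type: "object",
      properties: { expression: { type: "string" } },
      required: ["expression"],
    },
  },
];

const question: Anthropic.MessageParam = {
  role: "user",
  content: "Do some complex calculations and give me an analysis.",
};

// 1. The model decides it needs the tool and asks us to run it.
const first = await client.messages.create({
  model: "claude-3-5-sonnet-latest",
  max_tokens: 1024,
  tools,
  messages: [question],
});

const toolUse = first.content.find((block) => block.type === "tool_use");

if (toolUse && toolUse.type === "tool_use") {
  // 2. We run the function ourselves and send the result back.
  const result = calc((toolUse.input as { expression: string }).expression);

  const second = await client.messages.create({
    model: "claude-3-5-sonnet-latest",
    max_tokens: 1024,
    tools,
    messages: [
      question,
      { role: "assistant", content: first.content },
      {
        role: "user",
        content: [{ type: "tool_result", tool_use_id: toolUse.id, content: String(result) }],
      },
    ],
  });

  console.log(second.content); // "Here's the analysis"
}
```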

Slide 96

Slide 96 text

Tools allow an LLM to ask you for some help

Slide 97

Slide 97 text

Tools power AI Agents!

Slide 98

Slide 98 text

But tools are just simple functions! (JavaScript, Python, PHP, or whatever you want)

Slide 99

Slide 99 text

5. Ask an LLM to improve the prompt

Slide 100

Slide 100 text

Can an LLM write a prompt?

Slide 101

Slide 101 text

A prompt is just a textual command. An LLM can review and improve it!

Slide 102

Slide 102 text

Guess what? It works even better if you provide clear instructions!

Slide 103

Slide 103 text

Commander's Intent

Slide 104

Slide 104 text

Commander's Intent: - Purpose: Why personnel must complete the assignment. - Task: What the objective or goal entails. - End state: How the result should look.

Slide 105

Slide 105 text

You can try something like this: Here's my prompt: ${INITIAL_PROMPT} Rewrite it to follow the Commander's Intent statement. Ask for missing details.
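As a sketch, assuming the same Anthropic Messages API and a made-up initial prompt:

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// Hypothetical prompt you want the LLM to improve.
const initialPrompt = "Write release notes for the new export feature.";

const reply = await client.messages.create({
  model: "claude-3-5-sonnet-latest",
  max_tokens: 1024,
  messages: [
    {
      role: "user",
      content: `Here's my prompt: ${initialPrompt}
Rewrite it to follow the Commander's Intent statement (purpose, task, end state). Ask for missing details.`,
    },
  ],
});
```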

Slide 106

Slide 106 text

"Considerations in Communicating intent" Sources of Power: How People Make Decisions Book by Gary Klein

Slide 107

Slide 107 text

"There are seven types of information that a person could present to help the people receiving the request to understand what to do" Sources of Power Book by Gary Klein

Slide 108

Slide 108 text

1. The purpose of the task (the higher-level goals) 2. The objective of the task (an image of the desired outcome) 3. The sequence of steps in the plan 4. The rationale for the plan 5. The key decisions that may have to be made 6. Antigoals 7. Constraints and other considerations

Slide 109

Slide 109 text

"A! seven types of information are not always necessary." Sources of Power Book by Gary Klein

Slide 110

Slide 110 text

@slobodan_ Summary

Slide 111

Slide 111 text

@slobodan_ • Hint the beginning of an answer, the LLM will follow • Give examples (few-shot prompting) • Ask an LLM to think step by step • Help an LLM with tools (tools are just functions) • Ask an LLM to help you improve the prompt

Slide 112

Slide 112 text

@slobodan_

Slide 113

Slide 113 text

@slobodan_