Slide 1

Slide 1 text

Prompt Engineering Karahan Yavuz KARA 220715001

Slide 2

Slide 2 text

What does Prompt Engineering mean? -A fancy way of guiding a LLM to produce the desired output.

Slide 3

Slide 3 text

Terminology, that we will talk about mostly;

Slide 4

Slide 4 text

Large Language Models(LLMs) Artificial intelligence model trained on massive amounts of text data to understand, generate, and manipulate human language. Definition GPT (Generative Pre-trained Transformer) PaLM (Pathways Language Model) Claude – Developed by Anthropic, LLaMA (Large Language Model Meta AI) Popular LLMs

Slide 5

Slide 5 text

HOW DOES LLM’S WORK

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

No content

Slide 8

Slide 8 text

GPT - Generative, Pre-trained, Transformer

Slide 9

Slide 9 text

Prompt Construction

Slide 10

Slide 10 text

No content

Slide 11

Slide 11 text

Prompt Construction Techniques 🔹"Write three different Instagram bios for a restaurant." 🔹 "Write a React code snippet where clicking a button triggers an alert." 1. Direct Instruction 🔹"I’m an entrepreneur opening a new coffee shop. My Instagram bio should reflect our love for high-quality coffee and our cozy atmosphere. Suggest three different bios." 2. Contextual Prompting 🔹 "Here are two tweet examples: 1️⃣ 'Coffee wakes us up in the morning, but the real magic happens when it’s shared with friends. ☕✨' 2️⃣ 'The best ideas are born over a cup of coffee. What will you create today? 🚀 #coffee' 🔹 Generate two more tweets in a similar style." 3. Few-shot Prompting

Slide 12

Slide 12 text

Clearly state what you want as the output, with enough detail but in a simple way. President of OpenAI Greg Brockman (2025 feb)

Slide 13

Slide 13 text

Prompt Injection (AI Manipulating)

Slide 14

Slide 14 text

Prompt Injection is a type of attack where a user manipulates a language model by injecting misleading or malicious instructions into the input prompt. This technique exploits the model's tendency to follow prompts literally, potentially overriding its original behavior or security constraints.

Slide 15

Slide 15 text

No content

Slide 16

Slide 16 text

prompts.chat - developed by github

Slide 17

Slide 17 text

No content

Slide 18

Slide 18 text

Temperature and Tokens

Slide 19

Slide 19 text

What are tokens? A token is the smallest unit of text that a language model processes. A token ID is a unique numerical identifier assigned to a specific token within a language model's vocabulary.

Slide 20

Slide 20 text

No content

Slide 21

Slide 21 text

"temperature" parameter controls the randomness and creativity of the generated text. Adjusting this parameter influences the model's selection of words, thereby affecting the diversity and predictability of its output Temperature

Slide 22

Slide 22 text

I Like The It When That Trains Frogs 0. 05 0. 3 0. 3 0. 15 0. 15 0. 05 probabilities of the next word if you are working with a low temperatured llm the results will not be much creative

Slide 23

Slide 23 text

it is basically a temperature config for users :) (with much more optimization issues)

Slide 24

Slide 24 text

No content

Slide 25

Slide 25 text

Is Prompt Engineering a real skill?

Slide 26

Slide 26 text

Yes, prompt engineering is a real skill, but its importance depends on the context in which you're using LLMs. Prompt engineering is a skill, but its importance may diminish over time as AI systems become more context-aware. However, for now, those who understand how to structure queries effectively will always have an edge in extracting the best results.

Slide 27

Slide 27 text

https://www.researchgate.net/figure/Overview-of-prompting-technique- categorization_fig2_377478767 https://platform.openai.com/tokenizer https://pathway.com/bootcamps/rag-and-llms/coursework/module- 3-prompt-engineering-and-token-limits/navigating-token-limits https://community.openai.com/

Slide 28

Slide 28 text

Thank you! Karahan Yavuz KARA 220715001