Slide 1

Slide 1 text

Summaraizer
Lessons learned along the way

Slide 2

Slide 2 text

StefMa.guru
Stefan May
Android Developer since 2014
Principal Android Developer @ioki since 2020
github.com/@StefMa
StefMa.medium.com
x.com/StefMa91

Slide 3

Slide 3 text

StefMa.guru
Stefan May
Android Developer since 2014
Principal Android Developer @ioki since 2020
github.com/@StefMa
StefMa.medium.com
x.com/StefMa91
KI = künstliche Intelligenz = artificial intelligence

Slide 4

Slide 4 text

No content

Slide 5

Slide 5 text

No content

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

No content

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

No content

Slide 10

Slide 10 text

“AI, can you help me?” ✨✨✨ ✨✨✨

Slide 11

Slide 11 text

Summaraizer

Slide 12

Slide 12 text

Summaraizer

Slide 13

Slide 13 text

Summaraizer

Slide 14

Slide 14 text

Summaraizer
👉 https://github.com/ioki-mobility/summaraizer
Go, CLI and Module
Supports multiple sources (GitHub, Reddit, GitLab, more to come)
Supports multiple providers (Ollama, OpenAI, Mistral, more to come)
👉 https://github.com/ioki-mobility/summaraizer-action
JavaScript
Supports multiple providers

Slide 15

Slide 15 text

Summaraizer
Lessons learned along the way

Slide 16

Slide 16 text

Tokens

Slide 17

Slide 17 text

1 token ~= 4 characters (English alphabet)
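The rule of thumb above can be sketched in a few lines; real tokenizers (BPE and friends) differ per model, so this is only a heuristic, not any model's actual tokenizer:

```python
# Rough token estimate using the "1 token ~= 4 characters" rule of thumb.
# Real tokenizers split text very differently; this is only a heuristic.
def estimate_tokens(text: str) -> float:
    return len(text) / 4

print(estimate_tokens("Why is the sky blue?"))  # 20 chars -> 5.0
```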

Slide 18

Slide 18 text

Tokens

Model    | Token (context) window
GPT-4o   | 128,000
Llama 3  | 8,000
Claude 3 | 200,000
Gemini   | 1,000,000 ("soon" 2,000,000)

Slide 19

Slide 19 text

Tokens

Slide 20

Slide 20 text

Token window is a limitation for your input (prompt) and output

Slide 21

Slide 21 text

Tokens

Example:
Token window: 5 tokens (5 * 4 ~= 20 chars)
Input: Why is the sky blue? (20 chars, "5 tokens")
Output: (0 chars left, "0 tokens")

Slide 22

Slide 22 text

Tokens

Example:
Token window: 5 tokens (5 * 4 ~= 20 chars)
Input: The sky is (10 chars, "2.5 tokens")
Output: blue! (5 chars, "1.25 tokens")

Slide 23

Slide 23 text

No content

Slide 24

Slide 24 text

Tokens

Stuffing: Just put all the data in (and hope for the best)

Slide 25

Slide 25 text

Tokens

Stuffing: Just put all the data in (and hope for the best)
MapReduce: Summarize chunks of the data and put all the summaries into a final prompt

Slide 26

Slide 26 text

Tokens

Slide 27

Slide 27 text

Tokens “Summary 1” “Summary 2”

Slide 28

Slide 28 text

Please summarize this: Summary 1, Summary 2, …, Summary N
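The MapReduce flow above can be sketched as follows; `summarize` is a hypothetical stand-in for a call to an AI provider (Ollama, OpenAI, Mistral, ...), and the chunking is character-based for illustration only:

```python
def summarize(text: str) -> str:
    # Hypothetical stand-in for an AI provider call.
    return f"Summary of {len(text)} chars"

def map_reduce_summarize(comments, chunk_size=20):
    # Map: group comments into chunks that fit the token window,
    # then summarize each chunk on its own.
    chunks, current = [], ""
    for c in comments:
        if current and len(current) + len(c) > chunk_size:
            chunks.append(current)
            current = ""
        current += c + " "
    if current:
        chunks.append(current)
    summaries = [summarize(chunk) for chunk in chunks]
    # Reduce: one final prompt over all partial summaries.
    return summarize("Please summarize this: " + ", ".join(summaries))
```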

Slide 29

Slide 29 text

Tokens

Stuffing: Just put all the data in (and hope for the best)
MapReduce: Summarize chunks of the data and put all the summaries into a final prompt
Refine: Summarize a chunk of data, then feed that summary plus the next chunk into the prompt, until the data ends

Slide 30

Slide 30 text

Tokens “Summary 1”

Slide 31

Slide 31 text

Tokens “Summary 1” “Summary 2”

Slide 32

Slide 32 text

Tokens

Stuffing: Just put all the data in (and hope for the best)
MapReduce: Summarize chunks of the data and put all the summaries into a final prompt
Refine: Summarize a chunk of data, then feed that summary plus the next chunk into the prompt, until the data ends
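The Refine strategy can be sketched the same way; again `summarize` is only a hypothetical stand-in for a provider call, and the running summary is threaded through each prompt:

```python
def summarize(text: str) -> str:
    # Hypothetical stand-in for an AI provider call.
    return f"Summary({len(text)})"

def refine_summarize(chunks):
    # Refine: feed the running summary plus the next chunk of data
    # into each prompt until the data ends.
    summary = ""
    for chunk in chunks:
        summary = summarize(summary + " " + chunk)
    return summary
```

Compared to MapReduce this is sequential (each step depends on the previous summary), so it cannot be parallelized, but each prompt stays small.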

Slide 33

Slide 33 text

Streaming

Slide 34

Slide 34 text

Streaming

Slide 35

Slide 35 text

Streaming

Slide 36

Slide 36 text

Current models are just next-word prediction machines

Slide 37

Slide 37 text

Current models are just next-token prediction machines

Slide 38

Slide 38 text

Streaming Example: Input: The sky is

Slide 39

Slide 39 text

Streaming Example: Input: The sky is Tokenizer

Slide 40

Slide 40 text

Streaming Example: Input: The sky is Tokenizer The sky is

Slide 41

Slide 41 text

Streaming Example: Input: The sky is Tokenizer Neural Network

Slide 42

Slide 42 text

Streaming

Example:
Input: The sky is
Tokenizer → Neural Network

(next) Token | Probability
blue         | 0.9
nice         | 0.4
dog          | 0.1

Slide 43

Slide 43 text

Streaming

Example:
Input: The sky is
Tokenizer → Neural Network

(next) Token | Probability
blue         | 0.9
nice         | 0.4
dog          | 0.1

Greedy decoding

Slide 44

Slide 44 text

Streaming Example: Input: The sky is blue Tokenizer

Slide 45

Slide 45 text

Streaming Example: Input: The sky is blue Tokenizer Neural Network

Slide 46

Slide 46 text

Streaming

Example:
Input: The sky is blue
Tokenizer → Neural Network

(next) Token | Probability
because      | 0.7
AI           | 0.1
frankfurt    | 0.2

Slide 47

Slide 47 text

Streaming

Example:
Input: The sky is blue
Tokenizer → Neural Network

(next) Token | Probability
because      | 0.7
AI           | 0.1
frankfurt    | 0.2

Slide 48

Slide 48 text

Streaming

Example:
Input: The sky is blue because…
Tokenizer → Neural Network

Slide 49

Slide 49 text

Streaming

Example:
Input: The sky is blue because…
Tokenizer → Neural Network → End of sequence (token)
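The whole loop behind the slides above (tokenize, predict the next token, append it, repeat until the end-of-sequence token) can be sketched with a toy lookup table in place of the neural network; both the table and the word-level "tokenizer" are illustrative simplifications:

```python
EOS = "<eos>"  # end-of-sequence token

def predict_next(tokens):
    # Toy stand-in for the neural network: a fixed lookup keyed on the
    # last token, mimicking the slide's example. A real model returns a
    # probability over the whole vocabulary instead.
    table = {"is": "blue", "blue": "because", "because": EOS}
    return table.get(tokens[-1], EOS)

def generate(prompt: str) -> str:
    tokens = prompt.split()          # "tokenizer" (words as toy tokens)
    while True:
        nxt = predict_next(tokens)   # greedy next-token prediction
        if nxt == EOS:               # stop at the end-of-sequence token
            break
        tokens.append(nxt)           # each appended token can be streamed
    return " ".join(tokens)

print(generate("The sky is"))  # The sky is blue because
```

Streaming falls out of this loop naturally: since the model produces one token per iteration, the client can show each token as soon as it is appended instead of waiting for the end-of-sequence token.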

Slide 50

Slide 50 text

“Then how can it answer questions?”

Slide 51

Slide 51 text

“Then how can it answer questions?” Why is the sky blue?

Slide 52

Slide 52 text

Prompting

Slide 53

Slide 53 text

Prompting How to separate comments?

Slide 54

Slide 54 text

Prompting How to separate comments? Good old

Slide 55

Slide 55 text

Prompting

How to separate comments? Good old…

Solution: Separate comments using enclosing tags
Example:
Why is the sky blue?
I actually don’t know. Maybe ask @john
The sky is blue because…
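A minimal sketch of building such a prompt; the `<comment>` tag name is an illustrative choice, not something the talk prescribes:

```python
# Wrap each comment in enclosing tags so the model can tell where one
# comment ends and the next begins, even when comments contain questions
# or quote each other.
def build_prompt(comments):
    wrapped = "\n".join(f"<comment>{c}</comment>" for c in comments)
    return f"Please summarize the following comments:\n{wrapped}"

print(build_prompt(["Why is the sky blue?", "The sky is blue because…"]))
```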

Slide 56

Slide 56 text

Prompting

Slide 57

Slide 57 text

Prompting

Slide 58

Slide 58 text

Prompting mistral:7b

Slide 59

Slide 59 text

Prompting llama3:latest

Slide 60

Slide 60 text

Prompting gemma:2b-instruct

Slide 61

Slide 61 text

Model variants

Slide 62

Slide 62 text

Model variants llama3:latest mistral:7b gemma:2b-instruct

Slide 63

Slide 63 text

Model variants llama3:latest mistral:7b gemini:pro gemma:2b-instruct gemini:flash

Slide 64

Slide 64 text

Model variants llama3:latest mistral:7b gemini:pro gemma:2b-instruct gemini:flash llama3:70b llama3:8b gemma:2b gemma:7b gemma:text mistral:instruct

Slide 65

Slide 65 text

Model variants llama3:latest mistral:7b gemini:pro gemma:2b-instruct gemini:flash llama3:70b llama3:8b gemma:2b gemma:7b mistral:instruct codellama:[7b|13b|34b|70b] codegemma:[2b|instruct|code] gemma:text

Slide 66

Slide 66 text

Model variants llama3:latest mistral:7b gemini:pro gemma:2b-instruct gemini:flash llama3:70b llama3:8b gemma:2b gemma:7b mistral:instruct codellama:[7b|13b|34b|70b] codegemma:[2b|instruct|code] gemma:text

Slide 67

Slide 67 text

Model variants [model]:[x]b [model]:[text|instruct|...]

Slide 68

Slide 68 text

Model variants

[model]:[x]b
b stands for billion (parameters)

[model]:[text|instruct|...]
variants differ in training (data) and/or fine-tuning
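The naming convention above can be made concrete with a small parser; this follows the Ollama-style `[model]:[variant]` tags from the slides and is only an illustration, not an official API:

```python
# Split an Ollama-style model tag into name and variant, and extract the
# parameter count when the variant follows the [x]b convention.
def parse_model_tag(tag: str):
    name, _, variant = tag.partition(":")
    size = variant[:-1] if variant.endswith("b") and variant[:-1].isdigit() else None
    return {"model": name, "variant": variant or "latest", "billions": size}

print(parse_model_tag("llama3:70b"))
# {'model': 'llama3', 'variant': '70b', 'billions': '70'}
```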

Slide 69

Slide 69 text

Model variants

[model]:[x]b

More parameters:
- Are "better" at a variety of tasks
- Use more resources
- Are slower
- Tend to have a bias on (a) topic(s)

Fewer parameters:
- Might be "optimized" for a specific task
- Use fewer resources
- Are faster
- Might not have a bias on (a) topic(s)

Slide 70

Slide 70 text

Model variants

[model]:[text|instruct|...]

Text: Optimized for general text processing like translations, text summarization, or text generation
Instruct: Optimized for responding with completions to a specific instruction

Slide 71

Slide 71 text

Thank You For Listening!