Slide 1

JetBrains AI Deep Dive
Uladzislau Sazanovich, [email protected]

Slide 2

Structure
● JetBrains AI Overview
● Code Completion Overview
● Case Study: One-line completion

Slide 3

What are the characteristics of IDE features that come to mind?

Slide 4

Non-deterministic

Slide 5

Test generation

Slide 6

Commit generation

Slide 7

Built-in things

Slide 8

AI Chat

Slide 9

Code completion

Slide 10

Code Completion Overview

Slide 11

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "Once upon a time "

Slide 12

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "Once upon a time "

Next word prediction, using prefix

Slide 13

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "Once upon a time "

Language modeling!
Next word prediction, using prefix
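The "next word prediction" idea on this slide can be illustrated with a toy model. The sketch below is a bigram predictor, not the neural LanguageModel of the slides; every name in it (ToyLanguageModel, predict, the corpus string) is hypothetical.

```kotlin
// Toy bigram "language model": for each word, remember the word that most
// often followed it in the training text, then predict by walking those links.
// Illustration only; all names here are hypothetical.
class ToyLanguageModel(corpus: String) {
    private val next: Map<String, String>

    init {
        val words = corpus.split(" ").filter { it.isNotBlank() }
        next = words.zip(words.drop(1))                  // consecutive word pairs
            .groupBy({ it.first }, { it.second })        // word -> all followers
            .mapValues { (_, followers) ->               // keep the most frequent follower
                followers.groupingBy { it }.eachCount().maxByOrNull { it.value }!!.key
            }
    }

    // Continue the prefix one predicted word at a time.
    fun predict(prefix: String, maxWords: Int = 3): String {
        val out = mutableListOf<String>()
        var word = prefix.trim().split(" ").last()
        repeat(maxWords) {
            word = next[word] ?: return out.joinToString(" ")
            out += word
        }
        return out.joinToString(" ")
    }
}

fun main() {
    val lm = ToyLanguageModel("my dog is a golden retriever and my dog is happy")
    println(lm.predict("my dog "))  // → "is a golden"
}
```

The real model does the same thing with a neural network over tokens rather than a lookup table over words, but the interface, predict(prefix), is the same shape.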

Slide 14

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "Once upon a time "

Slide 15

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "Once upon a time "
    val expected = "is a golden retriever"
    val generation = lm.predict(prefix)
    assertEquals(generation, expected)
}

Slide 16

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "Once upon a time "
    val expected = "is a golden retriever"
    val generation = lm.predict(prefix)
    assertEquals(generation, expected)
}

Doesn’t make any sense

Slide 17

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog "
    val expected = "is a golden retriever"
    val generation = lm.predict(prefix)
    assertEquals(generation, expected)
}

Slide 18

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog "
    val expected = "is a golden retriever"
    val generation = lm.predict(prefix)
    assertEquals(generation, expected)
}

Prefix

Slide 19

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog "
    val expected = "is a golden retriever"
    val generation = lm.predict(prefix)
    assertEquals(generation, expected)
}

Prefix
Suffix

Slide 20

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog "
    val expected = "is a golden retriever"
    val generation = lm.predict(prefix)
    assertEquals(generation, expected)
}

file: src/test/com/dogs/retriever/Generate.kt

Slide 21

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog "
    val expected = "is a golden retriever"
    val generation = lm.predict(prefix)
    assertEquals(generation, expected)
}

file: src/test/com/dogs/retriever/Generate.kt
Prefix
Suffix
One-line completion
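One-line completion, as framed here, returns at most the rest of the current line. A minimal sketch of that post-processing step, with hypothetical names (the slides do not show JetBrains' actual implementation):

```kotlin
// Sketch: reduce a raw model generation to a one-line completion by cutting
// at the first newline. toOneLineCompletion is a hypothetical helper name,
// not JetBrains' implementation.
fun toOneLineCompletion(generation: String): String =
    generation.substringBefore('\n').trimEnd()

fun main() {
    // The model may keep generating past the current line...
    val raw = "is a golden retriever\nval generation = lm.predict(prefix)"
    // ...but a one-line completion keeps only the rest of the current line.
    println(toOneLineCompletion(raw))  // → is a golden retriever
}
```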


Slide 23

// Example of LanguageModel usage
fun runGeneration() {
    |

Slide 24

Multi-line completion

// Example of LanguageModel usage
fun runGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog is "
    val generation = lm.predict(prefix)
}

Slide 25

Multi-line completion

// Example of LanguageModel usage
fun runGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog is "
    val generation = lm.predict(prefix)
}

Much harder task


Slide 27

Prompt
● Type Declarations
● Local Variables
● Base Class
● Relevant Files
● MORE!

Slide 28

Prompt: we want all of it
● Type Declarations
● Local Variables
● Base Class
● Relevant Files
● MORE!

Slide 29

Prompt: but we can’t (16k tokens)
● Type Declarations
● Local Variables
● Base Class
● Relevant Files
● MORE!

Slide 30

Prompt

Slide 31

Prompt
We use on-device ML to rank

Slide 32

Prompt
We use on-device ML to rank, as well as good old heuristics
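One way to picture ranking plus a token budget is greedy packing: score every candidate context item, then add items best-first until the 16k-token budget is spent. The sketch below is an illustration under that assumption; ContextItem, packPrompt, and the word-count "tokenizer" are hypothetical names, and the real system uses an on-device ML ranker plus heuristics rather than a fixed score.

```kotlin
// Greedy prompt-packing sketch: rank candidate context items, add them
// best-first while they fit in the token budget. All names are hypothetical.
data class ContextItem(val text: String, val score: Double)

fun packPrompt(
    items: List<ContextItem>,
    budgetTokens: Int,
    tokens: (String) -> Int,
): List<ContextItem> {
    val packed = mutableListOf<ContextItem>()
    var used = 0
    for (item in items.sortedByDescending { it.score }) {  // best-ranked first
        val cost = tokens(item.text)
        if (used + cost <= budgetTokens) {
            packed += item
            used += cost
        }
    }
    return packed
}

fun main() {
    val tokens = { s: String -> s.split(" ").size }  // crude token estimate
    val items = listOf(
        ContextItem("class LanguageModel { fun predict(prefix: String) }", 0.9),
        ContextItem("val prefix = \"My dog is \"", 0.7),
        ContextItem("an entire unrelated file", 0.1),
    )
    // With a budget of 10 "tokens", only the top-ranked item fits.
    println(packPrompt(items, 10, tokens).map { it.score })  // → [0.9]
}
```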

Slide 33

IDE

Multi-line context:
fun runGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog is "
    val generation = lm.predict(prefix)
}

Slide 34

IDE

LanguageModel.kt:
class LanguageModel {
    fun predict(prefix: String) { ... }
}

Multi-line context:
fun runGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog is "
    val generation = lm.predict(prefix)
}

Slide 35

IDE

LanguageModel.kt:
class LanguageModel {
    fun predict(prefix: String) { ... }
}

Multi-line context:
fun runGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog is "
    val generation = lm.predict(prefix)
}

ProjectInfo:
file: Run.kt
lang: Kotlin 1.9
libraries: ...


Slide 37

Cascade
● One-line ☁
● Multi-line ☁
● LLM ☁

Slide 38

Cascade
● Local 🦙
● One-line ☁
● Multi-line ☁
● LLM ☁

Slide 39

Cascade
● Local 🦙
● One-line ☁
● Multi-line ☁
● LLM ☁
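The cascade can be read as "ask the cheapest tier first, fall through on decline". A sketch under that assumption; the CompletionProvider interface and the null-as-decline convention are inventions here, since the slide only names the tiers.

```kotlin
// Cascade sketch: try providers in tier order, return the first answer.
// Interface and decline convention are hypothetical assumptions.
interface CompletionProvider {
    fun complete(prefix: String): String?  // null = this tier declines
}

class Cascade(private val tiers: List<CompletionProvider>) : CompletionProvider {
    // Tier order on the slide: Local 🦙, One-line ☁, Multi-line ☁, LLM ☁.
    override fun complete(prefix: String): String? =
        tiers.firstNotNullOfOrNull { it.complete(prefix) }
}

fun main() {
    val local = object : CompletionProvider {
        override fun complete(prefix: String): String? = null  // local model declines
    }
    val cloud = object : CompletionProvider {
        override fun complete(prefix: String): String? = "is a golden retriever"
    }
    println(Cascade(listOf(local, cloud)).complete("My dog "))  // → is a golden retriever
}
```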

Slide 40

Case Study: One-line completion

Slide 41

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog "
    val expected = "is a golden retriever"
    val generation = lm.predict(prefix)
    assertEquals(generation, expected)
}

Just one line

Slide 42

LLMs are great
● Intelligent
● Huge context
● Easy to prototype

Slide 43

LLMs are great
● Intelligent
● Huge context
● Easy to prototype
● Slow and expensive

Slide 44

Training from Scratch
● Flexibility

Slide 45

Training from Scratch
● Flexibility
● Full control of data

Slide 46

Training from Scratch
● Flexibility
● Full control of data
● Any architecture

Slide 47

First try
NVIDIA/Megatron-LM
starcoder

Slide 48

First try
starcoder-15B

Slide 49

First try
starcoder-15B
starcoder-1B

Slide 50

First try
starcoder-15B
starcoder-1B
starcoder-160M

Slide 51

First try
starcoder-15B
starcoder-1B
starcoder-160M
jetcoder-350M

Slide 52

Line Accuracy

val prefix = "My dog is golden "
val expected = "retriever"
MATCH

val prefix = "My dog is golden "
val exp = 1e6
MISS :(
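Line Accuracy, as shown, counts a completion only on exact match of the generated line against the ground-truth line. A minimal sketch of that metric; the function name and the whitespace trimming are assumptions, since the slides only show MATCH/MISS examples.

```kotlin
// Line Accuracy sketch: fraction of (generated, expected) line pairs that
// match exactly. Name and trimming are hypothetical assumptions.
fun lineAccuracy(results: List<Pair<String, String>>): Double =
    results.count { (generated, expected) ->
        generated.trim() == expected.trim()
    }.toDouble() / results.size

fun main() {
    val results = listOf(
        "retriever" to "retriever",      // MATCH
        "val exp = 1e6" to "retriever",  // MISS :(
    )
    println(lineAccuracy(results))  // → 0.5
}
```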

Slide 53

Line Accuracy, % (chart)

Slide 54

Let’s scale

Slide 55

Let’s scale …a bit


Slide 57

● Modern architecture
● Developed by Meta
● Supported in Megatron-LM

Slide 58

7 000 000 000 parameters: 560 ms

Slide 59

7 000 000 000 parameters: 560 ms
350 000 000 parameters: 590 ms

Slide 60

A model 1/20th the size has the same latency

Slide 61

Optimise
● Added support for the Starcoder architecture in DeepSpeed

Slide 62

Optimise
● Added support for the Starcoder architecture in DeepSpeed
590 ms → 326 ms

Slide 63

~0.5B parameters

Slide 64

~0.5B parameters
wide-llama: 6 layers

Slide 65

~0.5B parameters
tall-llama: 24 layers

Slide 66

class LanguageModel {
    fun predict(prefix: String) { ... }
}

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog "
    val expected = "is a golden retriever"
    val generation = lm.predict(prefix)
    assertEquals(generation, expected)
}

Slide 67

class LanguageModel {
    fun predict(prefix: String) { ... }
}

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog "
    val expected = "is a golden retriever"
    val generation = lm.predict(prefix)
    assertEquals(generation, expected)
}

⭐ 72 tokens

Slide 68

class LanguageModel {
    fun predict(prefix: String) { ... }
}

fun testGeneration() {
    val lm = LanguageModel()
    val prefix = "My dog "
    val expected = "is a golden retriever"
    val generation = lm.predict(prefix)
    assertEquals(generation, expected)
}

⭐ 72 tokens
🦙 96 tokens

Slide 69

"\n\n\t\t\t\t"
⭐ 1 token
🦙 6 tokens

Slide 70

Overall
● 🦙 0.5B
● Tokeniser from starcoder
● Tall and wide versions

Slide 71

Line Accuracy, % (chart comparing wide-llama-0.5B, tall-llama-0.5B, starcoder-160M, starcoder-1B, jetcoder-350M)

Slide 72

Latency on A100 GPU, ms (chart)

Slide 73

We have a lot of 🦙🦙🦙!

Slide 74

Work in Progress 🔨🔨🔨

Slide 75

Thank you for listening!
Uladzislau Sazanovich, [email protected]