Slide 1

Slide 1 text

Coding with AI May 2025

Slide 2

Slide 2 text

"Evolution, Morpheus. Evolution! Like The Dinosaur. Look Out That Window. You've Had Your Time. The Future Is OUR World, Morpheus. The Future Is OUR Time."

Slide 3

Slide 3 text

Becoming Neo May 2025

Slide 4

Slide 4 text

Agenda ● Getting intuition for the technology and buzzword ● How to maximize value from working with AI and what are the limitations

Slide 5

Slide 5 text

So let’s start with the basics. The output of layer $l \in 1 \ldots L$, $\mathbf{X}_l$, is (standard transformer layer, reconstructed)

$\mathbf{X}_l = f\big([\mathbf{H}_1; \ldots; \mathbf{H}_A]\,\mathbf{W}^{O} + \mathbf{b}^{O}\big)$, where $\mathbf{H}_a = \mathrm{softmax}\!\Big(\tfrac{(\mathbf{X}_{l-1}\mathbf{W}_a^{Q})(\mathbf{X}_{l-1}\mathbf{W}_a^{K})^{\top}}{\sqrt{d_E/A}} + M\Big)\,\mathbf{X}_{l-1}\mathbf{W}_a^{V}$

where $a \in 1 \ldots A$ is the head number, $f$ is some function like ReLU or whatever, the $\mathbf{b}$s are biases ($M$ is the attention mask and $d_E$ is the size of the embedding).

Slide 6

Slide 6 text

LLM - Large Language Model ● Large - trained on a huge data set and uses a huge number of parameters ● Language - geared toward understanding language ● Model - a type of neural network

Slide 7

Slide 7 text

Ok - so the real basics Perceptron (1957)
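The perceptron is small enough to fit in a few lines. This is a minimal sketch (names and hyperparameters are illustrative): a weighted sum, a step activation, and the classic learning rule, trained here to learn logical AND.

```python
# A minimal perceptron (Rosenblatt, 1957): weighted sum + step activation.
# Trained on the logical AND function; all names here are illustrative.

def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    """samples: list of ((x1, x2), label) pairs."""
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = label - pred
            # Perceptron learning rule: nudge weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
# preds -> [0, 0, 0, 1]
```

A single perceptron can only learn linearly separable functions (AND works, XOR famously does not), which is exactly why we stack them into networks.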

Slide 8

Slide 8 text

Network

Slide 9

Slide 9 text

No content

Slide 10

Slide 10 text

Network https://playground.tensorflow.org

Slide 11

Slide 11 text

embeddings https://projector.tensorflow.org/
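Embeddings place words in a vector space where geometric closeness tracks meaning, which is what the Projector demo visualizes. A toy sketch of that idea (the 3-D vectors below are made up; real embeddings have hundreds of dimensions):

```python
import math

# Cosine similarity: how aligned two embedding vectors are (1.0 = same direction).
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Made-up toy embeddings for illustration only.
emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

# "king" should sit closer to "queen" than to "apple".
sim_kq = cosine(emb["king"], emb["queen"])
sim_ka = cosine(emb["king"], emb["apple"])
```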

Slide 12

Slide 12 text

Self Supervised Learning The quick brown fox jumps over the _____ The quick brown fox jumps over the _____ dog The _____ brown fox jumps over the lazy dog
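The point of self-supervised learning is that the raw text is its own labeler: one sentence mints many (context → next word) training pairs, no human annotation needed. A minimal sketch:

```python
# Self-supervised data generation: every prefix of the sentence becomes a
# training example whose label is simply the next word.

def next_word_pairs(text):
    words = text.split()
    return [(" ".join(words[:i]), words[i]) for i in range(1, len(words))]

pairs = next_word_pairs("The quick brown fox jumps over the lazy dog")
# e.g. the last pair is ("The quick brown fox jumps over the lazy", "dog")
```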

Slide 13

Slide 13 text

Attention Attention helps a neural network link related words and handle disambiguation ● Lexical - e.g. “flies” as a verb vs. part of a noun; understanding that “like” relates to arrow/banana ● Structural - “fruit flies” is a unit Time flies like an arrow; fruit flies like a banana
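Mechanically, attention is a softmax over dot products: each word's query is compared against every word's key, and the resulting weights mix the value vectors. A pure-Python sketch with toy 2-D vectors (real models use learned projections and many heads):

```python
import math

# Toy scaled dot-product attention. Each query attends over all keys;
# softmax of the scaled dot products gives the mixing weights.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)          # sums to 1.0
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# A query aligned with the first key attends mostly to the first value.
out = attention([[1, 0]], [[1, 0], [0, 1]], [[1, 0], [0, 1]])
```

This is how “flies” can end up weighted toward “time” in one sentence and toward “fruit” in another: the weights are computed fresh from the surrounding words.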

Slide 14

Slide 14 text

Consequences ● The main algorithm is next-word (actually part-of-word) prediction ○ What we get is one option ● Common things are easy - if you are using common practices and working in areas that have a lot of good examples, chances are AI can really push you fast ○ The corollary: if you have unique patterns in your code, or a completely novel area, AI will struggle ● Getting exactly what you want is hard ● Probable != Correct (aka “hallucinations”) ● CONTEXT is king

Slide 15

Slide 15 text

https://github.com/vectara/hallucination-leaderboard

Slide 16

Slide 16 text

Context is king!

Slide 17

Slide 17 text

No content

Slide 18

Slide 18 text

Agent

Slide 19

Slide 19 text

MCP Model Context Protocol (but there’s already AI in API)

Slide 20

Slide 20 text

RAG - Retrieval Augmented Generation ● Used to be a pre-process to enrich the context - now it is basically a big MCP for Search ● Helps bring relevant data (to help with Attention) ● Enforces permissions over data
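The retrieve-then-augment loop can be sketched in a few lines. Everything below is an illustrative assumption (the corpus, the naive word-overlap scoring, and the prompt template); real RAG systems use embedding search and permission filtering:

```python
# RAG in miniature: retrieve the most relevant snippet, then prepend it to
# the prompt so the model's attention has concrete facts to work with.

def score(query, doc):
    # Naive relevance: count of shared lowercase words (toy stand-in for
    # embedding similarity).
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, corpus, k=1):
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, corpus):
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The deploy pipeline runs on every merge to main.",
    "Unit tests live under tests/ and run with pytest.",
    "The cafeteria menu changes on Mondays.",
]
prompt = build_prompt("How do I run the unit tests?", corpus)
```

Note how retrieval also acts as a filter: only the selected snippet reaches the model, which is where the permission enforcement mentioned above can hook in.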

Slide 21

Slide 21 text

Consequences ● The main algorithm is next-word (actually part-of-word) prediction ○ What we get is one option ● Common things are easy - if you are using common practices and working in areas that have a lot of good examples, chances are AI can really push you fast ○ The corollary: if you have unique patterns in your code, or a completely novel area, AI will struggle ● Getting exactly what you want is hard ● Probable != Correct (aka “hallucinations”) ● CONTEXT is king

Slide 22

Slide 22 text

Sample Rules

Slide 23

Slide 23 text

β€œNever Send A Human To Do A Machine's Job.β€œ

Slide 24

Slide 24 text

What’s a β€œmachine job” then? ● Write Unit test ● Check coverage ● Verify Standards ● Scan for vulnerabilities ● Convert Figma to code ● Alot more

Slide 25

Slide 25 text

But remember ● Simple refactorings are much harder for an LLM - they don’t copy-paste, they regenerate, so big refactorings can be risky ● Don’t try big things unless you/the LLM have also written down all the tasks (e.g. in an MD file); the attention can break

Slide 26

Slide 26 text

Feedback loop ● Long context ○ Hard to hold attention on the right thing ○ Increases the odds of hallucination (we predict on predictions rather than on concrete knowledge) ● So create fresh contexts often, work in small increments ● Don’t hesitate to git commit successful interim steps

Slide 27

Slide 27 text

β€œWe Can Never See Past The Choices We Don't Understand.”

Slide 28

Slide 28 text

LLMs can also help you understand ● The code you or someone else wrote ● The essence of documentation (sometimes via tools) ● The plan before making changes ● Nuances of the code you, or you and the LLM, just created ● Trace why something is broken ● Analyze profiling data (CPU/memory)

Slide 29

Slide 29 text

RISKS ● The S in LLM/MCP stands for security ● Once only Claude and I knew what this code does... now... ● Endless POC-level code

Slide 30

Slide 30 text

β€œI Know Kung-Fu” (or do I)

Slide 31

Slide 31 text

Putting it all together ● (modified) RIPER framework ● Structured Workflow: Clear separation of development phases, with Research and Innovate unified for a streamlined discovery and ideation process. ● Memory Bank: Persistent documentation across sessions ● State Management: Explicit tracking of project phase and mode

Slide 32

Slide 32 text

No content

Slide 33

Slide 33 text

No content

Slide 34

Slide 34 text

No content

Slide 35

Slide 35 text

No content

Slide 36

Slide 36 text

"I Don't Know The Future. I Didn't Come Here To Tell You How This Is Going To End. I Came Here To Tell You How It's Going To Begin."