Slide 1

Slide 1 text

Java + LLMs: A Hands-on Guide to Building LLM Apps in Java with Jakarta
Syed M Shaaf, Developer Advocate, Red Hat
Bazlur Rahman, Java Champion 🏆, Staff Software Developer at DNAstack

Slide 2

Slide 2 text

@bazlur.ca @shaaf.dev
Systems, Data, Networks, and a Solution?
● Systems do not speak natural language, cannot translate, and lack context outside of system boundaries (e.g., sentiment).
● Generating content is costly and sometimes hard.
● Rapid data growth.
● Rising expectations: customers demand instant, personalized solutions.
● Inefficiency: manual processes increase costs and slow operations.
● Skill gaps: limited expertise in AI adoption.
https://github.com/rokon12/llm-jakarta

Slide 3

Slide 3 text

Understanding the journey that brought us here…
● Expert systems: no use of data; manually authored rules; brittle; labour-intensive.
● Machine learning: data prep and feature engineering; supervised and unsupervised learning; classification.
● Deep learning
● Foundation models: learning without labels; adapt and tune; massive data appetite.

Slide 4

Slide 4 text

Large Language Models
Foundation models: learning without labels; adapt and tune; massive data appetite.
● Tasks: translation, summarization, writing, Q&A
● “Attention Is All You Need”: the Transformer architecture
● Recognize, predict, and generate text
● Trained on billions of words
● Can also be tuned further
An LLM predicts the next token based on its training data and statistical deduction.
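Next-token prediction can be made concrete with a toy sketch in plain Java. A trained LLM scores every token in its vocabulary with a neural network; here the "model" is just a fixed probability table with made-up numbers, and greedy decoding simply takes the argmax.

```java
import java.util.Map;

public class NextToken {

    // Hypothetical probabilities for the token following "The capital of France is"
    static final Map<String, Double> NEXT = Map.of(
            " Paris", 0.92,
            " a", 0.03,
            " the", 0.02,
            " located", 0.01);

    // Greedy decoding: pick the most probable next token (argmax).
    static String predict(Map<String, Double> dist) {
        return dist.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow();
    }

    public static void main(String[] args) {
        System.out.println("The capital of France is" + predict(NEXT));
    }
}
```

Real decoders usually sample from the distribution (temperature, top-p) rather than always taking the argmax, which is why the same prompt can yield different answers.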

Slide 5

Slide 5 text

Tokens
Tokenization: breaking text down into tokens, e.g. with Byte Pair Encoding (BPE) or WordPiece; these handle diverse languages and manage vocabulary size efficiently.
[12488, 6391, 4014, 316, 1001, 6602, 11, 889, 1236, 4128, 25, 3862, 181386, 364, 61064, 9862, 1299, 166700, 1340, 413, 12648, 1511, 1991, 20290, 15683, 290, 27899, 11643, 25, 93643, 248, 52622, 122, 279, 168191, 328, 9862, 22378, 2491, 2613, 316, 2454, 1273, 1340, 413, 73263, 4717, 25, 220, 7633, 19354, 29338, 15]
https://platform.openai.com/tokenizer
“Running”, “unpredictability” (word-based tokenization). Or: “run” “ning”; “un” “predict” “ability” (subword-based tokenization, used by many LLMs).
“Build a Large Language Model (From Scratch)”, Sebastian Raschka
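The subword splits above can be reproduced with a toy greedy longest-match tokenizer over a tiny hand-made vocabulary. Real BPE and WordPiece tokenizers learn their vocabularies from data, but the splitting idea is similar.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class SubwordTokenizer {

    // Tiny illustrative vocabulary; a real tokenizer has tens of thousands of entries.
    static final Set<String> VOCAB = Set.of("un", "predict", "ability", "run", "ning");

    static List<String> tokenize(String word) {
        List<String> tokens = new ArrayList<>();
        int i = 0;
        while (i < word.length()) {
            int end = word.length();
            // Try the longest vocabulary entry starting at position i.
            while (end > i && !VOCAB.contains(word.substring(i, end))) {
                end--;
            }
            if (end == i) {                 // unknown character: emit it on its own
                tokens.add(String.valueOf(word.charAt(i)));
                i++;
            } else {
                tokens.add(word.substring(i, end));
                i = end;
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("unpredictability")); // [un, predict, ability]
        System.out.println(tokenize("running"));          // [run, ning]
    }
}
```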

Slide 6

Slide 6 text

Amazing things. Stupid mistakes.
“…Do not mix accuracy with truth…”

Slide 7

Slide 7 text

Truth is discrete, not continuous.

Slide 8

Slide 8 text


Slide 9

Slide 9 text

LangChain4j
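A minimal LangChain4j chat call might look like the following sketch. It assumes the OpenAI integration module is on the classpath and an API key in the environment; exact class and method names vary between LangChain4j versions, so treat this as illustrative rather than copy-paste ready.

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class HelloLlm {
    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")   // any supported model name
                .build();

        // Send a single user message and print the model's reply.
        String answer = model.generate("Greet the Jakarta EE developers in one sentence.");
        System.out.println(answer);
    }
}
```

LangChain4j also offers a higher-level `AiServices` facade that maps an annotated Java interface onto the model, which is what the demo in this talk builds on.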

Slide 10

Slide 10 text

DEMO

Slide 11

Slide 11 text

Function calling / Tools

    @Tool
    double squareRoot(double x) {
        return Math.sqrt(x);
    }

● Call other services or functions to enhance the response.
● E.g. Web APIs, internal system requests.

Slide 12

Slide 12 text

Retrieval Augmented Generation
Enhanced information retrieval: RAG combines the strengths of retrieval-based and generative models, pulling relevant, up-to-date information from external sources and databases. This makes responses not only contextually accurate but also rich in current, specific detail.
Improved answer accuracy: by integrating a retrieval component, RAG can provide more accurate answers, especially for factual questions. It retrieves relevant documents or snippets from a large corpus and uses them to guide the generative model toward responses that are factually correct and informative.
Versatile applications: RAG applies across domains such as customer support, knowledge management, and research assistance. Combining retrieved data with generative responses suits it to complex tasks that require both extensive knowledge and contextual understanding.
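The retrieve-then-augment flow can be sketched without any vector database: score a tiny in-memory corpus against the question by word overlap (a crude stand-in for embedding similarity) and prepend the best match to the prompt. The corpus strings here are made up for illustration; a real pipeline would embed documents and query a vector store.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class NaiveRag {

    static final List<String> CORPUS = List.of(
            "Jakarta EE 11 requires Java SE 17 or later.",
            "LangChain4j integrates LLMs into Java applications.",
            "BPE tokenization splits words into subword units.");

    // Count how many words of doc also appear in the question.
    static long overlap(String question, String doc) {
        Set<String> q = new HashSet<>(Arrays.asList(question.toLowerCase().split("\\W+")));
        return Arrays.stream(doc.toLowerCase().split("\\W+")).filter(q::contains).count();
    }

    // "Retrieval": pick the highest-scoring document for this question.
    static String retrieve(String question) {
        return CORPUS.stream()
                .max((d1, d2) -> Long.compare(overlap(question, d1), overlap(question, d2)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        String question = "Which Java version does Jakarta EE 11 require?";
        String context = retrieve(question);
        // "Augmentation": the context-stuffed prompt is what goes to the LLM.
        System.out.println("Context: " + context + "\nQuestion: " + question);
    }
}
```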

Slide 13

Slide 13 text

Retrieval Augmented Generation
● What is the representation of the data?
● How do I want to split? Per document, chapter, or sentence?
● How many tokens do I want to end up with?
● How much overlap is there between segments?
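The splitting questions above boil down to two numbers: a window size and an overlap. A dependency-free sketch that splits on words (a stand-in for counting real model tokens):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Chunker {

    // Split text into chunks of `size` words, each sharing `overlap` words
    // with its predecessor so that context is not cut at chunk boundaries.
    static List<String> split(String text, int size, int overlap) {
        String[] words = text.split("\\s+");
        List<String> chunks = new ArrayList<>();
        int step = size - overlap;          // advance by less than a full chunk
        for (int start = 0; start < words.length; start += step) {
            int end = Math.min(start + size, words.length);
            chunks.add(String.join(" ", Arrays.copyOfRange(words, start, end)));
            if (end == words.length) break; // last chunk reached
        }
        return chunks;
    }

    public static void main(String[] args) {
        String text = "a b c d e f g h";
        System.out.println(split(text, 4, 2)); // [a b c d, c d e f, e f g h]
    }
}
```

Larger overlap preserves more context across boundaries but costs more tokens per retrieved segment; LangChain4j ships document splitters that make the same trade-off over real tokens.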

Slide 14

Slide 14 text

Thank you!
Syed M Shaaf, Developer Advocate, Red Hat
fosstodon.org/@shaaf · sshaaf · https://www.linkedin.com/in/shaaf/ · shaaf.dev · https://bsky.app/profile/shaaf.dev
Bazlur Rahman, Java Champion 🏆
Empowering developers through speaking 🗣, writing ✍, mentoring 🤝 & community building 🌍. Published author 📖. Contributing Editor at InfoQ and Foojay.IO.
https://x.com/bazlur_rahman · rokon12 · https://www.linkedin.com/in/bazlur/ · https://bazlur.ca/ · https://bsky.app/profile/bazlur.ca
Source for the demo: https://github.com/rokon12/llm-jakarta
LangChain4j: https://docs.langchain4j.dev/