Slide 1

Slide 1 text

Understanding Jlama in Quarkus by Mario Fusco & Jake Luciani

Slide 2

Slide 2 text

Introducing Jlama

Slide 3

Slide 3 text

Jlama - Serve your LLM in pure Java
❖ Run LLM inference directly embedded into your Java application
❖ Based on the Java Vector API (illustrative snippet below)
❖ Integrated with Quarkus and LangChain4j
❖ Jlama includes out-of-the-box:
➢ Support for many different LLM families
➢ A command line tool that makes it easy to use
➢ A tool for model quantization
➢ A pure Java tokenizer
➢ Distributed inference
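To give a flavor of what "based on the Java Vector API" means, here is an illustrative SIMD dot product (not actual Jlama code), the building block of the matrix multiplications that dominate inference. It needs --add-modules jdk.incubator.vector, the same flag configured for the Quarkus build later in the slides.

import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

public class VectorDot {

    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // SIMD dot product: processes SPECIES.length() floats per iteration
    static float dot(float[] a, float[] b) {
        float sum = 0f;
        int i = 0;
        int upperBound = SPECIES.loopBound(a.length);
        for (; i < upperBound; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            sum += va.mul(vb).reduceLanes(VectorOperators.ADD);
        }
        for (; i < a.length; i++) { // scalar tail for the remaining elements
            sum += a[i] * b[i];
        }
        return sum;
    }
}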

Slide 4

Slide 4 text

… but why Java?
Fast development/prototyping → No need to install, configure, and interact with any external server.
Security → Embedding the model inference in the same JVM instance as the application using it eliminates the need to interact with the LLM only through REST calls, thus preventing leaks of private data.
Legacy support → Legacy users still running monolithic applications on EAP can include LLM-based capabilities in those applications without changing their architecture or platform.
Monitoring and Observability → Gathering statistics on the reliability and speed of the LLM responses can be done using the same tools already provided by EAP or Quarkus.
Developer Experience → Debuggability will be simplified, allowing Java developers to also navigate and debug the Jlama code if necessary.
Distribution → Possibility to include the model itself in the same fat jar as the application using it (even though this is probably advisable only in very specific circumstances).
Edge friendliness → Deploying a self-contained LLM-capable Java application also makes it a better fit than a client/server architecture for edge environments.
Embedding of auxiliary LLMs → Apps using different LLMs, for instance a smaller one to validate the responses of the main, bigger one, can use a hybrid approach, embedding the auxiliary LLMs.
Similar lifecycle between model and app → Since prompts are very dependent on the model, when the model gets updated, even through fine-tuning, the prompt may need to be replaced and the app updated accordingly.

Slide 5

Slide 5 text

Because we are not data scientists … well … no, seriously … why not Python?

Slide 6

Slide 6 text

Because we are not data scientists … well … no, seriously … why not Python? What we do is integrate existing models

Slide 7

Slide 7 text

Because we are not data scientists … well … no, seriously … why not Python? What we do is integrate existing models into enterprise-grade systems and applications

Slide 8

Slide 8 text

Because we are not data scientists … well … no, seriously … why not Python? What we do is integrate existing models into enterprise-grade systems and applications.
Do you really want to do
● Transactions
● Security
● Scalability
● Observability
● … you name it
in Python???

Slide 9

Slide 9 text

Integrating Jlama with Quarkus and LangChain4j

<!-- Quarkus/LangChain4j/Jlama integration module -->
<dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-jlama</artifactId>
    <version>${quarkus.langchain4j.version}</version>
</dependency>
<dependency>
    <groupId>com.github.tjake</groupId>
    <artifactId>jlama-core</artifactId>
    <version>${jlama.version}</version>
</dependency>
<!-- Jlama native support for specific operating system (optional) -->
<dependency>
    <groupId>com.github.tjake</groupId>
    <artifactId>jlama-native</artifactId>
    <version>${jlama.version}</version>
    <classifier>${os.detected.classifier}</classifier>
</dependency>

Slide 10

Slide 10 text

Integrating Jlama with Quarkus and LangChain4j

Same three dependencies as on the previous slide (quarkus-langchain4j-jlama, jlama-core, jlama-native), plus the Quarkus Maven plugin configuration:
❖ ${quarkus.platform.group-id} : quarkus-maven-plugin : ${quarkus.platform.version} (extensions enabled)
❖ JVM flags: --enable-preview --enable-native-access=ALL-UNNAMED
❖ Added module: jdk.incubator.vector → Enable Vector API preview feature
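Reassembled as XML, the plugin section plausibly looks like the sketch below. The element names inside <configuration> (e.g. <jvmArgs>, <modules>) are assumptions based on a standard quarkus-maven-plugin setup and are not shown verbatim on the slide.

<plugin>
    <groupId>${quarkus.platform.group-id}</groupId>
    <artifactId>quarkus-maven-plugin</artifactId>
    <version>${quarkus.platform.version}</version>
    <extensions>true</extensions>
    <configuration>
        <!-- Enable Vector API preview feature (element names assumed, flags from the slide) -->
        <jvmArgs>--enable-preview --enable-native-access=ALL-UNNAMED</jvmArgs>
        <modules>
            <module>jdk.incubator.vector</module>
        </modules>
    </configuration>
</plugin>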

Slide 11

Slide 11 text

Integrating Jlama with Quarkus and LangChain4j

Same Maven setup as on the previous slides (Jlama dependencies plus the quarkus-maven-plugin with the Vector API preview flags), plus the model configuration:

# application.properties
quarkus.langchain4j.jlama.chat-model.model-name=tjake/Llama-3.2-1B-Instruct-JQ4
quarkus.langchain4j.jlama.chat-model.temperature=0

Configure a model from Hugging Face (it will be downloaded automatically on the 1st run)

Slide 12

Slide 12 text

A pure Java LLM-based application: the site summarizer

@Path("/summarize")
public class SiteSummarizerResource {

    @Inject
    private SiteSummarizer siteSummarizer;

    @GET
    @Path("/{type}/{topic}")
    @Produces(MediaType.TEXT_PLAIN)
    public Multi<String> read(@PathParam("type") String type, @PathParam("topic") String topic) {
        return siteSummarizer.summarize(SiteType.determineType(type), topic);
    }
}

@RegisterAiService
public interface SummarizerAiService {

    @SystemMessage("""
            You are an assistant that receives the content of a web page and sums up
            the text on that page. Add key takeaways to the end of the sum-up.
            """)
    @UserMessage("Here's the text: '{text}'")
    Multi<String> summarize(@V("text") String text);
}

@ApplicationScoped
class SiteSummarizer {

    @Inject
    private SummarizerAiService summarizerAiService;

    public Multi<String> summarize(SiteType siteType, String topic) {
        String html = SiteCrawler.crawl(siteType, topic);
        String content = TextExtractor.extractText(html, 20_000);
        return summarizerAiService.summarize(content);
    }
}
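For illustration only (not part of the site-summarizer repo), a minimal JDK HttpClient snippet that streams the plain-text summary produced by the endpoint above. The /wikipedia/quarkus path segments are placeholders; check the repo's SiteType for the values it actually accepts.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SummarizerClient {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                // Placeholder path parameters: {type}/{topic}
                .uri(URI.create("http://localhost:8080/summarize/wikipedia/quarkus"))
                .GET()
                .build();

        // Print the summary as it is streamed back, line by line
        client.send(request, HttpResponse.BodyHandlers.ofLines())
              .body()
              .forEach(System.out::println);
    }
}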

Slide 13

Slide 13 text

A pure Java LLM-based application: the site summarizer https://github.com/mariofusco/site-summarizer

Slide 14

Slide 14 text

Quarkus LangChain4j Workshop also covers Jlama https://quarkus.io/quarkus-workshop-langchain4j/

Slide 15

Slide 15 text

The Parasol demo https://github.com/rh-rad-ai-roadshow/parasol-insurance

Slide 16

Slide 16 text

The Parasol trained LLM: 13.43 GB

Slide 17

Slide 17 text

Quantizing the Parasol model with Jlama

jlama quantize rhelai/granite-7b-redhat-lab

31.3%
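A quick sanity check on the numbers, assuming the 31.3% on the slide is the quantized size relative to the original model (the slide does not state this explicitly):

13.43 GB × 0.313 ≈ 4.2 GB

which is in line with what 4-bit (Q4) quantization of a 16-bit model would give.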

Slide 18

Slide 18 text

Making Parasol run on Jlama

Slide 19

Slide 19 text

Making Parasol run on Jlama

Slide 20

Slide 20 text

Jlama Performance Limits (Large Models)
● Inference is memory bound (see the back-of-the-envelope below)
● Inference is 99% matrix multiplication
● Bigger model == slower inference
● Humans read at ~5 words/sec
● Latency 200 ms / token MAX ← the sweet spot
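A back-of-the-envelope illustration of why this is the sweet spot (the memory-bandwidth figure is an assumption for the sake of the example, not from the slides): generating one token requires streaming essentially all model weights through memory, so

~50 GB/s memory bandwidth ÷ ~7 GB of weights ≈ 7 tokens/sec upper bound
5 tokens/sec (human reading speed) → 1/5 s = 200 ms per token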

Slide 21

Slide 21 text

Jlama Performance Limits (Large Models)
Performance of the same-sized model, ~7 GB (8B-10B parameter model @ Q4):
● Plain Java + Threads
● Native SIMD C (via FFI)
● Java Vector API
Just barely 5 tok/sec
Too bad Java + GPU doesn't mix 🤔

Slide 22

Slide 22 text

Java + GPU?
● Project Babylon one day?
○ That will be great! But when… 202X?
● TornadoVM
○ Requires a bespoke wrapper around the JDK to compile/run applications (with all driver dependencies)
● What else is a multi-platform, portable runtime for the GPU?

Slide 23

Slide 23 text

Jlama Performance (GPU Support Coming Soon!)
Performance of the same-sized model, ~7 GB (8B-10B parameter model @ Q4):
● Plain Java + Threads
● Native SIMD C (via FFI)
● Java Vector API
● Java <-> WebGPU (Google Dawn WebGPU Engine + FFI)
Works on Win/Mac/Linux!

Slide 24

Slide 24 text

DEMO TIME !!!