

Beyond Code Sharing: Building Smarter Apps with KMP and Agents :: Devfest Chennai 2025

For years we've talked about code sharing with Kotlin Multiplatform (KMP). This talk explores the next frontier: integrating AI agents directly into your KMP architecture to create intelligent, context-aware mobile applications.

I'll start by explaining KMP and Koog, then demonstrate the usage and power of Koog by building a document management app that evolves from a simple KMP baseline to an AI-powered assistant, all from a single codebase serving Android, iOS, and backend.

The Journey:
Act 1: The Foundation
• Building a document scanner app with shared business logic across platforms
• Implementing OCR, storage (SQLDelight), and sync (Ktor) in common modules
• Structuring your KMP project for future agent integration
Act 2: Intelligence Layer with Koog
• What is Koog and how it simplifies AI agent integration in Kotlin
• Adding an agent that auto-classifies documents and extracts key fields
• Enabling natural language queries: "show my last electricity bill"
• Building proactive features: automatic reminders for document renewals
Act 3: Architectural Patterns
• Clean separation between business logic and agent capabilities
• How agent integration in KMP actually simplifies your architecture
• Practical patterns for handling agent responses across platforms
• Testing strategies for agent-enhanced features
Act 4: Production Reality
• Battle-tested patterns from scaling KMP at unicorn startups
• Why agent integration in KMP actually reduces complexity (counterintuitive but true)
Key Takeaways:
• Complete architectural blueprint with code samples and diagrams
• Step-by-step guide to adding Koog to existing KMP projects (see the sketch after this list)
• Common patterns for agent-UI interaction in multiplatform apps
• Real examples you can adapt for your own use cases
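
As a first taste of that step-by-step guide, here is a minimal, hypothetical sketch of wiring a Koog agent into a shared KMP module. The dependency coordinate matches the deck's own slides; the import paths, function name, and prompt text are assumptions for illustration, not code from the talk.

    // build.gradle.kts (commonMain dependencies):
    //   implementation("ai.koog:koog-agents:0.5.2")

    // Import paths assumed from Koog 0.5.x; verify against https://docs.koog.ai/
    import ai.koog.agents.core.agent.AIAgent
    import ai.koog.prompt.executor.clients.google.GoogleModels
    import ai.koog.prompt.executor.llms.all.simpleGoogleAIExecutor

    // Hypothetical shared-code entry point for the document assistant.
    suspend fun askDocumentAssistant(apiKey: String, question: String): String {
        val agent = AIAgent(
            promptExecutor = simpleGoogleAIExecutor(apiKey),
            llmModel = GoogleModels.Gemini2_5Flash,
            systemPrompt = "You help users find, classify, and summarize their scanned documents."
        )
        return agent.run(question) // e.g. "show my last electricity bill"
    }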

This isn't about AI hype—it's about pragmatically extending your Kotlin skills to build the next generation of mobile apps. You'll leave with actionable insights and real code you can apply immediately.


Rivu Chakraborty

November 08, 2025

Transcript

  1. Beyond Code Sharing: Building Smarter Apps with KMP and Agents (with Koog, Gemini and More). A Practical Guide to Native & Smart Apps for Multiple Platforms. Rivu Chakraborty
  2. WHO AM I?
     • GDE (Google Developer Expert) for Android
     • Previously India’s first GDE for Kotlin
     • More than 14 years in the industry
     • Founder @ Mobrio Studio
     • Previously: JioCinema/JioHotstar, Byju’s, Paytm, Gojek, Meesho
     • Author (wrote multiple Kotlin books) • Speaker • Mentor
     • Learning / Exploring ML/AI
     • Community Person (Started KotlinKolkata)
     • YouTuber (http://youtube.com/@RivuTalks)
  3. • Led personally by me (Rivu), with my decades of experience scaling 6+ unicorn startups, and many smaller ones
     • We do mobile dev tooling (products), and we consult with product-based startups, helping them develop or scale their apps
     • We can help with anything to do with mobile, from code quality, migration, and refactoring to feature development
     • At Mobrio Studio, I have a team who work under my direct supervision
     • We don’t just develop for you, we train your team, so you’re independent in future
     https://mobrio.studio/
  4. WHY THIS TALK?
     • GenAI & Agents are hot
     • Gemini API, Gemini Nano (Experimental) and Gemma models allow apps to use AI easily
     • KMP lets us build once for Android, iOS, Web & more
     • Koog lets you build agents for multiple platforms
     • We’ll walk through real code & gotchas
  5. What’s KMP and Why?
     • A technology by JetBrains to share Kotlin code across platforms (Android, iOS, web, desktop, server).
     • Enables platform-specific UI while sharing core business logic (networking, database, state management). You control what you share and what you don’t.
     • Write Once, Run Natively: outputs native binaries (no VM or JS bridge).
  6. What’s KMP and Why?
     • Incremental Adoption: can be integrated into existing apps module by module, reducing migration risk.
     • Kotlin Ecosystem: leverages the robust Kotlin ecosystem including Coroutines, Serialization, Ktor, SQLDelight, etc.
     https://kotlinlang.org/docs/multiplatform.html
     https://speakerdeck.com/rivuchk/creating-sdks-for-multiple-platforms-with-kmp
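     A minimal sketch of the expect/actual mechanism that makes this sharing model work (illustrative names, not from the deck):

     // commonMain: shared logic calls a declaration each platform must provide.
     expect fun platformName(): String

     class GreetingService {
         fun greet(): String = "Hello from ${platformName()}!" // shared business logic
     }

     // androidMain
     actual fun platformName(): String = "Android"

     // iosMain
     actual fun platformName(): String = "iOS"
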
  7. What is GenAI in Mobile Development?
     • GenAI brings creative intelligence to mobile apps by enabling them to generate rather than just respond.
     • Enables hyper-personalized, intelligent, and context-aware user experiences.
     • Enhances accessibility, productivity, and entertainment within apps.
     • Can run on-device (for privacy/speed) or via cloud APIs.
     • In mobile apps, GenAI powers features like:
       a. Text generation (e.g., storytelling, smart replies, chatbots)
       b. Image generation/editing
       c. Voice synthesis (TTS)
  8. What’s AI?
     • Traditional Programming: developers write explicit algorithms that take input and produce a desired output (Algorithm + Input → Output).
     • Machine Learning: 1. train the model with a large dataset of inputs and outputs; 2. the model is deployed on cloud/on-device to process input data, i.e. run ML inference (Input + ML Model → Output).
  9. What’s GenAI?
     • Generative AI introduces the capability to understand inputs such as text, images, audio and video and generate human-like responses.
     • This enables applications like chatbots, language translation, text summarization, image captioning, image or code generation, creative writing assistance, and much more.
     • At its core, an LLM is a neural network model trained on massive amounts of text data. It learns patterns, grammar, and semantic relationships between words and phrases, enabling it to predict and generate text that mimics human language.
  10. Why Gemini (by Google)?
     • Multimodal: understands text, image, code, audio, and more.
     • Optimized for Android, iOS & Web
     • Enhances accessibility, productivity, and entertainment within apps.
     • Developer friendly:
       a. Easy-to-use libraries / APIs
       b. SDKs support prompting, streaming, and low-latency generation
  11. Different Ways To Integrate AI Directly in Mobile Apps
     01 Gemini API: either directly with the Gemini API or by using the third-party library by Shreyas
     02 MediaPipe / LLM Inference library and offline model: can be used with any tflite / LiteRT models, not Gemma-specific
     03 Gemini Nano: currently experimental, available only on Pixel 9 devices
     04 Firebase Vertex AI: you can use Gemini APIs and models with Firebase Vertex AI, reducing the need for handling intricate details yourself
     05 Koog: you can use Koog, along with various LLM providers, and custom/inbuilt tools to build AI agents and use them directly in mobile apps
  12. GEMINI INTEGRATION (ONLINE)
     Google Generative AI SDK for Kotlin Multiplatform by Shreyas Patil: https://github.com/PatilShreyas/generative-ai-kmp
     • API key stored in BuildKonfig
     • Suspend function for story generation
     • Works on Android & iOS
  13. GENERATIVEMODEL IMPLEMENTATION (GEMINI)

     commonMain.dependencies {
         implementation("dev.shreyaspatil.generativeai:generativeai-google:<version>")
     }

     class GenerativeModelGemini(private val apiKey: String) : GenerativeModel {
         private val model by lazy { GeminiApiGenerativeModel( ... ) }

         override suspend fun generate(prompt: String, awaitReadiness: Boolean): Result<String> {
             return runCatching {
                 val input = content { text(prompt) }
                 val response = model.generateContent(input)
                 response.text ?: throw UnsupportedOperationException("No text returned from model")
             }
         }
     }

     https://github.com/PatilShreyas/generative-ai-kmp
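     A hypothetical call site for the class above; the BuildKonfig field name is an assumption, not from the deck:

     // Hypothetical usage; BuildKonfig.GEMINI_API_KEY is an assumed field name.
     val model: GenerativeModel = GenerativeModelGemini(BuildKonfig.GEMINI_API_KEY)
     val story: String? = model.generate("Tell me a short story", awaitReadiness = true).getOrNull()
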
  14. OFFLINE MODE WITH GEMMA
     01 Uses MediaPipe GenAI
     02 TextGenerator: expect/actual for platform-specific code
     03 LocalGenerativeModel wraps the logic
  15. MODEL DOWNLOAD & INITIALIZATION
     • Download .task file from server, or pack with app (not recommended)
     • Store in internal app directory
     • Init MediaPipe LLM after download completes
  16. Download .task file and Store it in App Directory (Android Code)
     https://huggingface.co/google/gemma-3-1b-it

     val request = DownloadManager.Request(modelUrl.toUri())
         .setNotificationVisibility(DownloadManager.Request.VISIBILITY_VISIBLE) // Visibility of the download notification
         .setDestinationUri(Uri.fromFile(modelFile)) // Uri of the destination file
         .setTitle("Downloading The Model") // Title of the download notification
         .setDescription("Downloading Gemma 3 Model") // Description of the download notification
         .setRequiresCharging(false) // Set if charging is required to begin the download
         .setAllowedOverMetered(true) // Set if download is allowed on mobile network
         .setAllowedOverRoaming(true) // Set if download is allowed on roaming network
     val downloadManager = context.getSystemService(DOWNLOAD_SERVICE) as DownloadManager
     val downloadId = downloadManager.enqueue(request) // actually start the download
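     The slide stops at creating the DownloadManager; a hedged sketch of reacting to completion so the LLM is initialized only after the model file exists (registration flags vary by API level; initLlm is a hypothetical hook for the init shown on the next slide):

     val onComplete = object : BroadcastReceiver() {
         override fun onReceive(ctx: Context, intent: Intent) {
             val id = intent.getLongExtra(DownloadManager.EXTRA_DOWNLOAD_ID, -1L)
             if (id == downloadId) initLlm() // hypothetical init hook
         }
     }
     ContextCompat.registerReceiver(
         context, onComplete,
         IntentFilter(DownloadManager.ACTION_DOWNLOAD_COMPLETE),
         ContextCompat.RECEIVER_EXPORTED
     )
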
  17. Init MediaPipe LLM Inference

     private val llmInference: LlmInference by lazy {
         val options = LlmInference.LlmInferenceOptions.builder()
             .setModelPath(modelFile.absolutePath)
             .setMaxTokens(1024)
             .setMaxTopK(40)
             .build()
         LlmInference.createFromOptions(context, options)
     }
  18. Use The Model

     actual suspend fun generate(prompt: String): String {
         val result = withContext(Dispatchers.IO) {
             llmInference.generateResponse(prompt)
         }
         return result ?: throw IllegalStateException("Model didn't generate")
     }
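     For completeness, the commonMain counterpart this actual implements would look roughly like this (assumed shape; the deck wraps it in a TextGenerator expect/actual):

     // commonMain (assumed declaration matching the actual above)
     expect suspend fun generate(prompt: String): String
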
  19. Generation Settings

     GeminiApiGenerativeModel(
         modelName = "gemini-2.0-flash",
         apiKey = apiKey,
         generationConfig = GenerationConfig.Builder().apply { topK = 40 }.build()
     )

     private val llmInference: LlmInference by lazy {
         val options = LlmInference.LlmInferenceOptions.builder()
             .setModelPath(modelFile.absolutePath)
             .setMaxTokens(512)
             .setMaxTopK(40)
             .build()
         LlmInference.createFromOptions(context, options)
     }
  21. TopK
     • Top-K filters tokens for output.
     • For example, a Top-K of 3 keeps the three most probable tokens.
     • Increasing the Top-K value will increase the randomness of the model response.
  24. maxTokens
     • Limits the maximum output length a model can generate.
     • A token can be a whole word, part of a word (like "ing" or "est"), punctuation, or even a space. The exact way text is tokenized depends on the specific model’s tokenizer.
     • Whenever we call llmInference.generateResponse(prompt), the response generated by the local model will contain at most 512 tokens.
  26. Integrate Gemini Nano

     generativeModel = try {
         val generationConfig = generationConfig {
             context = getApplication<Application>().applicationContext
             temperature = 0.2f
             topK = 40
             maxOutputTokens = 1024
         }
         val downloadConfig = DownloadConfig(
             object : DownloadCallback {
                 override fun onDownloadCompleted() { // overridden method assumed; the slide elided it
                     isReady.update { true }
                 }
             }
         )
         GenerativeModel(
             generationConfig = generationConfig,
             downloadConfig = downloadConfig
         )
     } catch (e: Exception) {
         Log.e("MainViewModel", "Failed to initialize AI Core: ${e.message}")
         null
     }
  27. Integrate Gemini Nano

     val generationConfig = generationConfig {
         context = getApplication<Application>().applicationContext
         temperature = 0.2f
         topK = 40
         maxOutputTokens = 1024
     }
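     Once isReady flips, generation is a sketch away; this assumes the on-device AI Core SDK mirrors the cloud Gemini SDK’s generateContent/text surface:

     // Assumed usage; verify against the AI Core SDK for your version.
     val response = generativeModel?.generateContent(prompt)
     val text = response?.text
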
  28. Vertex AI
     Google recommends using the Vertex AI in Firebase SDK for Android to access the Gemini API and the Gemini family of models directly from the app.
  29. Vertex AI

     implementation("com.google.firebase:firebase-vertexai:$version")

     class GenerativeModelVertex : GenerativeModel {
         val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.5-flash")

         override suspend fun generate(prompt: String, awaitReadiness: Boolean): Result<String> {
             return runCatching {
                 // generateContent returns a response object; text extracts the generated string
                 generativeModel.generateContent(prompt).text
                     ?: throw UnsupportedOperationException("No text returned from model")
             }
         }
     }
  30. Koog
     Koog is a Kotlin-based framework designed to build and run AI agents entirely in idiomatic Kotlin. It lets you create agents that can interact with tools, handle complex workflows, and communicate with users.
     https://docs.koog.ai/
  31. Koog

     implementation("ai.koog:koog-agents:0.5.2")

     suspend fun runAgent(prompt: String): String {
         return try {
             val agent = AIAgent(
                 promptExecutor = simpleGoogleAIExecutor(apiKey),
                 llmModel = GoogleModels.Gemini2_5Flash,
                 systemPrompt = systemPrompt
             )
             agent.run(prompt)
         } catch (e: Exception) {
             "Koog Agent Error: ${e.message}\n${e.stackTraceToString()}"
         }
     }
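     A hypothetical call site from shared code; scope and the prompt string are illustrative:

     // Hypothetical usage from a CoroutineScope in shared code.
     scope.launch {
         val answer = runAgent("Show my last electricity bill")
         println(answer)
     }
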
  32. Koog - Tools
     • The AI Agent is the "Brain"
     • Tools are the "Hands" and "Eyes"
     • Example: The "Search" Tool
     • The AI is the "Smart Foreman"
     • Why It Matters: without tools, an AI is just a fancy encyclopedia. With tools, an AI becomes a real personal assistant that can find current information and actually get things done for you in the real world.
  33. Koog - Tools
     Agents use tools to perform specific tasks or access external systems.

     expect class DatabaseOperationsToolSet(
         repository: YourRepository
     ) {
         suspend fun someDBOperation(): Result<String> // the slide showed just "Result"; String assumed

         /** Convert to Koog tools */
         fun toTools(): List<Tool<*, *>>
     }
  34. Koog - Tools (Android/JVM)

     @LLMDescription("Meaningful description of the class/toolset")
     actual class DatabaseOperationsToolSet actual constructor(
         repository: YourRepository
     ) : ToolSet {

         @Tool
         @LLMDescription("Meaningful description of the function/operation")
         actual suspend fun someDBOperation(): Result<String> { ... }

         actual fun toTools(): List<Tool<*, *>> { ... }
     }
  35. Koog - Tools (Common)

     private fun createAgent(): AIAgent<String, String> {
         return AIAgent(
             promptExecutor = MultiLLMPromptExecutor(
                 GoogleLLMClient(apiKey)
             ),
             systemPrompt = """
                 ---Your System Prompt---
             """.trimIndent(),
             llmModel = GoogleModels.Gemini2_5FlashLite,
             toolRegistry = dbToolRegistry
         )
     }
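     The dbToolRegistry referenced above is not shown on the slides; a hedged sketch of how it could be built with Koog’s ToolRegistry builder, reusing the toolset’s toTools() from slide 33:

     // Assumed construction of the registry referenced above.
     val dbToolRegistry = ToolRegistry {
         tools(DatabaseOperationsToolSet(repository).toTools())
     }
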
  36. Koog - Memory
     • Your Brain vs. A Goldfish: by default, a simple AI is like a goldfish.
     • Memory is the "Notepad"
     • It remembers "You", it remembers "What It Did"
     • Short-Term vs. Long-Term
     • Why It Matters: without memory, an AI is just a tool. With memory, it becomes a personal assistant that gets smarter and more helpful the more you interact with it.
  37. Koog - Memory
     • Facts: the actual piece of information being saved; the "note" itself. Koog has two types:
       ◦ SingleFact: for one piece of info (e.g., "User’s preferred theme is Dark").
       ◦ MultipleFacts: for a list of info (e.g., "User knows Kotlin, Java, and Python").
     • Concepts: the label or category for the fact; like the heading on a page in the notepad.
     • Subjects: who or what the fact is about; like the label on the "file drawer."
  38. Koog - Memory
     The AgentMemory feature addresses the challenge of maintaining context in AI agents.

     install(AgentMemory) {
         agentName = "query-agent"
         featureName = "natural-language-search"
         organizationName = "mobrio-studio"
         productName = "files-plus"
         memoryProvider = FilesPlusMemory.memoryProvider
     }
  39. Koog - Memory
     The AgentMemory feature addresses the challenge of maintaining context in AI agents.

     FilesPlusMemory.memoryProvider.save(
         fact = SingleFact(
             value = response,
             concept = responseConcept,
             timestamp = Clock.System.now().toEpochMilliseconds(),
         ),
         subject = User,
         scope = MemoryScope.Product("files-plus"),
     )
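     The slides show only the save side. Assuming the memory provider exposes a symmetric load counterpart (an assumption; verify the MemoryProvider API in Koog’s docs), retrieval would look roughly like:

     // Assumed API shape; check MemoryProvider in https://docs.koog.ai/ before relying on this.
     val facts = FilesPlusMemory.memoryProvider.load(
         concept = responseConcept,
         subject = User,
         scope = MemoryScope.Product("files-plus"),
     )
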
  40. Koog - Strategy
     The "Strategy" is the AI’s "Recipe" or "Plan": if the AI is the "foreman" (brain), the tools are the "hands," and the memory is the "notepad," the strategy is the detailed, step-by-step "workflow" or "recipe" the foreman follows to get a job done.
  41. Koog - Strategy
     • Koog’s "Strategy Graph": in Koog, you don’t just write a simple list. You build a "Strategy Graph"; think of it as a flowchart for the AI’s "recipe." This lets you create very smart and complex plans.
     • Nodes (The Steps): the "Nodes" are the boxes in the flowchart. Each node is one action in the recipe.
     • Edges (The Arrows): the "Edges" are the arrows that connect the boxes. They show the agent which step to do next.
     • Subgraphs are a "Recipe" inside a "Recipe"
  42. Koog - Strategy

     val myStrategy = strategy<String, String>("my-strategy") {
         val nodeCallLLM by nodeLLMRequest()
         val executeToolCall by nodeExecuteTool()
         val sendToolResult by nodeLLMSendToolResult()

         edge(nodeStart forwardTo nodeCallLLM)
         edge(nodeCallLLM forwardTo nodeFinish onAssistantMessage { true })
         edge(nodeCallLLM forwardTo executeToolCall onToolCall { true })
         edge(executeToolCall forwardTo sendToolResult)
         edge(sendToolResult forwardTo nodeFinish onAssistantMessage { true })
         edge(sendToolResult forwardTo executeToolCall onToolCall { true })
     }
  43. Koog - Strategy

     val nodeProcessQuery by subgraph<String, String> {
         val processQuery by nodeLLMRequest()
         val executeToolCall by nodeExecuteTool()
         val sendToolResult by nodeLLMSendToolResult()
         val processToolResult by node<Message.Response, String> { input -> input.content }

         edge(nodeStart forwardTo processQuery)
         edge(processQuery forwardTo executeToolCall onToolCall { true })
         edge(executeToolCall forwardTo sendToolResult)
         edge(sendToolResult forwardTo processToolResult)
         edge(processToolResult forwardTo processQuery)
         edge(processQuery forwardTo nodeFinish onAssistantMessage { true })
     }
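     To actually run myStrategy from slide 42, it has to be handed to the agent. A hedged sketch assuming Koog’s strategy-accepting AIAgent overload; the AIAgentConfig shape follows the docs, so verify it against your Koog version:

     // Assumed overload; see https://docs.koog.ai/ for the exact constructor signature.
     val agent = AIAgent(
         promptExecutor = simpleGoogleAIExecutor(apiKey),
         strategy = myStrategy,
         agentConfig = AIAgentConfig(
             prompt = prompt("my-agent") { system(systemPrompt) },
             model = GoogleModels.Gemini2_5Flash,
             maxAgentIterations = 10,
         ),
         toolRegistry = dbToolRegistry,
     )
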
  44. CHALLENGES FACED
     • CocoaPods integration for iOS: CocoaPods integration for iOS is deprecated
     • MediaPipe GenAI: MediaPipe GenAI supports Android, iOS and Web, however integrating it with KMP is challenging
     • Koog Agents: using Koog makes building and using agents, or even just calling any prominent LLM, very easy on mobile or any platform
  45. KEY TAKEAWAYS
     • It’s easy to integrate GenAI with your KMP apps
     • LLM Inference / MediaPipe works, but it’s not for most of the use cases
     • Code reusability across platforms with KMP
     • Gemini Nano can be a game changer
     • Koog makes it even easier