[AI Heroes Turin May 7th] Build smarter mobile & web apps: Integrate Gemini using Firebase AI Logic

As demand for intelligent features in mobile and web applications grows, developers face the challenge of securely integrating generative AI models without overcomplicating their backend infrastructure. In this session, we will explore Firebase AI Logic, a tool that gives you direct access to Google’s Gemini models through client-side SDKs.

We will dive into how Firebase AI Logic simplifies the implementation path for Android, Flutter, iOS and Web developers, allowing you to build features like multimodal chat, image generation, structured data output, live audio conversations, and hybrid inference entirely on the client side.

**Key takeaways include:**

* **Client-Side simplicity:** How to use platform-specific SDKs to call Gemini APIs without managing a custom backend.
* **Security Best Practices:** Utilizing **Firebase App Check** to protect your API keys and prevent abuse from unauthorized clients.
* **Advanced Capabilities:** A look at leveraging multimodal inputs (text, images, audio, PDF), function calling, structured JSON outputs, voice interactions and hybrid on-device/cloud inference.
* **Production Readiness:** Tips for updating model parameters dynamically and monitoring usage to stay within quotas.

Rosário P. Fernandes

May 07, 2026

Transcript

  1. Build smarter mobile & web apps: Integrate Gemini using Firebase AI Logic. Rosário P. Fernandes, Developer Relations Engineer, @thatfiredev
  2. [Slide: the Firebase product suite, grouped into Build and Run: A/B Testing, Crashlytics, App Distribution, Remote Config, Test Lab, Cloud Messaging, Performance Monitoring, Google Analytics, Cloud Functions for Firebase, Cloud Storage for Firebase, Data Connect, App Hosting, Authentication, Realtime Database, Firestore, Hosting, App Check, and Firebase AI Logic.]
  3. Easy to integrate on iOS, Android, and the Web, as well as a variety of gaming platforms.
  4. Getting started:
    1. Go to the Firebase console (console.firebase.google.com).
    2. Find the “AI Logic” section and click on “Get started”.
    3. Choose your desired API provider.
  5. (image-only slide)
  6. Getting started, continued:
    1. Go to the Firebase console (console.firebase.google.com).
    2. Find the “AI Logic” section and click on “Get started”.
    3. Choose your desired API provider.
    4. Add the SDKs to your app.
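    For step 4 on Android, a minimal sketch of the Gradle wiring (the BoM version and the `firebase-ai` artifact name are assumptions; verify against the current setup docs):

```kotlin
// app/build.gradle.kts (sketch; artifact names and version are assumptions)
dependencies {
    // The Firebase BoM keeps all Firebase library versions in sync.
    implementation(platform("com.google.firebase:firebase-bom:34.0.0"))
    // The Firebase AI Logic client SDK.
    implementation("com.google.firebase:firebase-ai")
}
```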
  7. Basic text generation:

```kotlin
val model = Firebase.ai(
    backend = GenerativeBackend.googleAI()
).generativeModel(
    modelName = "gemini-3.1-flash-lite"
)
val prompt = "Write a story about a magic backpack."
val response = model.generateContent(prompt)
Log.d(TAG, response.text)
```
  8. Multimodal input (PDF):

```kotlin
val model = Firebase.ai.generativeModel(
    modelName = "gemini-3.1-flash-lite"
)
val prompt = content {
    inlineData(
        bytes = reportPDFfileStream.readBytes(),
        mimeType = "application/pdf"
    )
    text("Summarize the important results in this report.")
}
val response = model.generateContent(prompt)
Log.d(TAG, response.text)
```
  9. Multi-turn chat:

```kotlin
val model = Firebase.ai
    .generativeModel("gemini-3.1-flash-lite")
val chat = model.startChat(
    history = listOf(
        content(role = "user") { text("Hello!") },
        content(role = "model") { text("Hi, how can I help?") }
    )
)
val response = chat.sendMessage("I'd like to --.")
Log.d(TAG, response.text)
```
  10. Structured output, step 1: define a JSON schema:

```kotlin
// Step 1: provide a JSON schema object using a standard format.
val jsonSchema = Schema.obj(
    mapOf("characters" to Schema.array(
        Schema.obj(
            mapOf(
                "name" to Schema.string(),
                "age" to Schema.integer(),
                "species" to Schema.string(),
                "accessory" to Schema.enumeration(listOf("hat", "belt"))
            ),
            optionalProperties = listOf("accessory")
        )
    ))
)
```
  11. Structured output, step 2: configure the model with the schema:

```kotlin
// Step 2: pass the schema to the model's generation config.
val model = Firebase.ai.generativeModel(
    modelName = "gemini-3.1-flash-lite",
    generationConfig = generationConfig {
        responseMimeType = "application/json"
        responseSchema = jsonSchema
    })
val prompt = "For use in a card game, generate 10 animal-based characters."
val response = model.generateContent(prompt)
```
  12. Image generation:

```kotlin
val model = Firebase.ai.generativeModel(
    modelName = "gemini-3.1-flash-image-preview",
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)
val prompt = "Generate an image of the Eiffel tower " +
    "with fireworks in the background."
val generatedImageAsBitmap = model.generateContent(prompt)
    .candidates.first().content.parts
    .filterIsInstance<ImagePart>().firstOrNull()?.image
```
  13. Live audio conversations:

```kotlin
val liveModel = Firebase.ai.liveModel(
    modelName = "gemini-3.1-flash-live-preview",
    generationConfig = liveGenerationConfig {
        responseModality = ResponseModality.AUDIO
    }
)
val session = liveModel.connect()
session.startAudioConversation()
```
  14. System instructions:

```kotlin
// System instructions
val model = Firebase.ai.generativeModel(
    modelName = "gemini-3.1-flash-lite",
    systemInstruction = content {
        text("You will respond as a music historian. " +
            "Your tone will be upbeat and enthusiastic, " +
            "spreading the joy of music. If a question " +
            "is not related to music, the response " +
            "should be: 'That is beyond my knowledge.'")
    }
)
```
  15. Thinking levels:

```kotlin
// Thinking levels
val generationConfig = generationConfig {
    thinkingConfig = thinkingConfig {
        includeThoughts = true
        thinkingLevel = ThinkingLevel.LOW
    }
}
val model = Firebase.ai.generativeModel(
    modelName = "gemini-3.1-flash-lite",
    generationConfig = generationConfig,
)
```
  16. Search grounding:

```kotlin
// Search grounding
val model = Firebase.ai.generativeModel(
    modelName = "gemini-3.1-flash-lite",
    tools = listOf(Tool.googleSearch())
)
val response = model.generateContent("Who won the Euro 2024?")
```
  17. URL context:

```kotlin
// URL context
val model = Firebase.ai.generativeModel(
    modelName = "gemini-3.1-flash-lite",
    tools = listOf(Tool.googleSearch(), Tool.urlContext())
)
val response = model.generateContent(
    "Find the latest blog post from Firebase and " +
    "compare it to this article: $url"
)
```
  18. Code execution:

```kotlin
// Code execution
val model = Firebase.ai.generativeModel(
    modelName = "gemini-3.1-flash-lite",
    tools = listOf(Tool.codeExecution())
)
val prompt = "What is the sum of the first 50 prime numbers? " +
    "Generate and run code for the calculation."
val response = model.generateContent(prompt)
```
  19. Function calling: the function to expose:

```kotlin
suspend fun fetchWeather(
    city: String, state: String, date: String
): JsonObject {
    // ... Call an external weather API
    // Return a JsonObject
    return JsonObject(mapOf(
        "temperature" to JsonPrimitive(temp),
        "chancePrecipitation" to JsonPrimitive(precipitation),
        "cloudConditions" to JsonPrimitive(cloudConditions)
    ))
}
```
  20. Function calling: the function declaration:

```kotlin
val fetchWeatherTool = FunctionDeclaration(
    "fetchWeather",
    "Get the weather conditions for a specific city on a specific date.",
    mapOf(
        "city" to Schema.string("The city for which to get the weather."),
        "state" to Schema.string("The US state for which to get the weather."),
        "date" to Schema.string("The date for which to get the weather." +
            " Date must be in the format: YYYY-MM-DD."),
    ),
)
```
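    The declaration above only takes effect once it is registered with the model, a step the deck skips over. A sketch of that wiring (`Tool.functionDeclarations` is my assumption of the registration API; verify against the SDK reference):

```kotlin
// Sketch: register fetchWeatherTool so the model may emit fetchWeather calls.
// Tool.functionDeclarations(...) is an assumption; check the current SDK docs.
val model = Firebase.ai.generativeModel(
    modelName = "gemini-3.1-flash-lite",
    tools = listOf(Tool.functionDeclarations(listOf(fetchWeatherTool)))
)
```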
  21. Function calling: handling the model's call:

```kotlin
val chat = model.startChat()
val prompt = "What was the weather in Boston on October 17, 2024?"
val result = chat.sendMessage(prompt)
val fetchWeatherCall = result.functionCalls.find { it.name == "fetchWeather" }
val functionResponse = fetchWeatherCall?.let {
    val city = it.args["city"]!!.jsonPrimitive.content
    val state = it.args["state"]!!.jsonPrimitive.content
    val date = it.args["date"]!!.jsonPrimitive.content
    fetchWeather(city, state, date)
}
```
  22. Function calling: returning the result to the model:

```kotlin
// Send the response(s) from the function back to the model
// so that the model can use it to generate its final response.
val finalResponse = chat.sendMessage(content("function") {
    part(FunctionResponsePart("fetchWeather", functionResponse!!))
})
// Log the text response.
println(finalResponse.text ?: "No text in response")
```
  23. [Diagram: hybrid inference. Only 5%-10% of Android devices support on-device models like Gemini Nano. The client SDKs route intelligently between on-device inference (Gemini Nano on Android and Chromium, Foundation Models on iOS, others) and in-cloud inference via Firebase AI Logic and the Gemini API providers, with local-first, cloud-first, and local-only strategies.]
  24. Hybrid inference in JavaScript (Web):

```javascript
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });
const model = getGenerativeModel(ai, {
  mode: InferenceMode.PREFER_ON_DEVICE,
  inCloudParams: { model: "gemini-3.1-flash-lite" }
});
const prompt = "Write a poem about an AI model that runs in the browser";
const result = await model.generateContent(prompt);
```
  25. Hybrid inference in Kotlin (Android):

```kotlin
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(
        modelName = "gemini-3.1-flash-lite",
        onDeviceConfig = OnDeviceConfig(
            mode = InferenceMode.PREFER_ON_DEVICE
        )
    )
val prompt = "Write a poem about an AI model that fits inside a pocket"
val response = model.generateContent(prompt)
print(response.text)
```
  26. Inference modes:

    * PREFER_ON_DEVICE: attempt to use the on-device model; otherwise, fall back to the cloud-hosted model.
    * ONLY_ON_DEVICE: attempt to use the on-device model; otherwise, throw an exception.
    * PREFER_IN_CLOUD: attempt to use the cloud-hosted model; otherwise, fall back to the on-device model.
    * ONLY_IN_CLOUD: attempt to use the cloud-hosted model; otherwise, throw an exception.
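    The four modes reduce to a primary choice plus a fallback rule. A toy, SDK-free sketch of that routing (illustrative names only, not the SDK internals):

```kotlin
// Illustrative only: models the fallback semantics of the four inference modes.
enum class Mode { PREFER_ON_DEVICE, ONLY_ON_DEVICE, PREFER_IN_CLOUD, ONLY_IN_CLOUD }

fun route(mode: Mode, onDevice: Boolean, cloud: Boolean): String = when (mode) {
    Mode.PREFER_ON_DEVICE -> if (onDevice) "on-device" else if (cloud) "cloud" else error("no backend available")
    Mode.ONLY_ON_DEVICE -> if (onDevice) "on-device" else error("on-device model unavailable")
    Mode.PREFER_IN_CLOUD -> if (cloud) "cloud" else if (onDevice) "on-device" else error("no backend available")
    Mode.ONLY_IN_CLOUD -> if (cloud) "cloud" else error("cloud model unreachable")
}
```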
  27. Hybrid inference in Swift (Apple platforms), new:

```swift
let session = FirebaseAI.firebaseAI().generativeModelSession(
    model: .hybridModel(
        primary: .systemModel(useCase: .general),
        secondary: .geminiModel(name: "gemini-3.1-flash-lite")
    )
)
let response = try await session.respond(
    to: "Write a poem about an AI model that fits inside a pocket"
)
```
  28. [Diagram: security with Firebase Auth. The Gemini API key is secured on the server behind GCP IAM; clients attach Firebase Auth ID tokens (60-minute TTL), and only requests with a valid token reach the Gemini API providers.]
  29. [Diagram: security with App Check. App Check tokens (TTL configurable, over 30 minutes) let Firebase distinguish legitimate from illegitimate requests before they reach the Gemini API providers.]
  30. [Diagram: replay attack protection. App Check one-time, limited-use tokens ensure a captured request cannot be replayed.]
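    Enabling App Check on Android is a few lines at app startup. A minimal sketch using the Play Integrity provider (based on the standard App Check setup; consult the docs for debug providers and other platforms):

```kotlin
import android.app.Application
import com.google.firebase.FirebaseApp
import com.google.firebase.appcheck.FirebaseAppCheck
import com.google.firebase.appcheck.playintegrity.PlayIntegrityAppCheckProviderFactory

class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        FirebaseApp.initializeApp(this)
        // Attest app integrity with Play Integrity; App Check tokens are then
        // attached to Firebase requests, including Firebase AI Logic calls.
        FirebaseAppCheck.getInstance().installAppCheckProviderFactory(
            PlayIntegrityAppCheckProviderFactory.getInstance()
        )
    }
}
```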
  31. Top LLM vulnerabilities (OWASP Top 10):

    * Prompt Injection: direct and indirect prompt injection.
    * Improper Output Handling: insufficient validation and sanitization of model-generated output.
    * Sensitive Information Disclosure: PII leakage, business IP protection, unauthorized data disclosure.
    * System Prompt Leakage: application instructions may reveal sensitive internal logic.
  32. [Diagram: Model Armor integration. Requests and responses between Firebase AI Logic and the Gemini API pass through Model Armor, which can block either the request or the response. Capabilities: a content safety model (detects categories such as harassment and dangerous content), the Sensitive Data Protection service (detects PII based on templates), a prompt safety model (detects prompt injection and jailbreak attempts), and Google AV and Safe Browsing (detect malicious files and unsafe URLs sent or created).]
  33. Security: prompts. With client-side SDKs, the full prompt lives in the client, e.g. the system instructions ("You're a storyteller that tells nice and joyful stories with happy endings.") plus the user prompt ("Create a story about a cat with the length of 40 words in the Spanish language.").
  34. Security: server prompt templates. Your client code plus your prompt template:

```kotlin
// Your client code
val generativeModel = Firebase.ai.templateGenerativeModel()
val response = generativeModel.generateContent("storyteller-v10", mapOf(
    "topic" to topic,
    "length" to length,
    "language" to language
))
_output.value = response.text
```

```
// Your prompt template
---
model: 'gemini-3.1-flash-lite'
input: Schema:...
output: schema:...
tools:...
---
{{role "system"}}
You're a storyteller that tells nice and joyful stories with happy endings.
{{role "user"}}
Create a story about {{topic}} with the length of {{length}} words in {{language}} language.
```
  35. Prompt template benefits:

    * Enhanced security: prompts are hidden from the client, protecting IP.
    * Centralized management: manage prompts in the Firebase Console.
    * Faster iteration: prompt and model updates without app updates.
    * Observability: watch the requests in AI Monitoring.

    [Diagram: the client sends only a template name (e.g. Storyteller-v1) and parameters ({cat, 200, English}); Firebase AI Logic (server) retrieves the template from template storage, composes the full prompt, and calls the Gemini API providers.]
  36. [Diagram: Cloud Functions triggers. Firebase AI Logic (server) can run a Cloud Function before and after Generate Content, around the call to the Gemini API providers. Scenarios: access control, custom guardrails, context injection, etc.]
  37. A beforeGenerateContent trigger:

```javascript
import { beforeGenerateContent } from "firebase-functions/v2/ai";
import { HttpsError } from "firebase-functions/v2/https";

export const validateAndFilterPrompt = beforeGenerateContent((event) => {
  const uid = event.authId;
  if (event.authType === "unauthenticated" || !uid) {
    throw new HttpsError("unauthenticated",
      "You must be signed in to use this AI service.");
  }
  const contents = event.data.request.contents || [];
  for (const content of contents) {
    for (const part of content.parts || []) {
      if (part.text && part.text.toLowerCase().includes("cat")) {
        throw new HttpsError(
          "invalid-argument",
          "Prompts containing the word 'cat' are not allowed."
        );
      }
    }
  }
  return {};
});
```
  38. Thank you! Learn more or connect with me:

    * Docs: firebase.google.com/docs/ai-logic
    * 𝕏: @thatfiredev
    * LinkedIn: linkedin.com/in/rosariopfernandes

    Rosário P. Fernandes, Developer Relations Engineer, @thatfiredev