What is this stuff?
Chrome's built-in AI brings AI capabilities directly into the browser, allowing websites and web apps to perform tasks like translation, summarization, and writing assistance using local "expert" models. There is no need to host or manage AI models yourself.
What are the benefits?
The models run on the user's device, offering faster, private, offline-capable experiences. On-device inference protects user data, cuts server costs, and delivers premium features without backend load.
Enable all available APIs
1. Go to chrome://flags and enable the following flags:
• chrome://flags/#optimization-guide-on-device-model
(This flag is usually sufficient. However, if the model availability check does not show that the model is available, enable the additional flags below depending on which API is unavailable.)
• chrome://flags/#prompt-api-for-gemini-nano
• chrome://flags/#prompt-api-for-gemini-nano-multimodal-input
• chrome://flags/#summarization-api-for-gemini-nano
• chrome://flags/#writer-api-for-gemini-nano
• chrome://flags/#rewriter-api-for-gemini-nano
• chrome://flags/#language-detection-api
• chrome://flags/#translation-api
• chrome://flags/#proofreader-api-for-gemini-nano (latest update)
2. Relaunch Chrome
Check if the model is ready
1. Either this way:
a. Open DevTools and run await LanguageModel.availability(); in the console.
b. If this returns "available", you are all set.
2. Or check availability programmatically, as in the API examples that follow.
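The programmatic check can be sketched as a small helper. modelAvailability is a name invented here; the static availability() method and its return values come from the built-in AI APIs themselves:

```javascript
// Minimal sketch of a programmatic availability check. Each built-in AI
// API (LanguageModel, Summarizer, Writer, ...) exposes a static
// availability() that resolves to 'unavailable', 'downloadable',
// 'downloading', or 'available'. The guard below also covers browsers
// where the API object does not exist at all.
async function modelAvailability(api) {
  if (typeof api === 'undefined' || api === null) {
    return 'unavailable' // API not exposed in this browser
  }
  return api.availability()
}

// Usage: const status = await modelAvailability(globalThis.Summarizer)
```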
Summarizer API Example
Checks if the Summarizer API is available. If needed, downloads the model with progress shown. Then uses summarize() to get a summary from text.

const available = await Summarizer.availability()
if (available === 'unavailable') { return }

let summarizer
if (available === 'available') {
  summarizer = await Summarizer.create()
} else {
  summarizer = await Summarizer.create({
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded * 100}%`)
      })
    }
  })
}

// Use regular API for complete response
const result = await summarizer.summarize('Text to summarize')
Summarizer (Streaming) API Example
Checks if the Summarizer API is available. If needed, downloads the model with progress shown. Then uses summarizeStreaming() to get a streamed summary from text.

const available = await Summarizer.availability()
if (available === 'unavailable') { return }

let summarizer
if (available === 'available') {
  summarizer = await Summarizer.create()
} else {
  summarizer = await Summarizer.create({
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded * 100}%`)
      })
    }
  })
}

// Use streaming API for real-time updates
const stream = await summarizer.summarizeStreaming('Text to summarize')
let result = ''
for await (const chunk of stream) {
  result += chunk
}
Writer API Example
Checks if the Writer API is available. If not ready, it downloads the model with progress shown. Then it uses write() to generate text from a prompt.

const available = await Writer.availability()
if (available === 'unavailable') { return }

let writer
if (available === 'available') {
  writer = await Writer.create()
} else {
  writer = await Writer.create({
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded * 100}%`)
      })
    }
  })
}

// Use regular API for complete response
const result = await writer.write(
  'Write a product description for an eco-friendly water bottle'
)
Writer (Streaming) API Example
Checks if the Writer API is available. If not ready, it downloads the model with progress shown. Then it uses writeStreaming() to generate streamed text from a prompt.

const available = await Writer.availability()
if (available === 'unavailable') { return }

let writer
if (available === 'available') {
  writer = await Writer.create()
} else {
  writer = await Writer.create({
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded * 100}%`)
      })
    }
  })
}

// Use streaming API for real-time updates
const stream = await writer.writeStreaming(
  'Write a product description for an eco-friendly water bottle'
)
let result = ''
for await (const chunk of stream) {
  result += chunk
}
Rewriter API Example
Checks if the Rewriter API is available. If needed, downloads the model with progress. Then uses rewrite() to improve or rephrase the text.

const available = await Rewriter.availability()
if (available === 'unavailable') { return }

let rewriter
if (available === 'available') {
  rewriter = await Rewriter.create()
} else {
  rewriter = await Rewriter.create({
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded * 100}%`)
      })
    }
  })
}

// Use regular API for complete response
const result = await rewriter.rewrite(
  'Please rewrite this sentence to be more formal.'
)
Rewriter (Streaming) API Example
Checks if the Rewriter API is available. If needed, downloads the model with progress. Then uses rewriteStreaming() to improve or rephrase the text.

const available = await Rewriter.availability()
if (available === 'unavailable') { return }

let rewriter
if (available === 'available') {
  rewriter = await Rewriter.create()
} else {
  rewriter = await Rewriter.create({
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded * 100}%`)
      })
    }
  })
}

// Use streaming API for real-time updates
const stream = await rewriter.rewriteStreaming(
  'Please rewrite this sentence to be more formal.'
)
let result = ''
for await (const chunk of stream) {
  result += chunk
}
Translator API Example
Checks if the Translator API is available for the language pair. If needed, downloads the model with progress. Then uses translate() to convert text from English to Indonesian. Note that the source and target languages are passed to availability() and create(), not to translate().

const options = { sourceLanguage: 'en', targetLanguage: 'id' }

const available = await Translator.availability(options)
if (available === 'unavailable') { return }

let translator
if (available === 'available') {
  translator = await Translator.create(options)
} else {
  translator = await Translator.create({
    ...options,
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded * 100}%`)
      })
    }
  })
}

const result = await translator.translate('Text to translate goes here...')
Language Detector API Example
Checks if the Language Detector API is available. If needed, downloads the model with progress. Then uses detect() to identify the language of the text; detect() returns candidates ordered by confidence.

const available = await LanguageDetector.availability()
if (available === 'unavailable') { return }

let detector
if (available === 'available') {
  detector = await LanguageDetector.create()
} else {
  detector = await LanguageDetector.create({
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded * 100}%`)
      })
    }
  })
}

const results = await detector.detect('Text to detect language from goes here...')
// Each candidate has detectedLanguage (a BCP 47 tag) and confidence
console.log(results[0].detectedLanguage, results[0].confidence)
Proofreader API Example
Checks if the Proofreader API is available. If needed, downloads the model with progress. Then uses proofread() to fix grammar in English text, with correction types and explanations enabled. Note that these options belong to Proofreader.create(), not to proofread().

const available = await Proofreader.availability()
if (available === 'unavailable') { return }

const options = {
  includeCorrectionTypes: true,
  includeCorrectionExplanations: true,
  expectedInputLanguages: ['en']
}

let proofreader
if (available === 'available') {
  proofreader = await Proofreader.create(options)
} else {
  proofreader = await Proofreader.create({
    ...options,
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded * 100}%`)
      })
    }
  })
}

// Use regular API for complete response
const result = await proofreader.proofread(
  'I seen him yesterday atss the stroe, and he bought twa loafs of bread.'
)
Text Prompt API Example
Checks if the Language Model API is available. If needed, downloads the model with progress. Then uses prompt() to generate a text response based on your input.

const available = await LanguageModel.availability()
if (available === 'unavailable') { return }

let model
if (available === 'available') {
  model = await LanguageModel.create()
} else {
  model = await LanguageModel.create({
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded * 100}%`)
      })
    }
  })
}

// Use regular API for complete response
const result = await model.prompt('Your prompt goes here')
Text Prompt (Streaming) API Example
Checks if the Language Model API is available. If needed, downloads the model with progress. Then uses promptStreaming() to generate a streamed text response based on your input.

const available = await LanguageModel.availability()
if (available === 'unavailable') { return }

let model
if (available === 'available') {
  model = await LanguageModel.create()
} else {
  model = await LanguageModel.create({
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded * 100}%`)
      })
    }
  })
}

// Use streaming API for real-time updates
const stream = await model.promptStreaming('Your prompt goes here')
let result = ''
for await (const chunk of stream) {
  result += chunk
}
Multimodality Prompt API (Image) Example
Checks if the Language Model API is available. If needed, downloads the model with progress. Then sends text and an image together in a single prompt() call to get a response based on the image. The session is created with expectedInputs so the model accepts image input.

const available = await LanguageModel.availability()
if (available === 'unavailable') { return }

const options = { expectedInputs: [{ type: 'image' }] }

let model
if (available === 'available') {
  model = await LanguageModel.create(options)
} else {
  model = await LanguageModel.create({
    ...options,
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded * 100}%`)
      })
    }
  })
}

// Get the image from a file input; a File/Blob can be passed directly
const imageFile = document.querySelector('input[type="file"]').files[0]

// Create a prompt with both text and image
const result = await model.prompt([{
  role: 'user',
  content: [
    { type: 'text', value: 'What can you tell me about this image?' },
    { type: 'image', value: imageFile }
  ]
}])
Multimodality Prompt API (Audio) Example
Checks if the Language Model API is available. If needed, downloads the model with progress. Then sends text and audio together in a single prompt() call to get a response based on the audio input. The session is created with expectedInputs so the model accepts audio input.

const available = await LanguageModel.availability()
if (available === 'unavailable') { return }

const options = { expectedInputs: [{ type: 'audio' }] }

let model
if (available === 'available') {
  model = await LanguageModel.create(options)
} else {
  model = await LanguageModel.create({
    ...options,
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded * 100}%`)
      })
    }
  })
}

// Get the audio from a file input; a File/Blob can be passed directly
const audioFile = document.querySelector('input[type="file"]').files[0]

// Create a prompt with both text and audio
const result = await model.prompt([{
  role: 'user',
  content: [
    { type: 'text', value: 'What is being said in this audio?' },
    { type: 'audio', value: audioFile }
  ]
}])
Some use cases in real-world projects
1. Reviews locally, before submitting or replying
2. Autocorrect and grammar check in writing apps without internet
3. Summarize articles or emails locally in a reading app
4. Identify products from images for offline catalog search
5. Run local sentiment analysis on user feedback
6. Process and respond to voice commands in offline apps
7. Auto-tag photos using on-device object detection
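Use case 5, local sentiment analysis, might be sketched on top of the Prompt API like this. buildSentimentPrompt, parseSentiment, and analyzeSentiment are helper names invented for this sketch, not part of any Chrome API:

```javascript
// Hedged sketch: classify user feedback sentiment fully on-device with
// the Prompt API. Only LanguageModel.create()/prompt() come from Chrome;
// every helper name here is invented for illustration.

// Builds a constrained prompt so the model answers with a single word.
function buildSentimentPrompt(feedback) {
  return 'Classify the sentiment of this user feedback as positive, ' +
    'negative, or neutral. Reply with one word only.\n\n' +
    'Feedback: ' + feedback
}

// Normalizes a model reply like ' Positive.' down to 'positive'.
function parseSentiment(reply) {
  const word = reply.trim().toLowerCase().replace(/[^a-z]/g, '')
  return ['positive', 'negative', 'neutral'].includes(word) ? word : 'unknown'
}

// Ties the two together; only runs in a browser with the Prompt API.
async function analyzeSentiment(feedback) {
  const model = await LanguageModel.create()
  const reply = await model.prompt(buildSentimentPrompt(feedback))
  return parseSentiment(reply)
}
```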
There are drawbacks?!
1. Limited browser support (Chrome 128+ only)
2. Not available on all devices; older or low-end hardware may not run the models well
3. Smaller models are less powerful than cloud-based AI (because it's Gemini Nano)
4. Large model downloads can impact mobile data and storage
5. Requires fallback logic for unsupported browsers
6. Features are still experimental; the latest capabilities are mostly in Canary/Dev builds
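The fallback logic from drawback 5 usually starts with plain feature detection. hasBuiltInAI, summarizeWithFallback, and the '/api/summarize' endpoint are hypothetical names for this sketch:

```javascript
// Feature-detect the built-in Summarizer and fall back to a server call
// when it is missing. Summarizer is the Chrome global; the fetch
// endpoint '/api/summarize' is a placeholder for your own backend.

// True when the given built-in AI constructor exists in this browser.
function hasBuiltInAI(apiName) {
  return typeof globalThis[apiName] !== 'undefined'
}

async function summarizeWithFallback(text) {
  if (hasBuiltInAI('Summarizer')) {
    const summarizer = await Summarizer.create()
    return summarizer.summarize(text)
  }
  // Fallback: delegate to a server-side summarizer (placeholder endpoint)
  const response = await fetch('/api/summarize', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text })
  })
  return (await response.json()).summary
}
```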