
[Hands-on] How to build Agentic Apps with Flutter @박제창

2025-07-26
2025 IO Extended Incheon

Hands-on
How to build Agentic Apps with Flutter

박제창
@jaichangpark


July 27, 2025

Transcript

  1. Google I/O Extended 25 Proprietary & Confidential — Preparation

    1. Wifi: swcore / Inha1954*
    2. Credit
    3. Google Gemini API Key
    4. Flutter SDK & IDE
  2. Gemini API Key

    1. Issue a key from a GCP project
    2. Issue an API key from AI Studio
    Note: never share your API key with others.
  3. Gemini

    1. Gemini: https://gemini.google.com/app
    2. Gemini API
       a. https://ai.google.dev/
       b. https://ai.google.dev/gemini-api/docs
    3. Vertex AI
    4. Firebase AI Logic
    5. Gemini CLI
  4. Vibe Coding

    1. Before: implement features one by one while reading the official docs
    2. Now: agent-based Vibe Coding
  5. final response = await http.post(
       url,
       headers: {
         'x-goog-api-key': apiKey,
         'Content-Type': 'application/json',
       },
       body: jsonEncode({
         'contents': [
           {'parts': [{'text': 'How does AI work?'}]}
         ],
         'generationConfig': {'thinkingConfig': {'thinkingBudget': 0}}
       }),
     );
  6. if (response.statusCode == 200) {
       final jsonResponse = jsonDecode(response.body);
       return jsonResponse['candidates'][0]['content']['parts'][0]['text'];
     } else {
       throw Exception('Failed to load content: ${response.statusCode}');
     }
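The request and response-handling slides above can be combined into one helper. A minimal sketch assuming the `http` package and the public Gemini REST `generateContent` endpoint; the function name `generateText` is illustrative:

```dart
import 'dart:convert';

import 'package:http/http.dart' as http;

/// Sends a single prompt to the Gemini REST API and returns the reply text.
/// Illustrative helper that merges the request and response slides.
Future<String> generateText(String prompt, String apiKey) async {
  final url = Uri.parse(
    'https://generativelanguage.googleapis.com/v1beta/'
    'models/gemini-2.5-flash:generateContent',
  );
  final response = await http.post(
    url,
    headers: {
      'x-goog-api-key': apiKey,
      'Content-Type': 'application/json',
    },
    body: jsonEncode({
      'contents': [
        {'parts': [{'text': prompt}]}
      ],
      // Thinking budget 0 disables thinking, as on the slide.
      'generationConfig': {'thinkingConfig': {'thinkingBudget': 0}},
    }),
  );
  if (response.statusCode == 200) {
    final jsonResponse = jsonDecode(response.body);
    return jsonResponse['candidates'][0]['content']['parts'][0]['text'];
  }
  throw Exception('Failed to load content: ${response.statusCode}');
}
```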
  8. void sendMessage(String message, WidgetRef ref) {
       final chatStateNotifier = ref.read(chatStateNotifierProvider.notifier);
       final logStateNotifier = ref.read(logStateNotifierProvider.notifier);
       chatStateNotifier.addUserMessage(message);
       logStateNotifier.logUserText(message);
       chatStateNotifier.addLlmMessage(message, MessageState.complete);
       logStateNotifier.logLlmText(message);
     }
  9. @riverpod
     class ChatStateNotifier extends _$ChatStateNotifier {
       /// Initializes the chat state with an empty message list.
       @override
       ChatState build() => ChatState.initial();
       /// ...
     }
  10. dart pub global activate flutterfire_cli
      flutter pub add firebase_core firebase_ai
      flutterfire configure
  11. @riverpod
      Future<FirebaseApp> firebaseApp(Ref ref) =>
          Firebase.initializeApp(options: DefaultFirebaseOptions.currentPlatform);

      @riverpod
      Future<GenerativeModel> geminiModel(Ref ref) async {
        await ref.watch(firebaseAppProvider.future);
        final model = FirebaseAI.googleAI().generativeModel(
          model: 'gemini-2.5-flash',
        );
        return model;
      }
  12. @Riverpod(keepAlive: true)
      Future<ChatSession> chatSession(Ref ref) async {
        final model = await ref.watch(geminiModelProvider.future);
        return model.startChat();
      }
  13. class GeminiChatService {
        GeminiChatService(this.ref);
        final Ref ref;

        Future<void> sendMessage(String message) async {
          final chatSession = await ref.read(chatSessionProvider.future);
          final chatStateNotifier = ref.read(chatStateNotifierProvider.notifier);
          final logStateNotifier = ref.read(logStateNotifierProvider.notifier);
          chatStateNotifier.addUserMessage(message);
          logStateNotifier.logUserText(message);
          final llmMessage = chatStateNotifier.createLlmMessage();
          try {
            final response = await chatSession.sendMessage(Content.text(message));
            // ...append response.text to llmMessage and log it...
          } catch (e) {
            // ...log the error and mark llmMessage as failed...
          }
        }
      }
  14. @override
      Widget build(BuildContext context, WidgetRef ref) {
        final model = ref.watch(geminiModelProvider);
        return MaterialApp(
          home: model.when(
            data: (data) => MainScreen(
              sendMessage: (text) {
                ref.read(geminiChatServiceProvider).sendMessage(text);
              },
            ),
            loading: () => LoadingScreen(message: 'Initializing Gemini Model'),
            error: (err, st) => ErrorScreen(error: err),
          ),
        );
      }
  15. What are system prompts?

      A system prompt is a special type of instruction given to an LLM that sets the context, behavior guidelines, and expectations for its responses. Unlike user messages, system prompts:
      • Establish the LLM's role and persona
      • Define specialized knowledge or capabilities
      • Provide formatting instructions
      • Set constraints on responses
      • Describe how to handle various scenarios
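With the firebase_ai setup shown earlier, a system prompt is typically attached when the model is created. A sketch assuming the `systemInstruction` parameter and `Content.system` constructor; the prompt text here is illustrative, not the codelab's full prompt:

```dart
import 'package:firebase_ai/firebase_ai.dart';

/// Sketch: create the model with a system prompt attached.
/// The prompt string is an illustrative stand-in.
GenerativeModel buildColorModel() {
  return FirebaseAI.googleAI().generativeModel(
    model: 'gemini-2.5-flash',
    systemInstruction: Content.system(
      'You are a color expert assistant. '
      'Interpret color descriptions and respond with RGB values '
      'between 0.0 and 1.0.',
    ),
  );
}
```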
  16. Why system prompts matter

      System prompts are critical for creating consistent, useful LLM interactions because they:
      1. Ensure consistency: guide the model to provide responses in a consistent format
      2. Improve relevance: focus the model on your specific domain (in this case, colors)
      3. Establish boundaries: define what the model should and shouldn't do
      4. Enhance user experience: create a more natural, helpful interaction pattern
      5. Reduce post-processing: get responses in formats that are easier to parse or display
  17. Understanding the system prompt structure

      Let's break down what this prompt does:
      1. Definition of role: establishes the LLM as a "color expert assistant"
      2. Task explanation: defines the primary task as interpreting color descriptions into RGB values
      3. Response format: specifies exactly how RGB values should be formatted for consistency
      4. Example exchange: provides a concrete example of the expected interaction pattern
      5. Edge case handling: instructs how to handle unclear descriptions
      6. Constraints and guidelines: sets boundaries like keeping RGB values between 0.0 and 1.0
  18. class GeminiTools {
        FunctionDeclaration get setColorFuncDecl => FunctionDeclaration(
          'set_color',
          'Set the color of the display square based on red, green, and blue values.',
          parameters: {
            'red': Schema.number(description: 'Red component value (0.0 - 1.0)'),
            'green': Schema.number(description: 'Green component value (0.0 - 1.0)'),
            'blue': Schema.number(description: 'Blue component value (0.0 - 1.0)'),
          },
        );
      }
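Declaring the function is only half of the loop: the declaration also has to be registered on the model, and any function calls the model returns must be dispatched by the app. A sketch assuming firebase_ai's `Tool.functionDeclarations` and `FunctionCall` types; `handleSetColor` is a hypothetical app-side handler:

```dart
import 'package:firebase_ai/firebase_ai.dart';

/// Sketch: register the set_color tool when creating the model.
GenerativeModel buildModelWithTools(FunctionDeclaration setColorFuncDecl) {
  return FirebaseAI.googleAI().generativeModel(
    model: 'gemini-2.5-flash',
    tools: [
      Tool.functionDeclarations([setColorFuncDecl]),
    ],
  );
}

/// Hypothetical handler: apply the RGB arguments the model chose.
void handleSetColor(FunctionCall call) {
  if (call.name == 'set_color') {
    final args = call.args;
    final red = (args['red'] as num).toDouble();
    final green = (args['green'] as num).toDouble();
    final blue = (args['blue'] as num).toDouble();
    // ...update the display square with (red, green, blue)...
  }
}
```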
  19. ## Your Capabilities

      You are knowledgeable about colors, color theory, and how to translate natural language descriptions into specific RGB values. You have access to the following tool:

      `set_color` - Sets the RGB values for the color display based on a description

      ## How to Respond to User Inputs

      When users describe a color:
      1. First, acknowledge their color description with a brief, friendly response
      2. Interpret what RGB values would best represent that color description
      3. Use the `set_color` tool to set those values (all values should be between 0.0 and 1.0)
      4. After setting the color, provide a brief explanation of your interpretation
  20. Understanding the system prompt structure

      Let's break down what this prompt does:
      1. Definition of role: establishes the LLM as a "color expert assistant"
      2. Task explanation: defines the primary task as interpreting color descriptions into RGB values
      3. Response format: specifies exactly how RGB values should be formatted for consistency
      4. Example exchange: provides a concrete example of the expected interaction pattern
      5. Edge case handling: instructs how to handle unclear descriptions
      6. Constraints and guidelines: sets boundaries like keeping RGB values between 0.0 and 1.0