
MCP, A2A, and AG-UI: Architecting Agent Intelligence Beyond Chat

Agentic systems are emerging as the next paradigm in AI application design - moving beyond static chatbots toward dynamic, context-aware, and modular ecosystems of intelligent agents. To operate effectively at scale, these systems must integrate three foundational capabilities: perceiving their environment, collaborating across specialized components, and interacting with users in real time. This talk introduces an architecture that leverages three interoperable protocols to achieve these goals. The Model Context Protocol (MCP) enables dynamic context hydration and semantic grounding, allowing agents to operate on structured and unstructured inputs tailored to specific tasks. The Agent-to-Agent Protocol (A2A) facilitates orchestrated collaboration between modular agents, enabling delegation, specialization, and distributed reasoning. The Agent-User Interaction Protocol (AG-UI) provides a real-time interface layer that closes the loop with users, supporting direct feedback, steerability, and reactive experiences. Together, these protocols form a cohesive foundation for building intelligent systems that are decoupled, composable, and resilient - minimizing integration debt while enabling rich, responsive behaviors. This session explores design patterns, practical challenges, and architectural strategies for applying MCP, A2A, and AG-UI in real-world agentic applications, offering a blueprint for anyone aiming to architect intelligence beyond chat.


Max Schulte

November 27, 2025


Transcript

  1. MCP, A2A, and AG-UI: Architecting Agent Intelligence Beyond Chat MLCon

    Berlin 2026 Max Marschall @MaxOSchulte Consultant / Architect @ Thinktecture
  2. Model Context Protocol (MCP) In this Session The Agent–User Interaction

    (AG-UI) Protocol Agent-to-Agent Protocol (A2A)
  3. The Model Context Protocol Think of MCP like a USB-C

    port for AI applications. [...] MCP provides a standardized way to connect AI applications to external systems. https://modelcontextprotocol.io/docs/getting-started/intro
  4. HOST • LLM applications (Claude Desktop, IDEs) that initiate connections

    The Model Context Protocol CLIENT • 1:1 connection handler inside host, manages protocol communication SERVER • Provides context, tools, and capabilities to clients • Servers can't talk to each other
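The client/server exchange above can be sketched as plain JSON-RPC 2.0 messages. This is an illustrative sketch, not a full MCP handshake: the `tools/list` method and `inputSchema` field follow the MCP specification, while the concrete tool (`query_database`) is invented for the example.

```python
import json

# Sketch of one JSON-RPC 2.0 exchange between an MCP client and server.
# Method and field names follow the MCP spec; the tool itself is made up.

# Client -> Server: ask the server which tools it provides.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Server -> Client: advertise tools with JSON-Schema input descriptions.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_database",  # hypothetical example tool
                "description": "Run a read-only SQL query.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            }
        ]
    },
}

print(json.dumps(request))
tool_names = [t["name"] for t in response["result"]["tools"]]
print(tool_names)
```

The host's LLM never talks to the server directly: the client forwards the advertised tool list into the model's context, and tool calls travel back over the same 1:1 connection.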
  5. • Do not repeat yourself (DRY) • Keep it simple (KISS) • No

    standardization across implementations • Fragile when systems change Why should we care?
  6. M x N ➡ M + N

    [Diagram: three apps each wired directly to a database, documentation, and an FAQ, versus the same apps and tools connected through MCP] M AI Applications, N Tools, Resources, Prompts
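The integration-count argument can be made concrete with a toy calculation; the numbers match the three apps and three tools shown on the slide.

```python
# Without a shared protocol, every app needs its own adapter per tool: M x N.
# With MCP, each app implements one client and each tool one server: M + N.
M, N = 3, 3  # 3 AI applications, 3 tools (database, documentation, FAQ)

without_mcp = M * N  # bespoke point-to-point integrations
with_mcp = M + N     # protocol implementations

print(without_mcp, with_mcp)
```

The gap widens quickly: at 10 apps and 20 tools the direct approach needs 200 adapters, the protocol approach 30 implementations.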
  7. • Tools • Resources • Prompts • Elicitation • Sampling

    • Server Composition • Progress Reporting Features More to come (e.g. MCP-UI)
  8. ... is the ability to connect a large language model

    (LLM) to reliable sources of information to ensure accurate and relevant results. • Improves relevance • Prevents hallucinations • Connects to the real world • Alternatives: Retrieval-Augmented Generation (RAG) & Fine-Tuning Grounding
  9. ... is providing an automated way to add concise

    contextual information based on changing requirements. Examples: • Web search / web access • Documentation access Context Hydration
  10. • Limit your tools • Scope your tools • Tool

    compatibility • Feature compatibility Best Practices
  11. • Different tools by authentication / authorization • Update tool

    lists depending on preceding queries • Agent Skills enable / disable MCPs • Tool change (command) Context Size Solution: Dynamic Tools
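The first bullet above, scoping the tool list by authorization, can be sketched as a simple filter. All tool names and the scope model here are invented for illustration; a real server would derive scopes from the authenticated request.

```python
# Sketch: expose a different tool list per caller based on granted scopes.
# Tool names and scopes are hypothetical.
ALL_TOOLS = {
    "search_docs": {"required_scope": None},            # public
    "create_ticket": {"required_scope": "support:write"},
    "delete_user": {"required_scope": "admin"},
}

def tools_for(scopes: set) -> list:
    """Return only the tools the authenticated caller may see."""
    return [
        name
        for name, meta in ALL_TOOLS.items()
        if meta["required_scope"] is None or meta["required_scope"] in scopes
    ]

print(tools_for(set()))              # anonymous caller
print(tools_for({"support:write"}))  # support agent
```

Combined with MCP's list-changed notifications, the same mechanism covers the other bullets: re-run the filter when the conversation or agent skill set changes and push the updated list to the client.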
  12. • Async-Tools • MCP-UI • Code execution • (Anthropic /

    Claude) Agent Skills + MCP = RAG-MCP. Not "more tools," but a better relationship to tools Outlook
  13. • Invented by Google • Donated to the Linux Foundation •

    "Build with ADK (or any framework), equip with MCP (or any tool), and communicate with A2A, to remote agents, local agents, and humans." A2A Origins https://a2a-protocol.org/latest/
  14. • A2A treats agents as standard enterprise applications, relying on

    established web security practices. Identity handled at HTTP transport layer. • Production deployments MUST use HTTPS • OAuth 2.0, OpenID Connect, API Keys, Manual TLS • OpenAPI Aligned: Follows OpenAPI Security Scheme specification • Server Responsibility: Authenticate every request • In-Task Auth: Use auth-required state for secondary credentials during task execution Security & Authentication
  15. Goals Interoperability Bridge communication gaps between disparate agentic systems Discovery

    Let agents advertise and discover each other's capabilities via Agent Cards Security Enterprise-grade secure communication patterns Collaboration Enable agents to delegate tasks, exchange context, work together
  16. • Simple: Reuses HTTP, JSON-RPC 2.0, Server-Sent Events • Enterprise

    Ready: Auth, security, privacy, tracing, monitoring built-in • Async First: Designed for long-running tasks, human-in-the-loop • Modality Agnostic: Text, audio/video, structured data, embedded UI • Opaque Execution: No need to share internal thoughts, plans, or tools Guiding Principles
  17. Client • Discover agents via Agent Cards • Authenticate requests

    • Send messages / create tasks • Handle streaming responses • Expose webhook for notifications Client Agent vs Remote Agent Remote • Publish Agent Card /.well-known/agent-card.json • Authenticate incoming requests • Process tasks & generate artifacts • Stream responses via SSE • Send push notifications
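A remote agent's published manifest can be sketched as below. The top-level field names follow the A2A Agent Card shape; the concrete agent, URL, and skill are invented for illustration.

```python
import json

# Sketch of a minimal Agent Card, as a remote agent might serve it at
# /.well-known/agent-card.json. Field names follow the A2A spec; the
# agent itself ("Image Studio Agent") is hypothetical.
agent_card = {
    "name": "Image Studio Agent",
    "description": "Generates and edits images on request.",
    "url": "https://agents.example.com/a2a",  # A2A endpoint, made up
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": True},
    "skills": [
        {
            "id": "generate-image",
            "name": "Generate image",
            "description": "Create an image from a text prompt.",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```

A client agent fetches this document during discovery, checks the advertised capabilities and skills, and only then decides whether and how to delegate a task.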
  18. Core Concepts 1. Agent Card Metadata, self-describing manifest 2. Task

    Stateful work unit, lifecycle, contains status, history and artifacts 3. Message Message ("user" / "agent"), 1-X parts, conversation history 4. Part Smallest unit: TextPart, FilePart, DataPart 5. Artifact Generated result output, composed parts 6. Context Identifier to group tasks, maintains state across interactions
  19. • id Unique task identifier • contextId Grouping

    identifier • status Current TaskStatus submitted ➡ working ➡ completed / failed / canceled / rejected • Interrupts: input-required & auth-required • history Message array • artifacts Generated outputs • metadata Extension data Task
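The lifecycle on the slide can be sketched as a small state machine. The states come from the slide; the exact set of allowed transitions is an assumption for illustration (e.g. whether a rejected task can only come from submitted).

```python
# Task lifecycle sketch: submitted -> working -> completed/failed/canceled/
# rejected, with input-required and auth-required as interrupt states that
# resume into working. Transition table is illustrative, not normative.
TRANSITIONS = {
    "submitted": {"working", "rejected"},
    "working": {"completed", "failed", "canceled",
                "input-required", "auth-required"},
    "input-required": {"working", "canceled"},
    "auth-required": {"working", "canceled"},
}
TERMINAL = {"completed", "failed", "canceled", "rejected"}

def advance(state: str, new_state: str) -> str:
    """Move a task to new_state, rejecting illegal transitions."""
    if state in TERMINAL:
        raise ValueError(f"task already finished: {state}")
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "submitted"
for step in ("working", "input-required", "working", "completed"):
    state = advance(state, step)
print(state)
```

The two interrupt states are what make A2A "async first": a task can pause for user input or extra credentials and later resume without losing its history or artifacts.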
  20. • tasks/send - Submit task message to agent • tasks/get

    - Retrieve task status • tasks/cancel - Cancel running task • tasks/sendSubscribe - Send and subscribe to updates Core Protocol Methods
  21. Task Followup

    Request 1 (generate an image):
    { "jsonrpc": "2.0", "id": "req-001", "method": "message.send", "params": { "message": { "role": "user", "parts": [ { "kind": "text", "text": "Generate an image of a sailboat on the ocean." } ], "messageId": "msg-user-001" } } }

    Response 1 (completed task with artifact):
    { "jsonrpc": "2.0", "id": "req-001", "result": { "id": "task-boat-gen-123", "contextId": "ctx-conversation-abc", "status": { "state": "completed" }, "artifacts": [ { "artifactId": "artifact-boat-v1-xyz", "name": "sailboat_image.png", "description": "A generated image of a sailboat on the ocean.", "parts": [ { "kind": "file", "file": { "name": "sailboat_image.png", "mimeType": "image/png", "bytes": "base64_encoded_png_data_of_a_sailboat" } } ] } ], "kind": "task" } }

    Request 2 (follow-up referencing the first task):
    { "jsonrpc": "2.0", "id": "req-002", "method": "message.send", "params": { "message": { "role": "user", "messageId": "msg-user-002", "contextId": "ctx-conversation-abc", "referenceTaskIds": [ "task-boat-gen-123" ], "parts": [ { "kind": "text", "text": "Please modify the sailboat to be red." } ] } } }

    Response 2 (new task in the same context):
    { "jsonrpc": "2.0", "id": "req-002", "result": { "id": "task-boat-color-456", "contextId": "ctx-conversation-abc", "status": { "state": "completed" }, "artifacts": [ { "artifactId": "artifact-boat-v2-red-pqr", "name": "sailboat_image.png", "description": "A generated image of a red sailboat on the ocean.", "parts": [ { "kind": "file", "file": { "name": "sailboat_image.png", "mimeType": "image/png", "bytes": "base64_encoded_png_data_of_a_RED_sailboat" } } ] } ], "kind": "task" } }
  22. Key Takeaways • Open standard for agent interoperability • Built

    on existing web standards • Enterprise-grade security built-in • Complements MCP for complete stack Agent-to-Agent Protocol (A2A) Official Resources a2a-protocol.org github.com/a2aproject/A2A https://github.com/a2aproject/a2a-inspector https://www.a2aprotocol.net/docs/specification
  23. The Full Protocol Stack Model Context Protocol (MCP) The Agent–User

    Interaction (AG-UI) Protocol Agent-to-Agent Protocol (A2A)
  24. The Agent–User Interaction (AG-UI) Protocol AG-UI is an open, lightweight,

    event-based protocol that standardizes how AI agents connect to user-facing applications. https://docs.ag-ui.com/introduction
  25. • MCP connects agents to tools & context • A2A

    enables agent-to-agent communication • No standard for agent-to-user interaction • Each framework invented custom protocols Missing Link
  26. Traditional APIs Don't Work for Agents Agentic applications break the

    request/response model. Client makes request → server returns data → interaction ends. Agents don't work this way.
  27. • Custom WebSocket formats per agent • Ad-hoc JSON parsing

    and text hacks • Reinvent adapters for each framework • Complex state synchronization WITHOUT AG-UI (PAIN POINTS)
  28. Standardization: Without AG-UI vs. With AG-UI

    Framework fragmentation: without, every UI must code adapters for LangGraph, CrewAI, Mastra, etc. (M×N problem); with, a single protocol lets any framework work with any UI (M+N).
    State synchronization: without, manual polling or complex WebSocket logic; with, JSON Patch state deltas and standardized sync.
    Human-in-the-loop: without, custom approval workflows per app; with, standardized HITL events built into the protocol.
    Debugging: without, server-side logs only (UI-agent communication is a black box); with, full client-side event tracing and transparent execution.
    Generative UI: without, no standard for agents to specify UI components; with, built-in component rendering events.
  29. CopilotKit Framework • Manages Communication • Helps with state •

    Human in the loop (HITL) • Use ready-made React components AG-UI Client https://docs.copilotkit.ai/pydantic-ai/quickstart/pydantic-ai?path=exiting-agent
  30. Server Communication Events

    Lifecycle Events: Monitor the progression of agent runs
    Text Message Events: Handle streaming textual content
    Tool Call Events: Manage tool executions by agents
    State Management Events: Synchronize state between agents and UI
    Activity Events: Represent ongoing activity progress
    Special Events: Support custom functionality
    Draft Events: Proposed events under development
    https://docs.ag-ui.com/concepts/events ➡ 16 standard events in total
  31. • SSE (Server-Sent Events) - Default, text-based, HTTP • Binary

    Protocol - High performance, custom serialization • WebSocket - Bidirectional, low latency • HTTP Polling - Fallback for restrictive networks Communication Transport
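Since SSE is the default transport, the wire format is worth seeing once. Below is a minimal parser for the plain SSE framing (`event:`/`data:` lines, events separated by a blank line); the sample event name and payload are illustrative.

```python
# Sketch: parse a Server-Sent Events stream into (event, data) pairs.
# This handles only the basic SSE framing, not retries or event ids.
def parse_sse(stream: str):
    events = []
    event_type, data_lines = "message", []
    for line in stream.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "":  # blank line terminates one event
            if data_lines:
                events.append((event_type, "\n".join(data_lines)))
            event_type, data_lines = "message", []
    return events

raw = 'event: TEXT_MESSAGE_CONTENT\ndata: {"delta": "Hello"}\n\n'
print(parse_sse(raw))
```

Because each event is self-delimiting text over plain HTTP, SSE works through proxies and load balancers that block WebSockets, which is why it makes a sensible default before reaching for the binary or bidirectional options.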
  32. 1. TEXT_MESSAGE_START { messageId, role: "assistant" } 2. TEXT_MESSAGE_CONTENT (×N)

    { messageId, delta: "Hello" } { messageId, delta: " world" } 3. TEXT_MESSAGE_END { messageId } Text / Tool Streaming Event Pattern 1. TOOL_CALL_START { toolCallId: "call_123", toolName: "search_database" } 2. TOOL_CALL_ARGS { toolCallId: "call_123", args: { query: "users" } } 3. TOOL_CALL_END { toolCallId: "call_123", result: [...] }
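A client consumes the START/CONTENT/END triplet above by reducing it into one message keyed by `messageId`. A minimal sketch of that reduction:

```python
# Sketch: assemble the streaming event pattern from the slide into a
# complete assistant message, keyed by messageId.
events = [
    {"type": "TEXT_MESSAGE_START", "messageId": "m1", "role": "assistant"},
    {"type": "TEXT_MESSAGE_CONTENT", "messageId": "m1", "delta": "Hello"},
    {"type": "TEXT_MESSAGE_CONTENT", "messageId": "m1", "delta": " world"},
    {"type": "TEXT_MESSAGE_END", "messageId": "m1"},
]

messages = {}
for ev in events:
    mid = ev["messageId"]
    if ev["type"] == "TEXT_MESSAGE_START":
        messages[mid] = ""                  # open a new streaming message
    elif ev["type"] == "TEXT_MESSAGE_CONTENT":
        messages[mid] += ev["delta"]        # append each delta as it arrives
    # TEXT_MESSAGE_END marks the message complete; nothing to append

print(messages["m1"])
```

The tool-call triplet (TOOL_CALL_START / TOOL_CALL_ARGS / TOOL_CALL_END) reduces the same way, keyed by `toolCallId` instead of `messageId`, which is what lets a UI render spinners and partial arguments while a tool runs.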
  33. • Bi-directional state management • STATE_SNAPSHOT Complete state object for

    initialization or refresh • STATE_DELTA JSON Patch (RFC 6902) for efficient incremental updates State
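Applying a STATE_DELTA means applying an RFC 6902 JSON Patch to the last snapshot. The sketch below handles only `replace` and `add` on object keys; a real client would use a complete JSON Patch implementation, and the cart state is an invented example.

```python
# Sketch: apply a STATE_DELTA as a (partial) RFC 6902 JSON Patch.
# Only "replace"/"add" on object paths are supported here.
def apply_patch(state: dict, patch: list) -> dict:
    for op in patch:
        parts = op["path"].strip("/").split("/")
        target = state
        for key in parts[:-1]:          # walk to the parent object
            target = target[key]
        if op["op"] in ("replace", "add"):
            target[parts[-1]] = op["value"]
        else:
            raise NotImplementedError(op["op"])
    return state

state = {"cart": {"items": 2, "total": 20.0}}  # from a STATE_SNAPSHOT
delta = [{"op": "replace", "path": "/cart/items", "value": 3},
         {"op": "replace", "path": "/cart/total", "value": 30.0}]
print(apply_patch(state, delta))
```

Sending only the two changed fields instead of the whole state object is what makes the delta channel cheap enough to emit on every agent step.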
  34. Static • Agent specifies exact component and props

    to render • E.g. component is mapped to a specific event in the UI UI Capabilities Declarative • Agent describes intent, framework selects components • Agent knows about components and props
  35. • Simple request/response with no streaming • Batch processing without

    user interaction • Static content delivery • Ultra-low latency requirements (<10ms) When Not to Use AG-UI
  36. • Real-time Streaming Complexity • Tool Orchestration • Agents call

    functions, run code, hit APIs — UI must show progress and handle approval • Concurrency & Cancellation • Unified Interface: LangChain, CrewAI, Mastra all speak different dialects Solved Problems
  37. • Keep Humans in the loop • Visual Feedback •

    Actual questions and commits • Increase acceptance through observability • Loading placeholder effect AG-UI