Agent Frameworks: a spectrum from high flexibility (but hard to use) to low flexibility (but high control & convenience)
• Level 0: Build Your Own (use function calling to build your own agent).
• Level 1: Low-level orchestration framework, e.g. LangChain.
• Level 2: Graph-based orchestration, e.g. LangGraph.
• Level 4: Agentspace Builder & Conversational Agents.
Introducing ADK
Make agent development feel like software development.
• Simplify creation, deployment, and orchestration.
Core Principles
• Model-agnostic: optimized for Gemini, but supports other models via LiteLLM.
• Deployment-agnostic: local, Cloud Run, or Agent Engine.
• Compatible with other frameworks (e.g., LangChain, CrewAI).
Core Concept: The Agent
The foundation for all other agents.
• LlmAgent (or the Agent alias): the "thinking" part, powered by an LLM for reasoning, decision-making, and tool use.
• Workflow Agents: deterministic controllers for orchestrating sub-agents (see the sketch after this list).
  ◦ SequentialAgent
  ◦ ParallelAgent
  ◦ LoopAgent
• Custom Agents: for unique, non-LLM-based logic.
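To make the workflow agents concrete, here is a minimal sketch of a SequentialAgent pipeline; the agent names and instructions are illustrative, not from the deck:

from google.adk.agents import Agent, SequentialAgent

# Illustrative two-step pipeline: draft an answer, then refine it.
drafter = Agent(
    name="Drafter", model="gemini-2.0-flash",
    instruction="Write a short draft answer to the user's question.")
reviewer = Agent(
    name="Reviewer", model="gemini-2.0-flash",
    instruction="Polish and fact-check the draft from the previous step.")

# Runs its sub-agents in order; ParallelAgent and LoopAgent follow the same shape.
pipeline = SequentialAgent(name="DraftThenReview",
                           sub_agents=[drafter, reviewer])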
from google.adk.agents import LlmAgent
from google.adk.models.lite_llm import LiteLlm  # if using non-Gemini models

weather_agent = LlmAgent(
    name="weather_reporter",
    model="gemini-2.0-flash",
    # model=LiteLlm(model="openai/gpt-4o"),  # Alternative
    description="Provides current weather info for a given city",
    instruction="You are a weather bot. Use the get_weather_tool "
                "to find the weather. Clearly state the city and "
                "the weather report.",
    tools=[get_weather_tool],
)
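A minimal sketch of running this agent locally with the Runner. The app name, user ID, and session ID are placeholder values, and the exact session API can vary slightly across ADK versions:

import asyncio
from google.genai import types
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService

async def main():
    session_service = InMemorySessionService()
    runner = Runner(agent=weather_agent, app_name="weather_app",
                    session_service=session_service)
    # A session must exist before sending messages.
    await session_service.create_session(
        app_name="weather_app", user_id="u1", session_id="s1")
    msg = types.Content(role="user",
                        parts=[types.Part(text="What's the weather in Paris?")])
    # Stream events and print the agent's final answer.
    async for event in runner.run_async(
            user_id="u1", session_id="s1", new_message=msg):
        if event.is_final_response():
            print(event.content.parts[0].text)

asyncio.run(main())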
from google.adk.tools import FunctionTool

def get_stock_price(symbol: str) -> dict:
    """Retrieves the current stock price for a given symbol.

    Args:
        symbol (str): The stock symbol (e.g. "SPY").

    Returns:
        dict: {'status': 'success', 'price': 583.09}
              or {'status': 'error', 'message': 'Not found'}
    """
    if symbol == "QQQ":
        return {'status': 'success', 'price': 510.09}
    return {'status': 'error', 'message': 'Not found'}

stock_price_tool = FunctionTool(func=get_stock_price)

# Agent using this tool
# financial_agent = Agent(..., tools=[stock_price_tool])
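ADK also accepts a plain Python callable in tools and wraps it in a FunctionTool automatically (the delegation slide later in this deck passes raw functions the same way). A sketch of the commented-out agent above, with an illustrative name and instruction:

from google.adk.agents import Agent

financial_agent = Agent(
    name="financial_assistant",
    model="gemini-2.0-flash",
    instruction="Answer stock questions using the get_stock_price tool.",
    tools=[get_stock_price],  # plain function; ADK wraps it automatically
)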
AgentTool: Agents Calling Agents
• Wraps an agent so it can be used as a callable function or Tool.
• The "called" agent executes its full logic.
• Its final response is returned as the "tool result" to the calling agent.
• Enables hierarchical task decomposition and expert consultation.
• Different from sub-agent delegation, where control is fully transferred. With AgentTool, the caller gets the result back and decides the next step.
from google.adk.agents import Agent
from google.adk.tools import AgentTool

# Called agent
summarizer_agent = Agent(
    name="TextSummarizer",
    model="gemini-2.0-flash",
    instruction="Summarize the given text concisely")

# Calling agent
researcher_agent = Agent(
    name="ResearchLead",
    model="gemini-2.0-flash",
    instruction="You have a summarizer tool. Use it to summarize "
                "research papers",
    tools=[AgentTool(agent=summarizer_agent)]  # Wrap summarizer agent
)
# Root agent routes requests to sub-agents based on their descriptions
root_agent = Agent(
    name="Coordinator",
    model="gemini-2.0-flash",
    instruction="If the user asks for weather, "
                "delegate to 'WeatherExpert'. If the user says goodbye, "
                "delegate to 'FareWeller'.",
    # Descriptions for sub-agents are key for the LLM to decide delegation
    sub_agents=[
        Agent(name="WeatherExpert", model="gemini-2.0-flash",
              tools=[get_weather],
              description="Handles weather requests"),
        Agent(name="FareWeller", model="gemini-2.0-flash",
              tools=[say_goodbye],
              description="Responds to farewells")]
)
Built-in Tools: Google Search & Code Execution
Google Search
• Enables the agent to perform web searches.
• Integrates grounding from Google Search directly into the agent's response capabilities.
• Requires a Gemini 2.x model.
Code Execution
• Enables the agent to execute the Python code generated by the LLM.
• from google.adk.code_executors import BuiltInCodeExecutor
• from google.adk.code_executors import VertexAiCodeExecutor
• Agent(..., code_executor=BuiltInCodeExecutor())
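A minimal sketch showing both built-ins, each on its own agent; the agent names and instructions are illustrative:

from google.adk.agents import Agent
from google.adk.tools import google_search
from google.adk.code_executors import BuiltInCodeExecutor

# Grounded search agent (requires a Gemini 2.x model)
search_agent = Agent(
    name="SearchAssistant",
    model="gemini-2.0-flash",
    instruction="Answer questions, using Google Search when helpful.",
    tools=[google_search],
)

# Code-executing agent: the LLM writes Python, the executor runs it
calculator_agent = Agent(
    name="Calculator",
    model="gemini-2.0-flash",
    instruction="Write and execute Python code to compute precise answers.",
    code_executor=BuiltInCodeExecutor(),
)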
Deployment: Taking Your Agent Live
Python Deployment Options
• Vertex AI Agent Engine (recommended for Python): a fully managed, auto-scaling service for ADK agents.
  ◦ The Agent Starter Pack is a collection of production-ready generative AI agent templates.
  ◦ from vertexai import agent_engines
• Cloud Run: deploy as a container-based application.
  ◦ Use adk deploy cloud_run … (CLI helper)
  ◦ Or manually, with a Dockerfile and gcloud run deploy.
Java deployment typically uses Cloud Run with a Dockerfile.
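A sketch of the Agent Engine path; the project, region, and staging bucket are placeholders, and packaging options are covered in the Agent Engine docs:

import vertexai
from vertexai import agent_engines

# Placeholders: substitute your own project, region, and staging bucket
vertexai.init(project="my-project", location="us-central1",
              staging_bucket="gs://my-staging-bucket")

# Deploys an ADK agent (e.g. weather_agent from earlier) as a managed app
remote_app = agent_engines.create(
    agent_engine=weather_agent,
    requirements=["google-cloud-aiplatform[adk,agent_engines]"],
)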
More ADK & Community Resources
Streaming: Real-time, bidirectional streaming via Runner.run_live() and LiveRequestQueue.
• Check adk-docs/streaming for quickstarts.
Safety & Security: Best practices for guardrails, auth, and sandboxed code execution. (See adk-docs/safety.)
Model Context Protocol (MCP): An open standard for LLM-tool communication; ADK can consume MCP tools. (See adk-docs/mcp and the sketch below.)
Community Resources
• GitHub Discussions for questions and sharing.
• Sample agent repositories.
• Contribution guide for docs and code. (adk-docs/contributing-guide)
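A sketch of consuming MCP tools, following the adk-docs/mcp pattern; the filesystem server, path, and agent details are placeholders, and the connection-params API has changed across ADK versions:

from google.adk.agents import Agent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

# Expose an MCP filesystem server's tools to an ADK agent
fs_agent = Agent(
    name="fs_assistant",
    model="gemini-2.0-flash",
    instruction="Help the user browse and read files.",
    tools=[MCPToolset(
        connection_params=StdioServerParameters(
            command="npx",  # launches the MCP server as a subprocess
            args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
        )
    )],
)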