MCP: Principles and Practice

Large‑language‑model agents are only as useful as the context and tools they can reach. Anthropic’s Model Context Protocol (MCP) proposes a universal, bidirectional interface that turns every external system—SQL databases, Slack, Git, web browsers, even your local file‑system—into first‑class “context providers.”

In just 30 minutes we’ll step from high‑level buzzwords to hands‑on engineering details:
* How MCP’s JSON‑RPC message format, streaming channels, and version‑negotiation work under the hood.
* Why per‑tool sandboxing via isolated client processes hardens security (and what happens when an LLM tries rm ‑rf /).
* Techniques for hierarchical context retrieval that stretch a model’s effective window beyond token limits.
* Real‑world patterns for accessing multiple tools—Postgres, Slack, GitHub—and plugging MCP into GenAI applications.
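To make the first bullet concrete, here is a sketch of the JSON-RPC 2.0 `initialize` request an MCP client sends to open a session and negotiate a protocol version. The field names follow the MCP specification; the version string, capabilities, and client name are illustrative only.

```python
import json

# Sketch of the MCP handshake message (JSON-RPC 2.0). The server replies
# with the protocol version it settled on plus its own capabilities.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",   # client's preferred version (illustrative)
        "capabilities": {"sampling": {}},  # features this client supports
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

print(json.dumps(initialize_request, indent=2))
```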

Expect code snippets and lessons from early adoption. You’ll leave ready to wire your own services into any MCP‑aware model and level‑up your GenAI applications—without the N×M integration nightmare.

Luca Baggi

March 04, 2026

Transcript

  1. 📍Outline ✔ Memes 🎯 Takeaways 📏 From tool calls to MCP 🔬 What’s inside an MCP? 🐘 Contextual problems 🤝 Enter: skills, code mode, etc 🎬 So what?
  2. 🖐 But first, some well-deserved acknowledgments: check out the original talk and code from Fabio, Gabriele and Lele
  3. 🎯 Takeaways • MCP is more than a REST API: it has resources, tools and prompts. • It’s designed for agents, and it should be built for agents - i.e., you can’t just port your REST API into an MCP and call it a day. • MCPs were historically context-hungry: we’re working on it (code execution, progressive discovery, skills). • MCPs still have a place in enterprise environments.
  4. 📏 From tool calls to MCP In the end, it’s all (JSON) strings • Models only emit text, so how can they “use tools”? • Simply put, we train an LLM to generate valid JSON when the user prompts it with some “tool definition”. • Then we add the function-call output back into the LLM’s context and have it synthesise the final answer.
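The three steps above can be sketched end to end. `call_llm` below is a hypothetical stand-in for any chat-completions API (a real model would emit the tool-call JSON itself); the tool-definition shape mirrors the common JSON-Schema style, and `get_weather` is an invented example tool.

```python
import json

# Step 0: a "tool definition" the model is prompted with.
get_weather_tool = {
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    return f"Sunny in {city}, 21 °C"  # stub implementation

def call_llm(messages, tools):
    # Stand-in for a real model: first turn emits a tool call as JSON,
    # second turn synthesises the final answer from the tool output.
    if messages[-1]["role"] == "user":
        return {"tool_call": {"name": "get_weather",
                              "arguments": json.dumps({"city": "Milan"})}}
    return {"content": "It's sunny in Milan at 21 °C."}

# Step 1: the model "uses a tool" by emitting valid JSON.
messages = [{"role": "user", "content": "Weather in Milan?"}]
reply = call_llm(messages, [get_weather_tool])

if "tool_call" in reply:
    # Step 2: we execute the call the model asked for...
    args = json.loads(reply["tool_call"]["arguments"])
    result = get_weather(**args)
    # Step 3: ...and feed the output back so the model can answer.
    messages.append({"role": "tool", "content": result})
    reply = call_llm(messages, [get_weather_tool])

print(reply["content"])
```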
  5. 📏 From tool calls to MCP What if we create a class…? • This three-step process is easy to abstract into a class. • Pair it with reasoning models, which can interleave tool calls in their reasoning (for example, to gather context), and you have a powerful agent. • There, you just (re)invented an agent.
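One possible shape for that abstraction: a hypothetical `Agent` class that loops the call-execute-feed-back cycle until the model stops asking for tools. The `llm` callable and tool-registry format are assumed interfaces, not a real library.

```python
import json

class Agent:
    """Minimal sketch of the tool-calling loop wrapped in a class."""

    def __init__(self, llm, tools):
        self.llm = llm      # callable: (messages, tool_schemas) -> reply dict
        self.tools = tools  # name -> (schema dict, python function)

    def run(self, prompt: str, max_steps: int = 5) -> str:
        messages = [{"role": "user", "content": prompt}]
        schemas = [schema for schema, _ in self.tools.values()]
        # Interleave tool calls until the model produces a final answer.
        for _ in range(max_steps):
            reply = self.llm(messages, schemas)
            if "tool_call" not in reply:
                return reply["content"]
            call = reply["tool_call"]
            _, fn = self.tools[call["name"]]
            result = fn(**json.loads(call["arguments"]))
            messages.append({"role": "tool", "content": str(result)})
        raise RuntimeError("no final answer within max_steps")
```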
  6. 🔬 What’s inside an MCP? Server-side: Tools • “Tools are schema-defined interfaces that LLMs can invoke. MCP uses JSON Schema for validation. […] Tools may require user consent prior to execution, helping to ensure users maintain control over actions taken by a model.”
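A sketch of what such a schema-defined tool looks like when a server advertises it, e.g. in a `tools/list` response. The field names follow MCP's JSON-Schema-based tool definitions; the tool itself is invented, and the validation helper is a toy stand-in for a real JSON Schema validator.

```python
# Invented example tool, in the shape a server would advertise it.
query_orders = {
    "name": "query_orders",
    "description": "Run a read-only SQL query against the orders database.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "A single SELECT statement."}
        },
        "required": ["sql"],
    },
}

def validate_arguments(tool: dict, arguments: dict) -> bool:
    """Toy check standing in for full JSON Schema validation."""
    schema = tool["inputSchema"]
    return all(key in arguments for key in schema.get("required", []))

print(validate_arguments(query_orders, {"sql": "SELECT count(*) FROM orders"}))
```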
  7. 🔬 What’s inside an MCP? Server-side: Resources • “Resources provide structured access to information that the AI application can retrieve and provide to models as context.”
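For illustration, here is a sketch of a resource descriptor and the shape of the content a client gets back when reading it (as in a `resources/read` result). Field names follow the MCP spec; the URI and content are invented.

```python
# A resource as a server might list it: an addressable piece of context.
resource = {
    "uri": "postgres://analytics/tables/orders",  # invented URI
    "name": "orders table schema",
    "mimeType": "text/plain",
}

# Shape of the content returned when the client reads that resource.
read_result = {
    "contents": [{
        "uri": resource["uri"],
        "mimeType": resource["mimeType"],
        "text": "orders(id INT, customer_id INT, total NUMERIC)",
    }]
}

# The client can now hand `read_result["contents"]` to the model as context.
print(read_result["contents"][0]["text"])
```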
  8. 🔬 What’s inside an MCP? Server-side: Prompts • Prompts provide reusable templates. • Prompts are user-controlled, requiring explicit invocation.
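A sketch of a prompt definition and how a client might render it once the user explicitly invokes it. The structure (name, description, arguments) follows the MCP prompt shape; the prompt name, template text, and `render` helper are invented for illustration.

```python
import string

# Invented example prompt, in the MCP-style shape a server would list.
summarise_incident = {
    "name": "summarise_incident",
    "description": "Summarise a production incident for the status page.",
    "arguments": [
        {"name": "incident_id", "description": "Ticket id", "required": True},
    ],
}

TEMPLATE = string.Template(
    "Summarise incident $incident_id in three bullet points."
)

def render(prompt: dict, **kwargs) -> str:
    """Check required arguments, then fill the template (user-invoked)."""
    missing = [a["name"] for a in prompt["arguments"]
               if a.get("required") and a["name"] not in kwargs]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    return TEMPLATE.substitute(**kwargs)

print(render(summarise_incident, incident_id="INC-42"))
```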
  9. 🔬 What’s inside an MCP? Client-side: Roots • Roots define filesystem boundaries for server operations, allowing clients to specify which directories servers should focus on.
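A sketch of client-declared roots and the kind of containment check a server could apply before touching a path. The root list shape mirrors MCP's `file://` root URIs; the paths and the `within_roots` helper are illustrative.

```python
from pathlib import PurePosixPath

# Roots the client declares: the directories the server should stay inside.
roots = [{"uri": "file:///home/luca/project", "name": "project checkout"}]

def within_roots(path: str, roots: list[dict]) -> bool:
    """Return True if `path` falls under one of the declared roots."""
    p = PurePosixPath(path)
    for root in roots:
        root_path = PurePosixPath(root["uri"].removeprefix("file://"))
        if p.is_relative_to(root_path):
            return True
    return False

print(within_roots("/home/luca/project/src/main.py", roots))  # inside a root
print(within_roots("/etc/passwd", roots))                     # outside
```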
  10. 🔬 What’s inside an MCP? Client-side: Sampling • “Sampling enables servers to perform AI-dependent tasks without directly integrating with or paying for AI models. Instead, servers can request that the client—which already has AI model access—handle these tasks on their behalf. […] Because sampling requests occur within the context of other operations—like a tool analyzing data […]”
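To make this concrete, here is a sketch of the `sampling/createMessage` request a server could send to the client, asking it to run a completion on the server's behalf. The method and field names follow the MCP sampling spec; the request id and message text are illustrative.

```python
# A server-to-client sampling request: "please run this through your model".
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [{
            "role": "user",
            "content": {
                "type": "text",
                "text": "Summarise these query results in one sentence.",
            },
        }],
        "maxTokens": 200,  # cap on the completion the client should return
    },
}

print(sampling_request["method"])
```

The client stays in control: it can review, modify, or refuse the request before any model call happens, which is why the server never needs its own model credentials.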