Between AI and API: Real-Time Integration for LLM Applications using MCP

Event: SAP Inside Track Belgium 2025
Date: September 20, 2025
Speaker: Vadim Klimov
Session: Between AI and API - Real-time integration for LLM applications using MCP

Transcript

  1. Between AI and API: Real-time integration for LLM applications using MCP

    Vadim Klimov, SAP Integration Architect. SAP Inside Track Belgium, September 20, 2025.
  2. Speaker Info

    Dr. Vadim Klimov - Integration and cloud architect, SAP Cloud technologist (SAP BTP | AWS | Azure). Speaker at SAP technology events. Author at SAP PRESS / Rheinwerk Publishing. linktr.ee/vadimklimov
  3. The Problem

    Large Language Model (LLM) knowledge cutoff: LLM knowledge is static, limited to what the model was trained on up to a certain point in time.
    • Outdated or incomplete answers.
    • Inability to answer domain-specific questions or questions requiring access to private or internal organizational data.
    Retrieval-Augmented Generation (RAG): retrieval and pre-processing of information from external data sources, and augmentation of the LLM prompt with relevant retrieved data.
    Interface fragmentation: the lack of standardized interfaces between LLM applications and external data sources leads to integration complexity and increased development and maintenance effort.
  4. MCP at a Glance | Key Facts

    Open protocol that provides a standardized way to connect LLM applications with external data sources and tools.
    • Developed by Anthropic, released in November 2024.
    • Adopted by OpenAI, Google, Microsoft, AWS and others; rapidly expanding ecosystem.
    • Official SDKs available for TypeScript, Python, Java, Kotlin, C#, Go, Rust, Swift and Ruby.
    • Follows a client-server architecture; messages exchanged between client and server use the JSON-RPC 2.0 format.
    • Supported transport mechanisms are stdio and Streamable HTTP.
    For more information, visit https://modelcontextprotocol.io/.
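Because MCP messages are plain JSON-RPC 2.0, the wire format is easy to inspect. A minimal sketch of a tool-invocation request (the `tools/call` method name comes from the MCP specification; the tool name and arguments are hypothetical placeholders):

```python
import json

# Shape of a JSON-RPC 2.0 request an MCP client sends to invoke a server tool.
# "tools/call" is defined by the MCP specification; the tool name and
# arguments below are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_order_status",           # hypothetical tool
        "arguments": {"order_id": "4711"},
    },
}

wire = json.dumps(request)    # serialized for the stdio or Streamable HTTP transport
decoded = json.loads(wire)
print(decoded["method"])      # → tools/call
```

The same envelope carries the other protocol methods (e.g. `initialize`, `tools/list`); only `method` and `params` change.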
  5. MCP at a Glance | Architecture Overview

    [Architecture diagram: an MCP host (the LLM application, containing the LLM) runs one MCP client per connection; each MCP client speaks the Model Context Protocol to one MCP server, which fronts a local data source or a remote service.]
  6. MCP Server | Primitives

    Resources - Text or binary data that provides additional context to the model. Read-only access to data sources; each resource is uniquely identified by a URI. Application-controlled: the LLM application determines how to incorporate resource content into the context based on application requirements.
    Tools - Executable functions that allow the model to perform actions. Model-controlled: the model discovers and invokes tools based on contextual understanding and prompts.
    Prompts - Prompt templates (predefined structured message templates) that help guide the interaction between the user, the model and the MCP server. User-controlled: users trigger prompts by issuing commands through the user interface.
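The three primitives and their control models can be sketched in plain Python (this is an illustration of the concepts, not the official SDK API; all names and data are hypothetical):

```python
# Resource: read-only data addressed by a URI; the application decides
# whether and how its content enters the model's context.
resources = {
    "file:///docs/pricing.md": lambda: "Standard plan: 10 EUR/month",
}

# Tool: an executable function the model may discover and invoke itself.
def get_exchange_rate(base: str, quote: str) -> float:
    """Hypothetical tool; a real server would call a live FX service."""
    return {("EUR", "USD"): 1.08}.get((base, quote), 1.0)

tools = {"get_exchange_rate": get_exchange_rate}

# Prompt: a predefined message template the user triggers from the UI.
prompts = {
    "summarize": "Summarize the following document in three bullet points:\n{document}",
}

print(resources["file:///docs/pricing.md"]())   # application reads a resource
print(tools["get_exchange_rate"]("EUR", "USD")) # model invokes a tool
```

The official SDKs expose these same three primitives through their own registration APIs; the point here is only who controls each: application (resources), model (tools), user (prompts).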
  7. MCP Server | Challenges | Discovery and Lifecycle Management

    Discovery
    • MCP registry - metadata-based catalog, discovery, self-service
    • Active MCP server discovery
    Monitoring
    • Tool invocation and execution monitoring
    • Resource access monitoring
    • Distributed tracing
    • Anomaly detection
    Deployment
    • Local vs. remote MCP servers
    • Infrastructure provisioning
    • Secrets management
  8. MCP Server | Challenges | Performance

    Model confusion from tool overload
    • The model's tool selection accuracy may decline when presented with a large number of tools
    Context distraction and confusion
    • Context degradation syndrome - loss of focus, semantic and context drift
    • Inconsistent and inaccurate model output
    • Hallucination
    Context window overflow
    • Token tax
    • Truncation risk - potential loss of contextual information
    • Resource drain
    Backend overload
    • Backend resource drain and performance degradation - in extreme cases, leading to denial of service
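One way to contain the token tax and tool overload is to cap the tool descriptions a client advertises to the model within a fixed token budget. A rough sketch (the 4-characters-per-token estimate is a crude heuristic, not a tokenizer, and the tool catalog is hypothetical):

```python
# Budget the tool descriptions that enter the model's context.
# len(text) // 4 is a rough heuristic, not a real tokenizer.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def select_tools(tools: list[dict], budget_tokens: int) -> list[dict]:
    """Keep tools in priority order until the description budget is spent."""
    selected, used = [], 0
    for tool in tools:  # assume the list is pre-sorted by relevance
        cost = estimate_tokens(tool["description"])
        if used + cost > budget_tokens:
            break
        selected.append(tool)
        used += cost
    return selected

catalog = [
    {"name": "get_invoice", "description": "Fetch an invoice by its identifier from the billing system."},
    {"name": "list_orders", "description": "List recent orders for a given customer account."},
    {"name": "update_crm", "description": "Write a note back to the CRM record of a customer."},
]

print([t["name"] for t in select_tools(catalog, budget_tokens=30)])
# → ['get_invoice', 'list_orders']
```

Real deployments would rank tools by relevance to the current request (or rely on a gateway to do so) rather than using a static order.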
  9. MCP Server | Challenges | Security

    Context integrity
    • Prompt injection
    • Context poisoning
    • Context clash
    • Hallucination
    Tools integrity
    • Tool poisoning
    • Cross-tool contamination
    • Rug pulls
    • Tool squatting
    • Tool impersonation
    Identity and access control
    • Shadow access
    • Confused deputy
    • Unintended proxy
    • Secret exposure and credential theft
    Data leakage
    • Data exfiltration channel
    Privilege escalation
    • Sandbox escape
    Network and communication
    • Insecure connections
    Resource abuse
    • Denial of service
    • Denial of wallet
  10. MCP | Key Considerations (1/4)

    While MCP proxy servers have valid use cases, and tools exist to generate MCP servers automatically from an API specification, avoid using MCP as a bridge to existing APIs without careful evaluation. Keep the MCP server as a lightweight layer between the LLM application and external data sources. Use API gateways and other middleware tools to complement MCP servers where appropriate. Modern APIs are often resource-centric (and may align with MCP resources), but MCP tools should be task-oriented: consider developing higher-level tools that encapsulate and orchestrate API interactions.
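The resource-centric vs. task-oriented distinction can be sketched as follows: instead of exposing one tool per REST resource, one higher-level tool orchestrates several API calls and returns a task-shaped answer. The API functions are hypothetical stand-ins for real HTTP endpoints (e.g. GET /customers/{id}, GET /orders?customer={id}):

```python
# Hypothetical resource-centric API calls (stand-ins for HTTP endpoints).
def api_get_customer(customer_id: str) -> dict:
    return {"id": customer_id, "name": "ACME Corp", "segment": "enterprise"}

def api_get_orders(customer_id: str) -> list[dict]:
    return [{"id": "O-1", "status": "shipped"}, {"id": "O-2", "status": "open"}]

def account_overview(customer_id: str) -> dict:
    """One task-oriented MCP tool instead of one tool per REST resource:
    the model makes a single call and gets a curated, task-shaped result."""
    customer = api_get_customer(customer_id)
    orders = api_get_orders(customer_id)
    return {
        "customer": customer["name"],
        "open_orders": [o["id"] for o in orders if o["status"] == "open"],
    }

print(account_overview("C-42"))
# → {'customer': 'ACME Corp', 'open_orders': ['O-2']}
```

This keeps the orchestration logic and the token cost on the server side instead of forcing the model to chain low-level calls.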
  11. MCP | Key Considerations (2/4)

    Use token optimization and context window compression techniques. Shift focus from prompt engineering to context engineering: pay close attention to what goes into the context. Implement guardrails to sanitize inputs to and outputs from the model. Curate MCP server responses by applying entity filtering and field pruning, and by selecting appropriate data formats. Consider chunking and map-reduce techniques to address some performance and context window size limitations.
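Field pruning, the simplest of these curation techniques, can be sketched in a few lines: keep only the fields the task needs before the response reaches the model. The record layout below is hypothetical:

```python
# Prune an MCP server response down to the fields the current task needs;
# everything else only distracts the model and wastes tokens.

def prune(record: dict, keep: set[str]) -> dict:
    return {k: v for k, v in record.items() if k in keep}

raw = {
    "order_id": "O-2",
    "status": "open",
    "customer_ref": "C-42",
    "internal_audit_trail": ["..."],     # irrelevant to the task
    "warehouse_routing_code": "WH-7-B",  # irrelevant to the task
}

print(prune(raw, keep={"order_id", "status"}))
# → {'order_id': 'O-2', 'status': 'open'}
```

Entity filtering works the same way one level up: drop entire records that are out of scope before serializing the response.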
  12. MCP | Key Considerations (3/4)

    A security-first approach is essential. Enforce practices that mitigate supply chain risks; thoroughly assess and audit onboarded MCP servers. Review the authentication and authorization mechanisms currently in use, as well as the infrastructure that enables and supports them. Encourage the adoption of OAuth-based authorization flows, and consider dynamic client registration as a part of them. Use human-in-the-loop controls for critical workflow steps. Design multi-agent systems with clearly separated agents, each operating in its own context and accessing only the MCP servers and tools relevant to its role and AI agent persona.
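A human-in-the-loop control for critical steps can be as simple as a gate in front of tool execution: destructive actions require explicit confirmation before they run. The tool names and the approval callback below are illustrative assumptions:

```python
# Gate critical tool calls behind explicit human approval; routine tools
# execute directly. Tool names and callbacks are hypothetical.

CRITICAL_TOOLS = {"delete_record", "transfer_funds"}

def guarded_call(name: str, args: dict, execute, approve) -> str:
    """Run `execute` directly for routine tools; for critical ones, ask
    `approve` first and refuse when confirmation is withheld."""
    if name in CRITICAL_TOOLS and not approve(name, args):
        return "rejected: human approval required"
    return execute(name, args)

result = guarded_call(
    "transfer_funds",
    {"amount": 500},
    execute=lambda n, a: f"executed {n}",
    approve=lambda n, a: False,   # the human declines
)
print(result)                     # → rejected: human approval required
```

In practice the approval step would surface in the LLM application's UI; the essential point is that the model cannot bypass it.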
  13. MCP | Key Considerations (4/4)

    As MCP adoption grows, the MCP registry and gateway become increasingly critical as the single entry point for publication, discovery and inventory of verified MCP servers and tools, along with centralized management of server interactions. For inspiration, see MCP Registry, Smithery, GitHub MCP Registry, Docker MCP Catalog and Docker MCP Toolkit, Azure API Center, and MCP Server Cloud. Establish infrastructure for MCP server monitoring and anomaly detection.
  14. MCP streamlines AI-to-data integration by standardizing the way AI-powered applications connect to external data sources and tools, making LLMs aware of real-time data. MCP also introduces challenges: many are neither new nor unique to MCP, but MCP enables new exploitation methods, and growing MCP adoption amplifies them.