
[DevDojo] Introduction to LLMs & AI Agents - 2025

mercari
November 25, 2025


This slide introduces the principles of Large Language Models (LLMs) and the concepts of various tools that enable LLMs to interact with external environments.
It also explains the definition of AI agents, which combine LLMs with tools to enable autonomous task execution, and the Model Context Protocol (MCP) for connecting AI applications to external systems.

Transcript

  1. Self-Introduction
     • @lukas
     • Joined Mercari in 2022 as an Android Engineer
     • Worked in the Client Architecture & Design System teams
     • Since 07/2025, part of the AI Task Force
  2. Outline of this session
     1. Presentation
        a. How do LLMs work?
        b. What are AI Agents?
     2. Hands-on session
  3. What are Large Language Models (LLMs)?
     • Machine-learning models that handle natural language
     • Trained on vast amounts of text
       ◦ Pre-training: self-supervised learning to create the base model
       ◦ Post-training: training for specific tasks
     • Operate on tokens
       ◦ Example: Tokenizer - OpenAI API
     • Model input: a sequence of tokens
     • Model output: a single token
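The token-based interface can be illustrated with a toy word-level tokenizer. This is a deliberate simplification — real LLM tokenizers (like the OpenAI tokenizer linked above) use subword schemes such as BPE, and the vocabulary here is invented:

```python
# Toy word-level tokenizer: maps each known word to an integer ID.
# Real LLM tokenizers use subword units, not whole words.
VOCAB = {"hello": 0, "world": 1, "mercari": 2, "<unk>": 3}
ID_TO_WORD = {i: w for w, i in VOCAB.items()}

def encode(text: str) -> list[int]:
    """Turn text into a sequence of token IDs (the model's input)."""
    return [VOCAB.get(word, VOCAB["<unk>"]) for word in text.lower().split()]

def decode(ids: list[int]) -> str:
    """Turn token IDs back into text."""
    return " ".join(ID_TO_WORD[i] for i in ids)

print(encode("Hello Mercari"))  # [0, 2]
print(decode([0, 1]))           # hello world
```

The model never sees raw text — only these integer IDs, which is why token counts (not character counts) determine context-window limits.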
  4. How do LLMs produce the output they do?
     • Each run of the LLM produces a single token
     • To output a full sentence, a number of LLM passes are chained
       ◦ E.g. for input "ABCDEF", we get output "GHIJ" using the following sequence of calls:
         ▪ ABCDEF -> G
         ▪ ABCDEFG -> H
         ▪ ABCDEFGH -> I
         ▪ ABCDEFGHI -> J
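The chaining above can be sketched as a simple loop. Here `toy_model` stands in for a real LLM: it "predicts" the next letter of the alphabet, which reproduces the ABCDEF -> GHIJ example exactly:

```python
def toy_model(tokens: str) -> str:
    """Stand-in for an LLM: one call produces exactly one output token.
    Here each 'token' is a character and the prediction is simply the
    next letter of the alphabet after the last input character."""
    return chr(ord(tokens[-1]) + 1)

def generate(prompt: str, n_tokens: int) -> str:
    """Autoregressive decoding: feed the growing sequence back into the
    model, one call per output token."""
    sequence = prompt
    for _ in range(n_tokens):
        sequence += toy_model(sequence)  # ABCDEF -> G, ABCDEFG -> H, ...
    return sequence[len(prompt):]

print(generate("ABCDEF", 4))  # GHIJ
```

The key point is that the model itself is stateless between calls: all "memory" of earlier output comes from feeding the whole sequence back in.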
  5. Limitations of models
     • Lack of memory
       ◦ An LLM's weights are only updated during training; it retains nothing between runs
     • Knowledge cutoff
       ◦ Models don't have knowledge of recent news, discoveries, confidential data, etc.
     • So, how can these models be useful for us if they have no Mercari-internal knowledge?
       ◦ In-context learning
       ◦ (Traditional) RAG
       ◦ Tools
  6. Tools
     • Mechanism that allows LLMs to interact with their "environment"
       ◦ Think of them like function calls!
     • LLMs can "call" a tool that either:
       ◦ Returns information they can use for their response (in-context learning)
       ◦ Changes the environment (e.g. writes a file)
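To make the function-call analogy concrete: a tool is typically described to the model as a name plus a JSON schema of its parameters. The example below follows the general shape of common function-calling APIs, but the tool name and fields are invented for illustration, not any vendor's exact format:

```python
import json

# Illustrative tool description: name, purpose, and typed parameters.
# This JSON is what the serving system places into the model's context
# so the model "knows" the tool exists and how to call it.
get_weather_tool = {
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Tokyo"},
        },
        "required": ["city"],
    },
}

print(json.dumps(get_weather_tool, indent=2))
```

Because the description is just text in the context, the model can only use tools it has been told about — there is no other discovery mechanism.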
  7. How can LLMs call a tool?
     • LLMs can only generate tokens
     • To call a tool, four steps are necessary:
       a. The LLM needs to know the list of tools available to it (provided in the context)
       b. When it wants to call a tool, the model generates a specific sequence of tokens
       c. The system that serves the LLM to the user interprets this as a tool call and runs the tool
       d. The previous conversation + the result of the tool are given back to the LLM to generate more tokens
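Steps a–d can be sketched end to end with a fake model. The `TOOL_CALL` marker, the `get_time` tool, and the model's replies are all invented for illustration; real systems use model-specific token sequences and structured API responses instead of string matching:

```python
import json

def get_time(city: str) -> str:
    """A tool the serving system can run on the model's behalf (hypothetical)."""
    return f"12:00 in {city}"

TOOLS = {"get_time": get_time}

def fake_model(context: str) -> str:
    """Stand-in for an LLM. Without a tool result in context it emits a
    special token sequence requesting a tool call; otherwise it answers."""
    if "TOOL_RESULT" not in context:
        return 'TOOL_CALL {"name": "get_time", "arguments": {"city": "Tokyo"}}'
    return "The time in Tokyo is 12:00."

def serve(user_message: str) -> str:
    context = f"Tools: get_time. User: {user_message}"   # step a: tools in context
    output = fake_model(context)                          # step b: model emits tokens
    if output.startswith("TOOL_CALL "):                   # step c: system interprets
        call = json.loads(output[len("TOOL_CALL "):])
        result = TOOLS[call["name"]](**call["arguments"])
        context += f"\nTOOL_RESULT: {result}"             # step d: feed result back
        output = fake_model(context)
    return output

print(serve("What time is it in Tokyo?"))  # The time in Tokyo is 12:00.
```

Note that the model never executes anything itself — it only emits tokens, and the surrounding system does the actual work.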
  8. What are AI Agents?
     • "An artificial intelligence (AI) agent is a system that autonomously performs tasks by designing workflows with available tools." - IBM
     • The combination of LLM + tools allows for self-driven decision making
     • Examples
       ◦ Coding agents (Claude Code, Codex)
       ◦ ChatGPT Research mode (Deep Research)
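The "self-driven" part is essentially the tool-call flow wrapped in a loop: the model decides at each step whether to act or to finish. A minimal sketch, with every name and reply invented as a stand-in for a real LLM and real tools:

```python
# Minimal agent loop sketch: at each step, a (fake) model chooses either
# to call a tool or to produce a final answer.
def list_files() -> str:
    return "README.md, main.py"

TOOLS = {"list_files": list_files}

def fake_model(history: list[str]) -> str:
    """Decides the next step from the conversation so far."""
    if not any(h.startswith("observation:") for h in history):
        return "action: list_files"
    return "final: The project contains README.md and main.py."

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"task: {task}"]
    for _ in range(max_steps):          # autonomy = a loop, not a single shot
        step = fake_model(history)
        if step.startswith("final:"):
            return step[len("final:"):].strip()
        tool_name = step[len("action:"):].strip()
        history.append(f"observation: {TOOLS[tool_name]()}")
    return "Gave up after max_steps."

print(run_agent("What files are in this project?"))
```

The `max_steps` cap is a common safeguard: since the model chooses its own workflow, the loop needs an external bound to guarantee termination.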
  9. Model Context Protocol (MCP)
     • "Open-source standard to connect AI applications to external systems" - What is the Model Context Protocol (MCP)?
     • Protocol that provides Resources, Prompts, and Tools to AI agents
     • Examples
       ◦ Atlassian MCP
       ◦ GitHub MCP
       ◦ Figma MCP
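MCP messages are based on JSON-RPC 2.0. As a rough sketch, a client asking a server which tools it exposes looks like the exchange below; this is simplified (the real protocol also involves an initialization handshake), and the `get_issue` tool is invented for illustration:

```python
import json

# Client -> server: ask which tools the MCP server exposes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Server -> client: a simplified response advertising one (hypothetical) tool.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_issue",
                "description": "Fetch an issue from the tracker.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"id": {"type": "string"}},
                },
            }
        ]
    },
}

print(json.dumps(request))
print(response["result"]["tools"][0]["name"])
```

Because the tool descriptions travel over a standard protocol, any MCP-capable agent can use any MCP server (Atlassian, GitHub, Figma, ...) without custom integration code per service.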
  10. AI tools at Mercari
      • Security review
        ◦ Every AI service that you use for work needs to be approved by our security team!
      • LiteLLM
        ◦ Lightweight proxy around model APIs!
      • Coding Agents
      • MCPs
        ◦ List of approved servers