Slide 1

Basta Spring 2025
Turbo RAG: AI-based Retriever Selection with Semantic Router
Marco Frodl, Principal Consultant for Generative AI
[email protected] | @marcofrodl

Slide 2

About Me
Marco Frodl, Principal Consultant for Generative AI, Thinktecture AG
X: @marcofrodl
E-mail: [email protected]
LinkedIn: https://www.linkedin.com/in/marcofrodl/
https://www.thinktecture.com/thinktects/marco-frodl/

Slide 3

Turbo 🚀
https://www.aurelio.ai/semantic-router
"Semantic Router is a superfast decision-making layer for your LLMs and agents. Rather than waiting for slow, unreliable LLM generations to make tool-use or safety decisions, we use the magic of semantic vector space — routing our requests using semantic meaning."

Slide 4

Turbo 🚀
https://www.aurelio.ai/semantic-router
"Semantic Router is a superfast decision-making layer for your LLMs and agents. Rather than waiting for slow, unreliable LLM generations to make tool-use or safety decisions, we use the magic of semantic vector space — routing our requests using semantic meaning. It's perfect for: input guarding, topic routing, tool-use decisions."

Slide 5

Turbo 🚀 in Numbers
In my RAG example, a Semantic Router using remote services is 3.4 times faster than an LLM and 30 times less expensive. A local Semantic Router is 7.7 times faster than an LLM and 60 times less expensive.

Slide 6

Really?
- Safety
- Speed
- Budget

Slide 7

Refresher: What is RAG?
"Retrieval-Augmented Generation (RAG) extends the capabilities of LLMs to an organization's internal knowledge, all without the need to retrain the model."

Slide 8

Refresher: What is RAG?
https://aws.amazon.com/what-is/retrieval-augmented-generation/
"Retrieval-Augmented Generation (RAG) extends the capabilities of LLMs to an organization's internal knowledge, all without the need to retrain the model. It references an authoritative knowledge base outside of its training data sources before generating a response."

Slide 9

Simple RAG ("Ask me anything")
[Diagram: question → embedding model (question as vector) → vector DB search → search results + question → LLM → answer]
Workflow terms: Retriever, Chain
Elements: Embedding Model, Vector DB, LLM, Python, LangChain
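The workflow on this slide can be sketched in a few lines of Python. This is a toy illustration, not the talk's actual stack: a bag-of-words counter stands in for the embedding model, an in-memory list stands in for the vector DB, and `llm()` is a stub for a real chat-model call.

```python
# Minimal sketch of the Simple RAG workflow: embed the question,
# search the "vector DB" by cosine similarity, feed the results plus
# the question to an LLM. embed() and llm() are toy stand-ins.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding model: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Vector DB": documents stored alongside their vectors.
docs = [
    "LangChain chains connect retrievers and LLMs",
    "Semantic Router picks a route in vector space",
]
vector_db = [(d, embed(d)) for d in docs]

def retrieve(question: str, k: int = 1) -> list[str]:
    qv = embed(question)  # question as vector
    ranked = sorted(vector_db, key=lambda p: cosine(qv, p[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

def llm(prompt: str) -> str:
    """Stub standing in for a real LLM call."""
    return "(LLM answer based on: " + prompt.splitlines()[1] + ")"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return llm(prompt)
```

In a real LangChain setup, `retrieve` is the retriever, and the embed-search-prompt-generate sequence is the chain.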

Slide 10

Simple RAG in a Nutshell
Our sample content

Slide 11

Multiple Retrievers
Which retriever do you want?

Slide 12

Advanced RAG
Best source determination before the search
[Diagram: question → LLM (retriever selection) → embedding model (question as vector) → vector DB A or vector DB B → 0-N search results + question → LLM → answer]

Slide 13

Demo: Dynamic Retriever Selection with LLM

Slide 14

Embedding Model

Slide 15

Advanced RAG (recap)
Best source determination before the search
[Diagram: question → LLM (retriever selection) → embedding model (question as vector) → vector DB A or vector DB B → 0-N search results + question → LLM → answer]

Slide 16

Advanced RAG w/ Semantic Router
Best source determination before the search
[Diagram: question → embedding model (retriever selection via Semantic Router) → embedding model (question as vector) → vector DB A or vector DB B → 0-N search results + question → LLM → answer]

Slide 17

Demo: Semantic Router with RAG

Slide 18

Turbo 🐌
LLM as Router

Slide 19

Turbo 🚀
Semantic Router with remote embedding model

Slide 20

Demo: Semantic Router running locally

Slide 21

Turbo 🚀
Semantic Router with local embedding model

Slide 22

Speed & Budget in Numbers
- SR Remote is 3.4 times faster than the LLM router (0.62 s vs 0.18 s)
- SR Local is 7.75 times faster than the LLM router (0.62 s vs 0.08 s)
- SR Remote is 30 times cheaper than the LLM router ($0.60 vs $0.02)
- SR Local is 60 times cheaper than the LLM router ($0.60 vs $0.01)
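A quick sanity check of the quoted factors, taking the latencies and costs from the slide as given:

```python
# Speed-up and cost factors: LLM router vs Semantic Router (SR),
# remote and local, using the figures from the slide.
llm_s, sr_remote_s, sr_local_s = 0.62, 0.18, 0.08      # seconds
llm_cost, sr_remote_cost, sr_local_cost = 0.60, 0.02, 0.01  # dollars

speedup_remote = llm_s / sr_remote_s        # ~3.4x
speedup_local = llm_s / sr_local_s          # 7.75x
savings_remote = llm_cost / sr_remote_cost  # 30x
savings_local = llm_cost / sr_local_cost    # 60x
```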

Slide 23

Yes, please!
- Safety
- Speed
- Budget

Slide 24

Don't forget to rate the talk! Your feedback counts.

Slide 25

Thank you! Any questions?
Marco Frodl, Principal Consultant for Generative AI
@marcofrodl