Slide 18
Optimizing RAG
Human evals (👍🏻 / 👎🏼)
RAGAS metrics
Faithfulness - the retrieved context supports and justifies the generated answer
Context Relevance - context is focused, with little to no irrelevant information
Answer Relevance - the answer addresses the actual question
ragas = Langchain::Evals::Ragas::Main.new(llm: llm)
ragas.score(answer: "", question: "", context: "")
#=> {
#     ragas_score: 0.6601257446503674,
#     answer_relevance_score: 0.9573145866787608,
#     context_relevance_score: 0.6666666666666666,
#     faithfulness_score: 0.5
#   }
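The composite ragas_score is the harmonic mean of the three component metrics, which punishes any single weak dimension rather than averaging it away. A minimal Ruby sketch, reproducing the composite score from the example output above (the metric values are taken from that output; the aggregation formula is an assumption based on how RAGAS describes its composite score):

```ruby
# Component scores from the example output above.
scores = {
  answer_relevance_score: 0.9573145866787608,
  context_relevance_score: 0.6666666666666666,
  faithfulness_score: 0.5
}

# Harmonic mean: n divided by the sum of reciprocals.
# A single low score (here faithfulness = 0.5) drags the
# composite well below the arithmetic mean (~0.71).
ragas_score = scores.size / scores.values.sum { |v| 1.0 / v }

puts ragas_score #=> 0.6601257446503674
```

This matches the ragas_score in the output above, so improving the weakest metric (faithfulness here) moves the composite the most.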