Abhiram Ravikumar | @abhi12ravi
RAG vs Fine-Tuning
Training data
• Fine-tuning requires task-specific labelled training examples
• Collecting this data and running training adds time and cost
• RAG relies on a pre-trained LLM plus external knowledge bases
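The RAG side of this contrast can be sketched in a few lines: retrieve a relevant passage from an external knowledge base at query time and prepend it to the prompt, with no training step. The names here (`knowledge_base`, `retrieve`, `build_prompt`) and the word-overlap scoring are illustrative assumptions, not any particular library's API.

```python
# Minimal RAG sketch: pick the knowledge-base passage that best matches
# the query, then build an augmented prompt for a frozen LLM.
knowledge_base = [
    "RAG augments an LLM with passages retrieved at query time.",
    "Fine-tuning updates the weights of a pre-trained model.",
]

def retrieve(query: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the query."""
    q = set(query.lower().split())
    return max(passages, key=lambda p: len(q & set(p.lower().split())))

def build_prompt(query: str) -> str:
    # The LLM itself is untouched; only the prompt gains retrieved context.
    context = retrieve(query, knowledge_base)
    return f"Context: {context}\nQuestion: {query}"

print(build_prompt("What does RAG retrieve?"))
```

Real systems replace the word-overlap score with embedding similarity over a vector index, but the shape is the same: retrieval happens per query, training data is not required.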
Adaptability
• With RAG the LLM stays general-purpose; fine-tuning makes the LLM more
specialized and tailored to specific tasks
Model Architecture
• In RAG, the LLM architecture and weights remain unchanged; in fine-tuning,
the parameters of the pre-trained LLM are modified
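The architectural contrast above can be made concrete with a toy gradient step: fine-tuning produces new parameter values, while the pre-trained ones are what RAG keeps using. The single weight, squared-error loss, and `sgd_step` helper are made-up illustrations, not a real training loop.

```python
# Toy illustration: fine-tuning changes model parameters via gradient
# descent; RAG would keep w_pretrained frozen and only change the input.
def sgd_step(w: float, x: float, y: float, lr: float = 0.1) -> float:
    """One SGD step on the loss (w*x - y)**2."""
    grad = 2 * (w * x - y) * x
    return w - lr * grad

w_pretrained = 1.0
w_finetuned = sgd_step(w_pretrained, x=2.0, y=5.0)
print(w_pretrained, w_finetuned)  # the weight moves: 1.0 -> 2.2
```

Scaled up to billions of parameters, this update step is the cost that the "Training data" bullets refer to; RAG avoids it entirely.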