Slide 4
CORE CONCEPTS
Prompt – “programs” the model using natural-language input (see the chat completion sketch after this list)
Response – the content the model generates for that input
Fabrication – generated content that may not be grounded in fact
Base LLM – a foundation model trained on massive amounts of data
Instruction-Tuned LLM – a base model fine-tuned to follow instructions for specific tasks
Prompt Engineering – iterating on and evaluating prompts to improve response quality (sketch below)
Chat Completion – generates natural-language responses across multi-turn conversations (sketch below)
Embedding – converts text into a numeric (vector) representation the model can work with (sketch below)
Tokenization – chunks prompts into tokens, the units the model uses in its predictions (sketch below)
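To make prompt, response, and chat completion concrete, here is a minimal sketch using the OpenAI Python SDK. It assumes the `openai` package (v1+) is installed, an OPENAI_API_KEY environment variable is set, and the model name is only a placeholder; any chat-completion-capable model and provider could stand in.

```python
# Minimal sketch: a multi-turn chat completion.
# Assumption: `openai` v1+ is installed and OPENAI_API_KEY is set;
# "gpt-4o-mini" is a placeholder model name.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are a concise teaching assistant."},
    {"role": "user", "content": "Explain what a prompt is in one sentence."},
]

# Prompt -> Response: the model generates content for the conversation so far.
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
answer = response.choices[0].message.content
print(answer)

# Multi-turn: append the model's reply plus the next user turn, then call again.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "Now give an example prompt."})
followup = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(followup.choices[0].message.content)
```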
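The prompt-engineering loop of iterating and evaluating can be sketched the same way. The scoring rule below is a toy stand-in, not a real evaluation method; in practice you would use task-specific checks or human review.

```python
# Minimal sketch of the prompt-engineering loop: try prompt variants,
# evaluate the responses, keep iterating.
from openai import OpenAI

client = OpenAI()

variants = [
    "Summarize this text.",
    "Summarize this text in two sentences for a beginner audience.",
]
text = "Large language models generate text by predicting one token at a time."

for prompt in variants:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"{prompt}\n\n{text}"}],
    )
    answer = response.choices[0].message.content
    # Toy quality check: is the summary short enough? Swap in a real metric.
    print(prompt, "->", "PASS" if len(answer.split()) < 60 else "TOO LONG")
```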
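An embedding call shows what "numeric representation" means in practice. This sketch assumes the same OpenAI client as above; the embedding model name is a placeholder.

```python
# Minimal sketch: turning text into a numeric vector (an embedding).
# Assumption: same OpenAI client setup as the previous sketch.
from openai import OpenAI

client = OpenAI()

result = client.embeddings.create(
    model="text-embedding-3-small",  # placeholder embedding model name
    input="Prompt engineering is an iterative process.",
)
vector = result.data[0].embedding  # a list of floats the model can compare numerically
print(len(vector), vector[:5])
```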
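Finally, a tokenization sketch shows how a prompt is chunked before the model sees it. It assumes the `tiktoken` package; the encoding name is one used by many recent OpenAI models and is an assumption here, since other models ship their own tokenizers.

```python
# Minimal sketch: chunking a prompt into tokens.
# Assumption: `tiktoken` is installed; "cl100k_base" is one common encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Tokenization chunks prompts into tokens.")
print(tokens)                             # token ids the model predicts over
print([enc.decode([t]) for t in tokens])  # the text piece behind each id
```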