and you want to fine-tune your model.

• You collect user photos and prompts.
• You want it to feel personal, unique, and better than the base model.
• But what goes into that training data?

The model learns what you feed it - be careful what you teach.

Risks:
• Poisoned samples can make the model behave strangely or leak data.
• User-submitted content may contain bias, toxicity, or hidden trigger phrases.
• Data leaks: PII in selfies, text, or metadata if not cleaned.

Recommendations:
• Validate and sanitize training data before it enters the pipeline.
• Anonymize personal user content.
• Use dedicated filters for toxicity and PII.
• Monitor the fine-tuning feedback loop.
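The sanitization and PII-filtering recommendations above can be sketched as a small pre-training filter. This is a minimal illustration, not a production detector: the regexes, the `redact_pii`/`sanitize_samples` helper names, and the blocklist phrase are all assumptions for the example, and real pipelines should use dedicated PII and toxicity classifiers instead of hand-rolled patterns.

```python
import re

# Illustrative-only patterns; they will miss many real-world PII forms.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace obvious email addresses and phone numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

def sanitize_samples(samples, blocklist=("ignore previous instructions",)):
    """Drop samples containing blocklisted trigger phrases; redact PII in the rest."""
    clean = []
    for s in samples:
        # Crude poisoning check: reject samples with known trigger phrases.
        if any(phrase in s.lower() for phrase in blocklist):
            continue
        clean.append(redact_pii(s))
    return clean
```

Running this over user submissions before fine-tuning covers two of the points at once: obvious PII is masked, and samples carrying a known trigger phrase never reach the training set. The toxicity and feedback-loop monitoring steps would layer on top with dedicated tooling.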