Slide 11
Slide 11 text
▪ Model issues
▪ Biases, Hallucinations, Backdoored model
▪ User as attacker
▪ Jailbreaks, direct prompt injections, prompt extraction
▪ DAN (do anything now), Denial of service
▪ Third party attacker
▪ Indirect prompt injection, data exfiltration, request forgery
LLMs sicher in die Schranken weisen
Problems / Threats