Slide 10
Slide 10 text
▪ Model issues
▪ Biases, Hallucinations, Backdoored model
▪ User as attacker
▪ Jailbreaks, direct prompt injections, prompt extraction
▪ DAN (do anything now), Denial of service
▪ Third party attacker
▪ Indirect prompt injection, data exfiltration, request forgery
Absicherung von LLM-Integrationen in Ihre Business-Apps
P