
EU AI Act: Compliance Without Headaches

The EU AI Act is the world's first comprehensive law regulating artificial intelligence. This talk examines the origins and guiding ideas of the law and explains the risk-based approach to classifying AI applications. The focus is on practice: What changes for developers and companies? How can compliance be implemented without slowing down innovation? And where are the strategic opportunities? The talk offers orientation and concrete food for thought for a responsible, future-proof approach to AI that does not violate the EU AI Act.


Alexander Eimer

March 12, 2026

Transcript

  1. Goal of this talk You should understand what’s important when

     working with AI in the EU. From techies, for techies, therefore …
  2. What is the EU AI Act? Think GDPR but for

    AI ▪ Risk-based framework: obligations scale with potential harm ▪ Extraterritorial reach: if your AI affects EU residents, you're in scope ▪ Already active: phased rollout since August 2024
  3. EU AI Act Goals Goal: Balancing innovation with safety •

    Risk-based approach • Protect fundamental rights • Promote transparency • Promote innovation by defining legal boundaries • Harmonize rules across the EU
  4. As a techie, why should I care? "The AI Act

     is rooted in the EU Charter of Fundamental Rights — non-discrimination, human dignity, privacy. As a developer, you are part of the compliance chain." • Liability extends to developers, not just the company • Compliance = CE mark for AI = EU market access. Penalty tiers (each: whichever is higher): • PROHIBITED AI PRACTICES — €35M or 7% of global annual turnover, e.g. social scoring, biometric mass surveillance • NON-COMPLIANCE — €15M or 3% of global annual turnover, e.g. missing risk assessment, no documentation • MISLEADING REGULATORS — €7.5M or 1% of global annual turnover, e.g. false info to notified bodies or authorities
  5. Penalties In comparison, GDPR violations “only” draw fines of up to

     4% of global revenue or up to 20 million euros
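The penalty tiers above all follow one rule: the flat cap or the percentage of global annual turnover, whichever is higher. A minimal Python sketch of that arithmetic (the tier keys and the `max_fine` helper are illustrative names, not from the Act's text):

```python
# Illustrative sketch of the AI Act's fine logic: flat amount vs. percentage
# of global annual turnover, whichever is higher. Tier names are our own.

FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # e.g. social scoring
    "non_compliance":       (15_000_000, 0.03),    # e.g. missing risk assessment
    "misleading_regulators": (7_500_000, 0.01),    # false info to authorities
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation tier."""
    flat, pct = FINE_TIERS[tier]
    return max(flat, pct * global_turnover_eur)

# A company with EUR 2bn global turnover and a prohibited-practice violation:
# 7% of 2bn = 140M, which exceeds the 35M flat cap.
print(max_fine("prohibited_practices", 2_000_000_000))  # → 140000000.0
```

For small companies the flat amounts dominate; for large ones the percentage does, which is why the slide quotes both.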
  6. What defines an AI? ‘AI system’ means a machine-based system

    that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments; EU AI Act — Chapter I — Article 3 (1)
  8. Three Key AI Personas Provider (3) ‘provider’ means a natural

    or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge; Deployer (4) ‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity; Distributor (7) ‘distributor’ means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market; EU AI Act — Chapter I — Article 3
  10. AI Personas • Provider: creates or develops AI systems and brings

     them to market • Importers & Distributors: import AI systems into the EU and are part of the supply chain • Manufacturer: develops products with AI under their own brand and markets them under their own name • Deployer: uses AI systems within the organisation • Company: can take on different roles
  12. Single vs. General Purpose AI Single Purpose: specific use-case

     (unsupervised or supervised learning); examples: visual inspection, maintenance, recommendations. General Purpose: broad application (image generation, large language models (LLMs)); examples: Gemini, GPT-4, Llama.
  13. Your obligations depend on what your AI can do to

    people — not on the technology itself. The Risk-Based Approach LIMITED RISK Transparency Obligations Chatbots, AI-generated content, emotion recognition HIGH RISK Regulated AI Systems Health, employment, credit, education, border control, justice UNACCEPTABLE RISK Prohibited Systems Manipulates, exploits vulnerabilities, or enables mass surveillance MINIMAL RISK Everything Else Spam filters, recommendations, code assistants, games
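The four tiers and the slide's example use cases can be sketched as a simple lookup. This is a hypothetical illustration of the mapping, not an official classification tool; unknown use cases defaulting to minimal risk is our simplifying assumption:

```python
# Sketch: the four AI Act risk tiers and example use cases from the slide.
# The mapping and helper are illustrative, not a legal assessment.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright, no compliance path"
    HIGH = "regulated: documentation, oversight, conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "no AI Act obligations"

EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "biometric mass surveillance": RiskTier.UNACCEPTABLE,
    "cv screening for hiring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "ai-generated marketing images": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "code assistant": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    # Simplification: anything not listed falls back to minimal risk.
    tier = EXAMPLES.get(use_case.lower(), RiskTier.MINIMAL)
    return f"{tier.name}: {tier.value}"

print(obligations("CV screening for hiring"))  # prints the HIGH tier obligations
```

The point of the slide survives the simplification: the tier is keyed on the use case and its effect on people, not on the model architecture.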
  14. Definition: Systems that pose an unacceptable threat to fundamental rights.

    Banned outright → no compliance path exists. Key Triggers: ▪ Social scoring by public or private authorities ▪ Real-time biometric mass surveillance in public spaces ▪ Emotion recognition in workplace or educational settings Consequence: ▪ Do not build, deploy, or place on the market. In force since February 2025. Unacceptable Risk UNACCEPTABLE RISK Prohibited Systems
  15. Unacceptable Risk: In Practice IN THE WILD Recruitment video interview

    analysis — facial expression scoring marketed as "engagement analysis" Classification as prohibited emotion recognition is likely, but still contested in practice. ⚠ GRAY AREA YOUR CODE • Clearview AI Banned & fined across the EU for mass biometric surveillance • China's Social Credit System The reference case the AI Act was designed to prevent 〉 "HR mood tracking via keystroke patterns" → emotion recognition in workplace 〉 "Retail CCTV emotion detection" → biometric categorization in public 〉 "Benefit applicant screening via social media" → social scoring
  16. Definition: AI in high-stakes domains where errors can seriously harm

    health, safety, or fundamental rights. Not banned — but heavily regulated. Key Domains (Annex III): ▪ Employment: CV screening, hiring, performance evaluation ▪ Essential services: credit scoring, insurance, social benefits ▪ Education: admissions, grading, exam monitoring + 5 more domains: biometrics, infrastructure, law enforcement, migration, justice Key Obligations: ▪ Risk management system throughout the full lifecycle ▪ Technical documentation + automatic logging ▪ Meaningful human oversight + conformity assessment (CE marking) High Risk HIGH RISK Regulated AI Systems
  17. High Risk: In Practice IN THE WILD Medical image analysis

    — high-risk AI regulation or medical device (MDR)? Both may apply. Bank fraud detection — security tooling or essential financial service? Depends on scope. ⚠ GRAY AREA YOUR CODE • SCHUFA Automated credit scoring affecting loans & housing for millions • HireVue AI video interview analysis used in hiring at large enterprises 〉 "Loan approval recommendation model for a bank" → essential services = high-risk 〉 "AI-assisted exam proctoring system" → education & training = high-risk 〉 "ML model ranking applicants by 'cultural fit'" → employment = high-risk
  18. Definition: The risk isn't harm, it's deception. People have a

    right to know when they're talking to AI or seeing AI-generated content. Key Triggers: ▪ Chatbots and conversational AI (text or voice) ▪ AI-generated images, audio, or video Key Obligations: ▪ Disclose at the start of interaction that it's an AI ▪ Label AI-generated content as machine-generated Limited Risk LIMITED RISK Transparency Obligations
  19. Limited Risk: In Practice IN THE WILD AI-powered email auto-replies

    — when does a suggested reply become "conversational AI"? No clear guidance yet — watch for delegated acts from the European Commission. ⚠ GRAY AREA YOUR CODE • ChatGPT / Claude Must identify as AI at the start of every conversational interaction • Midjourney / DALL-E images AI-generated visuals in news or advertising must be labeled 〉 "Customer service chatbot" → must say "I'm a bot" — not just "Hi, I'm Mia!" 〉 "Marketing copy generator" → AI-generated posts need disclosure labels 〉 "Personalized AI voice messages" → must disclose AI origin
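The "must say I'm a bot" obligation has a simple implementation pattern: disclose before any other content in the first turn. A minimal sketch, assuming a hypothetical chat handler and a stubbed model call (`generate_reply` stands in for whatever LLM you actually use):

```python
# Sketch of the Article 50 disclosure pattern: the AI identifies itself at
# the start of the interaction, before any generated content is shown.
# All names here are hypothetical, not a real framework API.

AI_DISCLOSURE = (
    "Hi, I'm Mia, an AI assistant. "
    "You can ask to be transferred to a human at any time."
)

def generate_reply(message: str) -> str:
    # Stub standing in for the actual model call.
    return f"Thanks for your message: {message!r}. How can I help?"

def start_conversation(user_message: str) -> list[str]:
    """First turn of a support chat: disclosure always precedes the answer."""
    answer = generate_reply(user_message)
    return [AI_DISCLOSURE, answer]

turns = start_conversation("My order hasn't arrived.")
assert turns[0] == AI_DISCLOSURE  # disclosure is the first thing the user sees
```

Note the disclosure includes the human-escalation option; "Hi, I'm Mia!" alone, as on the slide, would not satisfy the transparency rule.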
  20. Definition: The vast majority of AI systems. No mandatory requirements

    under the AI Act. Typical systems: ▪ Spam filters, content recommendations, code assistants ▪ Games, search ranking, photo organization Important: ▪ Other laws still apply: GDPR, consumer protection, product liability ▪ ⚠ Context can shift your category — the same model deployed differently may become high-risk Minimal Risk MINIMAL RISK Everything Else
  21. Minimal Risk: In Practice IN THE WILD GitHub Copilot used

    for medical coding in a clinical workflow → could become High Risk (healthcare) Recommendation engine allocating social housing → High Risk (essential services) It's not the technology — it's the use case and deployment context. ⚡ CONTEXT SHIFTS CATEGORY YOUR CODE • Netflix / Spotify Content recommendation engines — no AI Act obligations • GitHub Copilot Code completion — minimal risk in standard developer use 〉 "AI autocomplete in your IDE" → no AI Act requirements 〉 "Spam filter in your SaaS product" → no AI Act requirements
  22. General Purpose AI (GPAI) Beside these four groups, General Purpose

     AI (GPAI) was added as ChatGPT emerged. • Transparency: provide technical documentation so downstream providers can fulfill their own obligations; includes training methodology, model capabilities & limitations, instructions for safe use, energy consumption data • Copyright Directive: comply with EU copyright law when training on protected data; includes publicly disclosing a training data summary, respecting opt-out from text & data mining, honoring rights reservations • Systemic Risk: applies when training compute exceeds 10^25 FLOPs (e.g. GPT-4, Gemini Ultra); additional obligations: adversarial testing & red-teaming, incident reporting to the EU AI Office, cybersecurity measures, model evaluation reports
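The 10^25 FLOPs systemic-risk threshold is easy to sanity-check against a training run. A sketch, where the threshold is from the Act and the rest is illustrative: the `6 * parameters * tokens` estimate is the common back-of-the-envelope rule for transformer training compute, not something the deck states, and the model sizes are hypothetical:

```python
# Sketch: checking a GPAI training run against the AI Act's systemic-risk
# compute threshold. Threshold is from the Act; the rest is illustrative.

SYSTEMIC_RISK_FLOPS = 1e25

def is_systemic_risk(training_flops: float) -> bool:
    """A GPAI model above this training compute is presumed systemic-risk."""
    return training_flops >= SYSTEMIC_RISK_FLOPS

def estimate_flops(params: float, tokens: float) -> float:
    # Common rule of thumb for dense transformer training compute.
    return 6 * params * tokens

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = estimate_flops(70e9, 15e12)   # 6.3e24, below the threshold
print(is_systemic_risk(flops))        # → False
```

So today only the very largest frontier runs cross the line, which matches the slide's examples (GPT-4, Gemini Ultra).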
  23. The AI decides something that matters for a person INDICATOR

    #1 WHAT TO DO Talk to your project manager Flag for EU AI Act review Who is affected if the AI is wrong? Is there real human review before decision? Check Annex III of the EU AI Act. WHAT TO LOOK FOR Employment & recruitment Credit & financial services Healthcare access Education & admission Law enforcement Social benefits EXAMPLE “We’re building a CV screener to pre-filter 500 applications.” → High-Risk AI, Annex III Category 4 (Employment) Raise this before writing any code.
  24. The AI sees faces, bodies, voices, or health data INDICATOR

    #2 WHAT TO DO Stop! Do not implement. Escalate immediately Is this use-case prohibited under Article 5? Has legal / compliance been consulted? What is the GDPR basis for biometric data? WHAT TO LOOK FOR Face or body recognition Emotion or sentiment detection Voice identification Health data inference Political / religious / ethnic inference EXAMPLE “Can we add face-based attendance tracking instead of badge swipes?” → May be prohibited. Article 5 + GDPR Article 9 apply. Flag before writing a single line of code.
  25. Users can’t tell they’re talking to an AI INDICATOR #3

    WHAT TO DO Raise it before shipping Transparency is a legal requirement Do users know they are talking to an AI? Is generated content labeled as such? Article 50 — EU AI Act transparency rules. WHAT TO LOOK FOR Chatbot with a human name, no disclosure AI-generated text passed off as human-written Synthetic images or audio, unlabeled No option to reach a human instead EXAMPLE “Our support chatbot is named Laura — users think she’s a real employee.” → Transparency obligations under Article 50. Add a clear AI disclosure before launch.
  26. The AI touches infrastructure that can’t be wrong INDICATOR #4

     WHAT TO DO Escalate before any prototype High-Risk by definition What happens if the AI output is wrong? Is there a certified safety case for this? Check Annex III, Category 2. WHAT TO LOOK FOR Energy, water, or transport systems Industrial control systems (SCADA, PLCs) Medical devices or clinical decisions Financial market infrastructure Emergency response systems EXAMPLE “Can we add an ML model to auto-adjust parameters in our industrial control system?” → High-Risk AI, Annex III Category 2 (Critical Infrastructure) Full compliance required. Escalate before any PoC.
  27. Agentic AI — Autonomy triggers Human Oversight INDICATOR #5 WHAT

     TO DO High autonomy in a sensitive domain prevents low-risk exemptions Do the actions touch a high-risk domain? If yes → Art. 6(3) exemption voided. High-risk. Human-in-the-loop or human-on-the-loop? Is the AI’s activity logged and auditable? Who is responsible when the AI acts wrongly? WHAT TO LOOK FOR Sends mails or messages autonomously Executes code or modifies databases Makes purchases or bookings Calls external APIs with real-world effect Without human confirmation on each consequential step EXAMPLE “Our hiring agent ranks applicants and sends rejections — no human reviews before send.” → Employment + autonomous action = high-risk. Autonomous send = Article 14 applies. Design in human-in-the-loop or human-on-the-loop before this ships.
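The five indicators above amount to a pre-development screening checklist. A hypothetical sketch of that checklist as code; the field names, helper, and flag wording are our own simplification of the slides, not an official assessment tool:

```python
# Sketch: a pre-development screening checklist summarizing the five
# indicators (slides 23-27). Illustrative only, not a legal assessment.

from dataclasses import dataclass

@dataclass
class UseCase:
    affects_people_materially: bool   # #1: employment, credit, health, ...
    processes_biometrics: bool        # #2: faces, voices, emotion, health data
    user_facing_ai: bool              # #3: chatbot or generated content
    touches_critical_infra: bool      # #4: energy, medical, SCADA, finance
    acts_autonomously: bool           # #5: sends, executes, purchases

def screen(uc: UseCase) -> list[str]:
    """Return the flags to raise before writing any code."""
    flags = []
    if uc.processes_biometrics:
        flags.append("STOP: possibly prohibited (Art. 5); escalate to legal")
    if uc.affects_people_materially or uc.touches_critical_infra:
        flags.append("High-risk candidate (Annex III); flag for AI Act review")
    if uc.acts_autonomously and uc.affects_people_materially:
        flags.append("Autonomy voids the Art. 6(3) exemption; design in human oversight (Art. 14)")
    if uc.user_facing_ai:
        flags.append("Transparency obligations (Art. 50); disclose AI origin")
    return flags or ["Minimal risk under the AI Act; GDPR and product liability still apply"]

# The CV-screener-with-auto-rejections example from slide 27:
cv_screener = UseCase(True, False, False, False, True)
for flag in screen(cv_screener):
    print(flag)
```

Running it on the slide's hiring-agent example raises both the Annex III and the Article 14 flags, which is exactly the escalation the slide asks for.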
  28. High Risk: What You Actually Have to Do DOCUMENTATION (Art.

     11 + Annex IV): technical documentation before market placement, covering: • Intended purpose & design specs • Training data & data governance • Testing & validation results • Risk management measures • Accuracy & robustness metrics. Must be kept up-to-date throughout the full lifecycle. EU DATABASE REGISTRATION (Art. 49 + Art. 71): register in the EU public database before deployment: • Provider registers the AI system • Deployer registers their use case • Information is publicly accessible. No registration = no EU market. Database operated by the European Commission. INFORM YOUR PEOPLE (Art. 26(7)): deployers must inform before putting a high-risk system in use: • Workers' representatives • Affected individual workers • Works council (where applicable). Before deployment — not after. Applies especially in HR, performance & hiring contexts.
  29. Managing directors: Dr. Josef Adersberger, Michael Stehnken, Michael Rohleder, Mario-Leander Reimer

     | Offices in München, Mainz, Rosenheim, Darmstadt QAware GmbH Aschauer Straße 30 81549 München Tel.: +49 89 232315-0 [email protected] www.qaware.de linkedin.com/company/qaware-gmbh xing.com/companies/qawaregmbh slideshare.net/qaware github.com/qaware