Marketing OGZ
September 17, 2025
 European_Commission-Navigating_High-Risk_AI__Practical_Guidance_and_Forthcoming_EU_Legislation_under_the_AI_Act.pdf

Transcript

  1. Dr. Rimma Dzhusupova, Global AI Solutions Lead at McDermott Inc.

    Assistant Professor, Eindhoven University of Technology (TU/e), The Netherlands. Formal member of the European Commission Plenary for developing the GenAI Code of Practice under the AI Act, published on July 10th, 2025. Mentor at AI4All.
  2. European Union AI Act: accountability for non-compliance; fines and

    penalties for violations; safety and respect for fundamental rights
  3. To whom the AI Act applies. Applicability to Providers: The EU AI Act

    applies to providers who place AI systems on the market or put them into service within the EU, regardless of whether they are established within the EU or in a third country; this includes providers of general-purpose AI models. Applicability to Users: The Act also applies to users (deployers) of AI systems within the EU; any entity using AI systems within the EU falls under the scope of the regulation. Practical Tip: Add compliance requirements as additional clauses in the contracts with the AI system supplier.
  4. Main Requirements on Transparency for High-Risk AI Systems: Fundamental

    Rights Impact Assessment (FRIA). Applies to most high-risk AI systems, except critical infrastructure safety components. Practical Tip: Integrate FRIA into risk management workflows and document all assessments for audit readiness.
  5. Practical Guidance on AI Procurement. 1) Identify all cases where

    your organisation procures AI systems, e.g. using off-the-shelf AI systems supplied by third-party contractors. 2) Check that the provider complies with the AI Act, IP legislation, GDPR, etc., e.g. regarding the collection of data, the management of data, and cybersecurity measures. 3) Define all requirements and include them as additional clauses in the contracts with the third-party contractors. 4) Create compliance checklists to verify that all documentation is provided, and monitor the contractual relationship during and after the execution of the contract.
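The four procurement steps above can be sketched as a simple checklist tracker. This is a minimal illustrative sketch only: the class names, item wording, and structure are assumptions for illustration, not a prescribed tool, and it is not legal advice.

```python
from dataclasses import dataclass, field

@dataclass
class ProcurementCheck:
    """One item on the AI procurement compliance checklist."""
    description: str
    done: bool = False

@dataclass
class SupplierDossier:
    """Tracks the four procurement steps for one third-party supplier."""
    supplier: str
    checks: list = field(default_factory=lambda: [
        ProcurementCheck("AI procurement cases identified (incl. off-the-shelf systems)"),
        ProcurementCheck("Provider compliance verified (AI Act, IP law, GDPR, cybersecurity)"),
        ProcurementCheck("Compliance requirements added as contract clauses"),
        ProcurementCheck("Documentation checklist complete; contract monitoring in place"),
    ])

    def open_items(self) -> list:
        """Return descriptions of checks not yet completed."""
        return [c.description for c in self.checks if not c.done]

# Example usage with a hypothetical supplier name:
dossier = SupplierDossier("ExampleVendor B.V.")
dossier.checks[0].done = True
print(f"{len(dossier.open_items())} open items for {dossier.supplier}")
```

A structure like this makes step 4 (ongoing monitoring during and after contract execution) auditable, since open items are explicit rather than implied.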
  6. EU AI Act: "Exceptions to the high-risk classification". Narrow Procedural

    Tasks: An AI system intended to perform a narrow procedural task may not be classified as high-risk if it does not pose a significant risk to health, safety, or fundamental rights. Improvement of Human Activity: If an AI system is designed to improve the result of a previously completed human activity, it may be exempt from high-risk classification. Pattern Detection: AI systems intended to detect decision-making patterns or deviations without replacing or influencing human assessment, provided there is proper human review, may not be considered high-risk. Preparatory Tasks: Systems performing preparatory tasks relevant to the use cases listed in Annex III, without significant risk, may also be exempt. Profiling Exception: Despite the above exceptions, any AI system that performs profiling of natural persons is always considered high-risk. These exceptions highlight scenarios where AI systems might not be classified as high-risk under the EU AI Act.
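The exception logic above can be expressed as a small decision helper, which makes the key point visible: profiling of natural persons overrides every other exception. This is an illustrative sketch only; the field names and the decision flow are assumptions for illustration and not a legal determination.

```python
from dataclasses import dataclass

@dataclass
class AnnexIIIUseCase:
    """Simplified triage inputs for an Annex III use case (assumed fields)."""
    performs_profiling: bool                    # profiling of natural persons
    narrow_procedural_task: bool                # narrow procedural task only
    improves_prior_human_activity: bool         # improves a completed human activity
    pattern_detection_with_human_review: bool   # detects patterns, human review in place
    preparatory_task_only: bool                 # preparatory to an Annex III use case
    poses_significant_risk: bool                # to health, safety, or fundamental rights

def is_high_risk(uc: AnnexIIIUseCase) -> bool:
    """Return True if the use case remains high-risk under the slide's exception logic."""
    # Profiling of natural persons is always high-risk, despite any other exception.
    if uc.performs_profiling:
        return True
    # Every exception is conditional on the absence of significant risk.
    if uc.poses_significant_risk:
        return True
    exempt = (uc.narrow_procedural_task
              or uc.improves_prior_human_activity
              or uc.pattern_detection_with_human_review
              or uc.preparatory_task_only)
    return not exempt

# Example: a system that only performs a narrow procedural task is still
# high-risk if it also profiles natural persons.
profiling_case = AnnexIIIUseCase(True, True, False, False, False, False)
print(is_high_risk(profiling_case))
```

Encoding the rules this way also documents the ordering of the checks: the profiling override is evaluated before any exemption is considered.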
  7. Banned AI Practices & Special Rules for General-Purpose Models. Unacceptable

    Risk AI Systems: Certain AI uses are completely banned in the EU, such as social scoring by governments, manipulative or exploitative systems, and real-time remote biometric identification in public spaces (with targeted exceptions). General-Purpose AI (GPAI) and Systemic Risk. Obligations for GPAI Providers: Transparency, technical documentation, incident reporting, and risk management requirements apply to general-purpose AI models, especially those considered systemically risky (e.g., large language models used at scale).
  8. Oversight: Authorities, NoBo & Whistleblowing. Market Surveillance: Enforce, inspect,

    and handle complaints about AI compliance; in the Netherlands, the Dutch Data Protection Authority and the Dutch Authority for Digital Infrastructure (RDI). Notified Body (NoBo): Certifies high-risk AI systems for conformity; reviews documentation and risk controls before and after launch. Whistleblowing: Anyone, including employees, can report breaches or risks to authorities without retaliation, as protected under the AI Act.
  9. Thank you for your attention! If you have any questions

    or would like to discuss further, please feel free to reach out. [email protected] https://www.linkedin.com/in/dr-rimma-dzhusupova-05616543/