
VCO-AI_Act_in_Motion___Towards_Responsible___Ethical_AI.pptx.pdf

Marketing OGZ
September 17, 2025

Transcript

  1. AI Act in Motion – Towards Responsible & Ethical AI

    MICHELLE FISSER AND MENNO WEIJ DATA EXPO 2025
  2. INTRODUCTION

    Michelle Fisser
    • >22 years of experience as Compliance Officer / Professional
    • Current: Advisor, Training, interim Head of Compliance, Entrepreneur
    • Chairwoman VCO (non-profit): community for professionals with a focus on Ethics, Integrity & Compliance
    • Founder of Compliance in Motion
    • Co-founder of House of ESG
    Menno Weij
    • Tech lawyer since 1998
    • Co-founder of Vraaghugo / DOCbldr (legaltech)
    • Commissioner 'Security of Things' Fund
    • Editorial board member of a Dutch journal on internet law
    • Chairman of the Netherlands Association for Information Technology and Law
    • "Friend" of Business News Radio
  3. DIGITAL REGULATIONS IN THE COMPLIANCE JUNGLE

    Third parties, Competitive practices, Corporate governance, Fraud, Corruption, Workplace health / safety, Market conduct, Social media, Product quality / liability, ESG, International dealings / trade, Cyber security, Financial reporting, Modern slavery, Data privacy, Financial economic crime, Labour and employment, Market abuse, GDPR, AI Act, Sanctions, AML6, CSDDD, CSRD, eIDAS, DSA, DMA
    © EU STORM IN THE COMPLIANCE JUNGLE – Compliance in Motion
  4. AI Act: risk-based approach

    ▪ Prohibited AI practices (art. 5 AI Act)
    ▪ Strict requirements for high-risk AI systems (art. 6 AI Act et seq.)
    ▪ Transparency obligations (art. 50 AI Act)
    ▪ Voluntary codes of conduct (art. 95 AI Act)
    ▪ General-purpose AI models: separate requirements > specific transparency and disclosure obligations (Chapter V AI Act)
    Risk: the combination of the probability of an occurrence of harm and the severity of that harm (art. 3(2) AI Act)
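As an aside (not part of the deck itself): the tiered structure above can be summarised in a small lookup table, sketched below in Python. The tier names, the obligations_for helper and the example call are illustrative assumptions; classifying a real system into a tier requires a legal assessment (e.g. against art. 6 and Annex III), not a lookup.

# Illustrative sketch only: the AI Act risk tiers as cited on the slide, paraphrased.
# Tier names and the obligations_for helper are hypothetical, for illustration only.
RISK_TIERS = {
    "prohibited_practice": {"article": "art. 5 AI Act", "consequence": "practice is banned"},
    "high_risk": {"article": "art. 6 AI Act et seq.", "consequence": "strict requirements"},
    "limited_risk": {"article": "art. 50 AI Act", "consequence": "transparency obligations"},
    "minimal_risk": {"article": "art. 95 AI Act", "consequence": "voluntary codes of conduct"},
}

def obligations_for(tier: str) -> str:
    """Return the slide's one-line summary for a (pre-assessed) risk tier."""
    info = RISK_TIERS.get(tier)
    if info is None:
        raise ValueError(f"unknown tier: {tier!r}")
    return f"{info['consequence']} ({info['article']})"

print(obligations_for("high_risk"))  # strict requirements (art. 6 AI Act et seq.)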
  5. General-purpose AI models (GPAI)

    • GPAI: models that can be used to perform a wide range of tasks (art. 3(63) AI Act)
    • A GPAI model has "systemic risk" if it has high-impact capabilities; it is presumed to have high-impact capabilities if the cumulative computation used for its training, measured in floating-point operations, is greater than 10^25 (rec. 65 AI Act)
    Obligations for GPAI without systemic risk (art. 53 AI Act):
    ▪ Maintaining technical documentation, including information on energy consumption of the model
    ▪ Making information available to downstream providers who integrate the GPAI model into their AI systems
    ▪ Complying with EU copyright law
    ▪ Providing summaries of training data
    Obligations for GPAI with systemic risk (art. 55 AI Act): same obligations as for GPAI without systemic risk, plus:
    ▪ Assessing model performance, including conducting adversarial testing of the model (also known as 'red-teaming')
    ▪ Assessing and mitigating systemic risks
    ▪ Documenting and reporting serious incidents to the AI Office, including the corrective action(s) taken
    ▪ Ensuring security and physical protections are in place
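A further aside (an assumption, not from the deck or the AI Act text): the 10^25 FLOP presumption above can be made concrete with the common rule of thumb that dense-transformer training costs roughly 6 FLOPs per parameter per training token. The sketch below, with made-up numbers and hypothetical function names, simply compares such an estimate against the threshold.

# Illustrative sketch only. The "6 * parameters * tokens" estimate is a common
# rule of thumb for transformer training compute, not something defined in the
# AI Act; the threshold itself (10^25 FLOPs) is the presumption cited on the slide.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens

def presumed_high_impact(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate exceeds the 10^25 FLOP presumption threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical example: a 70e9-parameter model trained on 15e12 tokens (~6.3e24 FLOPs).
print(presumed_high_impact(70e9, 15e12))   # False: below the presumption threshold
# A hypothetical 400e9-parameter model on the same data (~3.6e25 FLOPs) would cross it.
print(presumed_high_impact(400e9, 15e12))  # True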
  6. Provider and deployer

    Provider – art. 3(3) AI Act
    ▪ Party that develops an AI system or GPAI model (or has one developed)
    ▪ To be put on the market or put into use (whether for payment or free of charge)
    ▪ Under its own name or trademark
    Deployer – art. 3(4) AI Act
    ▪ Party that uses an AI system for specific purposes
    ▪ The AI system is considered to be 'under the authority of the deployer', except when the system is used for personal, non-professional activities
    ▪ Called 'users' in previous versions of the AI Act
    Shift from deployer to provider – art. 25(1) AI Act
    ▪ Deployers can become providers if they:
      • Put their name or trademark on a high-risk AI system (sub 1a)
      • Make a substantial modification to a high-risk AI system (sub 1b)
      • Modify the intended purpose of a non-high-risk AI system, including a general-purpose AI system, in such a manner that the AI system becomes high-risk AI (sub 1c)
  7. High-risk systems

    ▪ Safety components of products
    ▪ Critical infrastructures (e.g. transport) that could put the life and health of citizens at risk
    ▪ Educational or vocational training, which may determine access to education and the professional course of someone's life (e.g. scoring of exams)
    ▪ Employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures)
    ▪ Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan)
    ▪ Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of …)
  8. WHAT IS ETHICAL?

    Moral: right versus wrong
    Professional standards
    Values-based
    Fair
    Responsible
    Philosopher Immanuel Kant: focus on duty and moral rules rather than consequences
  9. RESPONSIBLE AI: ETHICAL BEHAVIOR

    Prompt 05/09: 'Please draft a visual of Responsible AI where Ethics is taken into account, not limited to the machine but also to the human feeding the machine'
  10. HUMAN OVERSIGHT: ETHICAL BOARD

    HOW TO: TRANSLATION TO PRACTICE
    GOVERNANCE: MISSION, VISION, RESPONSIBLE AI POLICY
    FRAMEWORK: PRINCIPLES AND MINIMUM STANDARDS (FAIRNESS, NON-DISCRIMINATION) – HOW TO CONTROL FRAMEWORK
    MONITORING AND REPORTING: TRANSPARENT AND EXPLAINABLE
    TRAINING AND AWARENESS
    HUMAN OVERSIGHT AND AUTONOMY: ETHICAL BOARD
  11. KEY TAKE-AWAYS + FREE TOOL

    KEEP IT SIMPLE: DO NOT UNDERESTIMATE THE COMPLEXITY
    BEST PRACTICES: https://www.microsoft.com/en/ai/tools-practices
    RESPONSIBLE AI AND ETHICS ARE INTERTWINED
    WHAT DOES THE AI ACT MEAN FOR ME?