Slide 1

Slide 1 text

About the good and the bad of (AI) algorithms and the European AI Act
Marc Salomon, Professor of Decision Science, University of Amsterdam

Slide 2

Slide 2 text

This talk is about “the good” and “the bad” of algorithms
WHITE BOX MODELS (= explain what happens reasonably well): decision tree, linear regression
BLACK BOX MODELS (= difficult to explain): neural network
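
Not part of the original slides, but a minimal sketch of the white-box / black-box contrast, assuming Python with scikit-learn and its bundled breast-cancer dataset as a stand-in: a shallow decision tree can be dumped as human-readable rules, while a small neural network trained on the same data offers no comparable explanation.

```python
# Sketch: white-box vs black-box models (assumes scikit-learn is installed).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# White box: a shallow decision tree can be printed as if-then rules a person can read.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))
print("decision tree accuracy:", tree.score(X_test, y_test))

# Black box: the neural network may score as well or better, but its thousands of
# weights do not translate into an explanation people can follow.
nn = make_pipeline(StandardScaler(),
                   MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0))
nn.fit(X_train, y_train)
print("neural network accuracy:", nn.score(X_test, y_test))
```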

Slide 3

Slide 3 text

What is the fear? That we use (AI) algorithms that make decisions, intentionally or not, with far-reaching social, health, security and/or financial consequences

Slide 4

Slide 4 text


Slide 5

Slide 5 text

OMG, this does not happen here... The Dutch childcare benefits affair: algorithms determine who gets on the tax authorities’ blacklist, and those people may run into big financial problems

Slide 6

Slide 6 text

Harm to our democratic system: Cambridge Analytica
• Floating voters were identified using Facebook data
• They were presented with information (via Facebook and other channels) in favour of voting for Trump

Slide 7

Slide 7 text

Harm to our wallets: digital cartels

Slide 8

Slide 8 text

But... there is also another fear to take very seriously: harm due to not using AI, for instance to save lives, because applications cannot be developed under certain laws

Slide 9

Slide 9 text

A difficult balance: forbidding “the good” (= rules that are too strict) versus continuing with “the bad” (= rules that are too loose)

Slide 10

Slide 10 text

Extra complication in the discussion on algorithms: different people/cultures/ethical schools have different perspectives on “good” and “bad” (“good” and “bad” are subjective)

Slide 11

Slide 11 text

MIT Moral Machine to test people's perspectives

Slide 12

Slide 12 text

MIT Moral Machine to test people's perspectives

Slide 13

Slide 13 text

Different “schools” in ethics think differently

Slide 14

Slide 14 text

Different “schools” in ethics think differently
• Utilitarian approach: the greatest benefit for the greatest group
• Fairness or justice approach: all people should be treated the same
• Many other ethical schools...
Source: https://www.scu.edu/ethics/ethics-resources/ethical-decision-making/thinking-ethically/

Slide 15

Slide 15 text

How to deal with this?
• Different ethical schools
• Different applicable laws
• Technical limitations (math, CS, audit)

Slide 16

Slide 16 text

Mathematical limitations
• Explainability (= how well can we explain the outcome of the algorithm to people)
• Quality of the outcome of the algorithm

Slide 17

Slide 17 text

Europe’s answer: the European AI Act

Slide 18

Slide 18 text

Source: EU Guidelines for Trustworthy AI, Independent High-Level Expert Group

Slide 19

Slide 19 text

Source: EU Guidelines for Trustworthy AI, Independent High-Level Expert Group

Slide 20

Slide 20 text

Source: EU Guidelines for Trustworthy AI, Independent High-Level Expert Group

Slide 21

Slide 21 text

The Act classifies the risk of applications and domains
Source: Eve Gaumond, Lawfare Blog, June 2021

Slide 22

Slide 22 text

Hurdles with the law

Slide 23

Slide 23 text

Hurdles with the European Act
1. Many concepts are very subjective: not all rational thinkers would describe “harmful discrimination” in the same way

Slide 24

Slide 24 text

Hurdles with the European Act
1. Many concepts are very subjective: not all rational thinkers would describe “harmful discrimination” in the same way
2. Translation of the legal text into mathematics is difficult: there are different mathematical concepts for describing harmful discrimination (see the sketch below)
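
To illustrate hurdle 2, a minimal sketch with invented toy data (hypothetical hiring decisions, made-up column names): two common mathematical formalisations of discrimination, demographic parity (equal selection rates across groups) and equal opportunity (equal true-positive rates among the qualified), can disagree on the very same set of decisions.

```python
# Sketch: two mathematical definitions of "harmful discrimination" on toy data.
import pandas as pd

# Hypothetical hiring data: "group" is a protected attribute, "qualified" the
# ground truth, "selected" the algorithm's decision. All values are invented.
df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "qualified": [1,   1,   0,   0,   1,   1,   0,   0],
    "selected":  [1,   1,   0,   0,   1,   0,   1,   0],
})

for name, sub in df.groupby("group"):
    selection_rate = sub["selected"].mean()                          # demographic parity
    true_positive_rate = sub.loc[sub["qualified"] == 1, "selected"].mean()  # equal opportunity
    print(f"group {name}: selection rate = {selection_rate:.2f}, "
          f"true-positive rate = {true_positive_rate:.2f}")
```

Here both groups are selected at the same rate, so demographic parity holds, yet qualified candidates in group B are selected half as often as in group A, so equal opportunity is violated; which (if any) of these formalisations matches the legal notion of harmful discrimination is exactly the hard question.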

Slide 25

Slide 25 text

Hurdles with the European Act
1. Many concepts are very subjective: not all rational thinkers would describe “harmful discrimination” in the same way
2. Translation of the legal text into mathematics is difficult: there are different mathematical concepts for describing harmful discrimination
3. Monitoring/auditing concepts, including XAI, are not yet mature (see the sketch below)
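
To illustrate hurdle 3, a minimal sketch of one widely used XAI/auditing technique, assuming scikit-learn (the bundled breast-cancer dataset is used only as a stand-in): permutation importance shuffles one feature at a time and records how much the test score drops. It gives a rough, model-agnostic audit signal, but its known weaknesses (correlated features, instability across reruns) are part of why such auditing is not yet mature.

```python
# Sketch: auditing a black-box model with permutation importance (assumes scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and record the average drop in test accuracy;
# a large drop suggests the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean), key=lambda item: -item[1])[:5]
for name, drop in top5:
    print(f"{name}: mean accuracy drop {drop:.3f}")
```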

Slide 26

Slide 26 text

Hurdles with the European Act
1. Many concepts are very subjective: not all rational thinkers would describe “harmful discrimination” in the same way
2. Translation of the legal text into mathematics is difficult: there are different mathematical concepts for describing harmful discrimination
3. Monitoring/auditing concepts, including XAI, are not yet mature
4. Algorithms also make very positive contributions: how do we keep them?

Slide 27

Slide 27 text


Slide 28

Slide 28 text


Slide 29

Slide 29 text


Slide 30

Slide 30 text


Slide 31

Slide 31 text

Time to share some “best practices”

Slide 32

Slide 32 text

Best practices: run a communication campaign

Slide 33

Slide 33 text

Best practices: provide reliable information

Slide 34

Slide 34 text


Slide 35

Slide 35 text


Slide 36

Slide 36 text

Best practices: be careful when buying algorithms

Slide 37

Slide 37 text

What are your thoughts on trustworthy AI?
Marc Salomon, Professor of Decision Science
Program Director, MBA Big Data & Business Analytics
University of Amsterdam
[email protected]
www.linkedin.com/in/marcsalomon