Slide 1

Slide 1 text

When Machines Judge People: How Algorithms Impact Societies and Education
Ralph Müller-Eiselt, Bertelsmann Foundation
Dec 10, 2017

Slide 2

Slide 2 text

Education · Traffic · Work · Legal
AI is no longer science fiction…
…but common, everyday reality for every individual and the public sphere.
How do we ensure that AI serves society?

Slide 3

Slide 3 text

Example: In New York City, people constantly interact with algorithms in the public sphere, and citizens successfully demanded an algorithmic transparency law.
In NYC, algorithms decide…
• which high school children will attend
• how teachers are evaluated (and when they are promoted or fired)
• which prisoners are released early
• where police squads patrol and how often
• which buildings are checked for safety hazards

Slide 4

Slide 4 text

Technology provides (at least) six potential added values for education:
• DEMOCRATIZATION: more access, for the gifted & motivated
• PERSONALIZATION: individualized learning for all
• GAMIFICATION: overcoming motivational barriers
• INTERACTION: WEQ beats IQ / the power of peer-to-peer learning
• ORIENTATION: guidance through the jungle of opportunities
• MATCHING: making education more valuable

Slide 5

Slide 5 text

…and this relies largely on algorithms:
DEMOCRATIZATION · PERSONALIZATION · GAMIFICATION · INTERACTION · ORIENTATION · MATCHING

Slide 6

Slide 6 text

Algorithms scale human decisions: WE determine whether they are catalysts for strengthening social equality or for weakening it.
SCENARIO: Better opportunities for all
+ Personalized content
+ Better access to quality education
+ Matching job offers
SCENARIO: Increasing social inequality
- Targeting weak "customers"
- Standardized discrimination
- Large-scale labor market exclusion

Slide 7

Slide 7 text

Support or Select?
Derive personal characteristics from (online) behavior.
Algorithmic career advisor (ASU, Tennessee, …)
• Identify individual students' strengths and weaknesses
• Offer information for better choices
• Flag individuals for needed support
Targeted advertisements (some for-profit universities)
• Predatory ads to identify & attract the vulnerable
• Offer inadequate choices
• Flag individuals as targets
An algorithm can only be as good as the ethical norms you put in.

Slide 8

Slide 8 text

Democratize or Discriminate?
Match students' demand with schools.
Algorithmic distribution based on student wishes (NYC)
• Distribute students to schools according to their wishes & potential, regardless of race, religion, etc.
• Transparent matching process
Algorithmic distribution based on student status (France)
• Distribute in part by ZIP code, thereby implicitly assessing socio-economic status
• Secret criteria make the process opaque
An algorithm can only be as good as its (unintended) consequences, and is only accepted if transparency is ensured.
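The transparent matching process mentioned for NYC belongs to the family of deferred-acceptance mechanisms (Gale-Shapley). The sketch below is a minimal, illustrative implementation of student-proposing deferred acceptance; the names and preference data are invented for demonstration, and the real NYC system handles priorities, tie-breaking, and many more schools and seats.

```python
# Minimal sketch of student-proposing deferred acceptance (Gale-Shapley).
# Illustrative only: toy data, no tie-breaking or priority classes.

def deferred_acceptance(student_prefs, school_prefs, capacity):
    """student_prefs: {student: [schools, best first]}
       school_prefs:  {school: [students, best first]}
       capacity:      {school: number of seats}"""
    # Precompute each school's ranking of students for fast comparison.
    rank = {s: {st: i for i, st in enumerate(prefs)}
            for s, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}  # next school to propose to
    tentative = {s: [] for s in school_prefs}      # students each school holds
    free = list(student_prefs)                     # students not yet held
    while free:
        st = free.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue                               # list exhausted: unmatched
        school = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        # School tentatively accepts, then rejects beyond capacity,
        # keeping only the students it ranks highest.
        tentative[school].append(st)
        tentative[school].sort(key=lambda x: rank[school][x])
        while len(tentative[school]) > capacity[school]:
            free.append(tentative[school].pop())   # reject the worst held
    return {st: s for s, held in tentative.items() for st in held}
```

Because rejections are only tentative until the process settles, the result is a stable match: no student and school would both prefer each other over their assigned outcome, which is one reason such mechanisms can be defended publicly.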

Slide 9

Slide 9 text

Empower or Exclude?
Match employee skills to job requirements.
Competency-based recommendation
• Analyze real competencies instead of paper-based degrees
• Offer new job perspectives
• Show chances for further qualification
Algorithmic monoculture
• Assessment according to monopolistic patterns
• Effective exclusion from the labor market
• Denial of second & third chances
An algorithmic system can only be as good as the diversity it embraces.

Slide 10

Slide 10 text

Lesson learned: Even well-designed algorithms can produce bad results.
• Wrong intention: the algorithm's purpose contradicts ethical values
• Unintended consequences: the algorithm's goal adheres to ethical norms, but its outcomes don't
• One-sided feedback: self-fulfilling prophecies; negative predictions cannot be falsified
• Lack of diversity: the user has too few alternatives to get a different choice & assessment
• …

Slide 11

Slide 11 text

Cause for concern?

Slide 12

Slide 12 text

Artificial intelligence is not God-given; it is designed by human beings and can therefore be shaped.

Slide 13

Slide 13 text

Get in touch: [email protected] · @bildungsmann · @algoethik · www.ethicsofalgorithms.org