
When Machines Judge People

Carina
December 15, 2017


Transcript

  1. When Machines Judge People: How Algorithms Impact Societies and Education
     Ralph Müller-Eiselt, Bertelsmann Foundation, Dec 10, 2017
  2. AI is no longer science fiction…
     …but common, everyday reality for every individual and the public
     sphere – in education, traffic, work, and the legal system.
     How do we ensure that AI serves society?
  3. Example: In New York City, people constantly interact with algorithms
     in the public sphere – and citizens successfully demanded a
     transparency law. In NYC, algorithms decide…
     • …which high school children will attend
     • …how teachers are evaluated (and when they are promoted or fired)
     • …which prisoners are released early
     • …where police squads patrol, and how often
     • …which buildings are checked for safety hazards
  4. Technology provides (at least) six potential added values for education:
     • DEMOCRATIZATION: more access – for the gifted & motivated
     • PERSONALIZATION: individualized learning for all
     • GAMIFICATION: overcoming motivational barriers
     • INTERACTION: WEQ beats IQ – the power of peer-to-peer learning
     • ORIENTATION: guidance through the jungle of opportunities
     • MATCHING: making education more valuable
  5. …and relies largely on algorithms: DEMOCRATIZATION, PERSONALIZATION,
     GAMIFICATION, INTERACTION, ORIENTATION, MATCHING.
  6. Algorithms scale human decisions: WE determine whether they are
     catalysts for strengthening social equality – or for weakening it.
     SCENARIO: Better opportunities for all
     + Personalized content
     + Better access to quality education
     + Matching job offers
     SCENARIO: Increasing social inequality
     - Targeting weak "customers"
     - Standardized discrimination
     - Large-scale labor market exclusion
  7. Support or Select? Deriving personal characteristics from (online)
     behavior.
     Algorithmic career advisors (ASU, Tennessee, …):
     • Identify individual students' strengths and weaknesses
     • Offer information for better choices
     • Flag individuals for needed support
     Targeted advertisements (some for-profit universities):
     • Predatory ads to identify & attract the vulnerable
     • Offer inadequate choices
     • Flag individuals as targets
     An algorithm can only be as good as the ethical norms you put in.
  8. Democratize or Discriminate? Matching student demand with schools.
     Algorithmic distribution based on student wishes (NYC):
     • Distributes students to schools according to their wishes & potential,
       regardless of race, religion, etc.
     • Transparent matching process (a minimal sketch of such a matching
       algorithm follows this item)
     Algorithmic distribution based on student status (France):
     • Distributes in part by ZIP code, thereby implicitly assessing
       socio-economic status
     • Secret criteria make the process opaque
     An algorithm can only be as good as its (unintended) consequences – and
     is only accepted if transparency is ensured.
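The NYC high-school match is widely reported to be based on student-proposing
deferred acceptance (Gale-Shapley). The sketch below is a minimal, generic
version of that algorithm – not the city's actual implementation; all names
and data structures are illustrative, and it assumes every school ranks every
applicant.

    from collections import deque

    def deferred_acceptance(student_prefs, school_prefs, capacities):
        # student_prefs: student -> list of schools, most preferred first
        # school_prefs:  school  -> list of students, most preferred first
        #                (assumed complete: every school ranks every student)
        # capacities:    school  -> number of seats
        rank = {sc: {st: i for i, st in enumerate(prefs)}
                for sc, prefs in school_prefs.items()}
        next_choice = {st: 0 for st in student_prefs}  # next school to try
        held = {sc: [] for sc in school_prefs}         # tentative admits
        free = deque(student_prefs)                    # students still proposing

        while free:
            st = free.popleft()
            if next_choice[st] >= len(student_prefs[st]):
                continue                               # list exhausted: unmatched
            sc = student_prefs[st][next_choice[st]]
            next_choice[st] += 1
            held[sc].append(st)
            held[sc].sort(key=lambda s: rank[sc][s])   # school keeps its favorites
            while len(held[sc]) > capacities[sc]:
                free.append(held[sc].pop())            # least-preferred is bumped

        return held

Because students do the proposing, no student can get a better school by
misreporting their preferences – one reason this kind of mechanism is
considered transparent and fair.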
  9. Empower or Exclude? Matching employee skills to job requirements.
     Competency-based recommendation:
     • Analyze real competencies instead of paper-based degrees
     • Offer new job perspectives
     • Show chances for further qualification
     Algorithmic monoculture:
     • Assessment according to monopolistic patterns
     • Effective exclusion from the labor market
     • Denial of second & third chances
     An algorithmic system can only be as good as the diversity it embraces
     (see the sketch after this item).
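What "competency-based recommendation" could mean in code – a toy sketch with
invented candidates, jobs, and skills, scoring each pairing by the share of
required skills a candidate actually demonstrates:

    def skill_match(candidate_skills, required_skills):
        # Fraction of required skills the candidate demonstrably has –
        # a crude stand-in for "real competencies, not paper-based degrees".
        required = set(required_skills)
        return len(required & set(candidate_skills)) / len(required)

    candidates = {
        "self-taught dev": {"python", "sql", "testing"},
        "career changer":  {"sql", "statistics", "communication"},
    }
    jobs = {
        "data analyst": {"sql", "statistics", "reporting"},
        "backend dev":  {"python", "sql", "testing", "docker"},
    }

    for name, skills in candidates.items():
        best = max(jobs, key=lambda job: skill_match(skills, jobs[job]))
        print(f"{name}: best fit = {best} "
              f"(score {skill_match(skills, jobs[best]):.2f})")

The monoculture risk is visible even in this toy: if every employer ranks
applicants with the same single score, anyone who scores poorly on it is
rejected everywhere at once; diverse scoring criteria are what keep second
and third chances open.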
  10. Lesson learned: Even well-designed algorithms can produce bad results.
      • Wrong intention: the algorithm's purpose contradicts ethical values
      • Unintended consequences: the algorithm's goal adheres to ethical
        norms, but its outcomes don't
      • One-sided feedback: self-fulfilling prophecies – negative predictions
        cannot be falsified (simulated in the sketch below)
      • Lack of diversity: the user has too few alternatives to get a
        different choice & assessment
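The one-sided feedback failure mode can be made concrete with a toy simulation
(district names, rates, and the learning rule are all invented for
illustration): a model that only receives feedback from the places it already
flags can never discover that its low predictions elsewhere are wrong.

    import random

    random.seed(42)
    true_rate = {"A": 0.3, "B": 0.3}   # in reality, both districts are identical
    estimate  = {"A": 0.4, "B": 0.1}   # the model starts with a biased prior

    for day in range(1000):
        # One-sided feedback: patrols go only where the model predicts the
        # most incidents, so only that district's estimate is ever updated.
        target = max(estimate, key=estimate.get)
        observed = 1.0 if random.random() < true_rate[target] else 0.0
        estimate[target] += 0.05 * (observed - estimate[target])

    print(estimate)
    # A's estimate hovers around its true rate of 0.3; B's low estimate is
    # (almost) never tested, so the negative prediction cannot be falsified.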
  11. Artificial intelligence is not God-given; it is designed by human
      beings – and can therefore be shaped.
  12. Get in touch: [email protected] | @bildungsmann | @algoethik |
      www.ethicsofalgorithms.org