
SCARY STORIES ABOUT AI GONE WRONG


As artificial intelligence becomes a bigger part of our lives, it’s time to examine the dangers. The negative impacts of failing to evaluate ethical consequences are easy to see in retrospect, but often much harder to anticipate when first developing intelligent systems. In this talk we will examine three kinds of examples of AI applied to real systems: a humorous case where the lack of effective human supervision prevented the system from doing its job altogether, cases where failing to correct early problems led to unintended (and icky) consequences, and an effectively human-led AI process where expertise enabled the engineers involved to correct the system before it got off track.

It can be easy to blame artificial intelligence when outcomes are bad, but ultimately we are responsible for guiding these systems according to our values. I will provide key questions that we as engineers can ask to avoid nasty surprises: in what situations will this system be forced to make hard decisions? What should its priorities be? How can we best validate its results along the way? Attendees will leave with a better understanding of the limitations and strengths of machine learning, and of how intelligent systems, when left unattended, can reach unforeseen conclusions.

Amanda Sopkin

August 07, 2019

Transcript

  1. Agenda Introduction to AI Funny examples of AI gone wrong

    Scary examples of AI gone wrong How to NOT go wrong
  2. “Machine learning”: the use of algorithms to parse data, learn

    from that data, and make informed decisions based on what they have learned
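The "parse data, learn from it, decide" loop in the definition above can be sketched with one of the simplest possible learners, a 1-nearest-neighbor classifier. This is an illustrative toy, not anything from the talk; the function names and the "safe"/"risky" labels are invented for the example.

```python
# Minimal sketch of the machine-learning loop: parse labeled data,
# "learn" from it, and make a decision about a new point.
# 1-nearest-neighbor: the model is just the memorized training set.

def train(examples):
    """Learning, in this toy, is memorizing (features, label) pairs."""
    return list(examples)

def predict(model, point):
    """Decide by copying the label of the closest stored example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(model, key=lambda ex: sq_dist(ex[0], point))
    return label

# Toy training data: (features, label)
data = [((0.0, 0.0), "safe"), ((1.0, 1.0), "risky")]
model = train(data)
print(predict(model, (0.1, 0.2)))  # nearest to (0, 0) -> "safe"
```

Even this trivial learner shows where things can go wrong: its decisions are only as good as the examples it memorized, which is the thread running through the stories that follow.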
  3. Out of 2,830 startups in Europe that were classified as

    being AI companies, only 1,580 accurately fit that description, according to the eye-opening stat on page 99 of a new report from MMC, a London-based venture capital firm. In many cases the label, which refers to computer systems that can perform tasks normally requiring human intelligence, was simply wrong.
  4. AI Flops: tay.ai built by "mining relevant public data" and

    combining that with input from editorial staff, "including improvisational comedians."
  5. AI Flops: Google Photos Reddit user MalletsDarker posted three photos

    taken at a ski resort: two were landscapes, the other a shot of his friend
  6. "No hard feelings on my part. I've always had very

    small eyes and facial recognition technology is relatively new and unsophisticated" - Lee
  7. penalized resumes that included the word “women’s,” as in “women’s

    chess club captain,” and downgraded graduates of two all-women’s colleges
  8. AI Scary Stories: Watson’s Oncology Advisor “Physicians like it. Physicians

    have said to me, if I took it away now, I’d have a revolt,” Deborah DiSanzo, general manager of IBM Watson Health, June 2017
  9. IBM published multiple studies demonstrating that Watson would achieve a

    high level of “concordance” with the treatment recommendations of oncologists
  10. “This product is a piece of s---. We bought it

    for marketing and with hopes that you would achieve the vision. We can’t use it for most cases.” - Doctor at Jupiter Hospital
  11. Watson suggested that doctors give a cancer patient with severe

    bleeding a drug that could worsen the bleeding.
  12. The Mill Avenue collision, which killed 49-year-old Elaine Herzberg as

    she walked across the street midblock, was the first fatal crash between a self-driving car and a pedestrian.
  13. Musts for driverless cars - Plan for failure - Correct

    early and often - Supervise heavily - Consider the potential for harm @amandasopkin
  14. AI Scary Stories: Faceception “Gilboa envisions governments considering his findings

    along with other sources to better identify terrorists.”
  15. AI Scary Stories: Beauty.AI The machine’s algorithm was supposed to

    examine facial symmetry and identify wrinkles and blemishes in order to find the contestants who most embodied “human beauty.”
  16. The algorithm didn’t favor women with dark skin. Six thousand

    people from 100 countries around the world submitted their photos, and 44 winners were later announced; only one of them had dark skin.
  17. [Chart slide: plot with x and y axes; no further text]

  18. IBM’s Million Faces The first of its kind available to

    the global research community, DiF provides a dataset of annotations of 1 million human facial images. Using publicly available images from the YFCC-100M Creative Commons data set, we annotated the faces using 10 well-established and independent coding schemes from the scientific literature. The coding schemes principally include objective measures of human faces, such as craniofacial features, as well as more subjective annotations, such as human-labeled predictions of age and gender.
  19. “The Justice Department’s National Institute of Corrections now encourages the

    use of such combined assessments at every stage of the criminal justice process.”
  20. AI Scary Stories: Social Media Engagement “when left unchecked, people

    will engage disproportionately with more sensationalist and provocative content”
  21. “she held over 4,000 conversations with 700 users and was

    able to resolve the majority of those queries independently, allowing employees to get consistent support without delay”, Järborg adds.
  22. “Everything is just where I need it. I don’t have

    to lift up the heavy parts,” says Jürgen Heidemann, who has worked at SEW for 40 years, since he was 18. “This is more satisfying because I am making the whole system. I only did one part of the process in the old line.”
  23. What went well? - Potential for harm considered - Corrected

    early and often - Supervised heavily @amandasopkin
  24. “Using AI, the team will be able to optimize its

    climate recipes for multiple factors, including taste, cost, and sustainability, and create recipes for growing a myriad of crops.”
  25. What went well? - Open source (easy corrections) - Lots

    of (real) data - Increases “experimental throughput” @amandasopkin
  26. Wrapping up... - Take AI seriously - Be a hover

    parent to your AI - Humans are often not the best source of truth @amandasopkin