An AI with an Agenda: How Our Biases Leak Into Machine Learning (NDC Minnesota 2019)

In the glorious AI-assisted future, all decisions are objective and perfect, and there’s no such thing as cognitive bias. That’s why we created AI and machine learning, right? Because humans make mistakes, and computers are perfect. Well, there’s some bad news: humans build those AIs and machine learning models, and as a result humanity’s biases and missteps can subtly work their way into them.

All hope isn’t lost, though! In this talk you’ll learn how science and statistics have already solved some of these problems and how a robust awareness of cognitive biases can help with many of the rest. Come learn what else we can do to protect ourselves from these old mistakes, because we owe it to the people who’ll rely on our algorithms to deliver the best possible intelligence!

Arthur Doler

May 08, 2019

Transcript

  1. 1.

    Arthur Doler (@arthurdoler, arthurdoler@gmail.com) – AN AI WITH AN AGENDA: How Our Biases Leak Into Machine Learning – Slides and handout: bit.ly/art-ai-with-agenda
  14. 23.

    Class I – Phantoms of False Correlation
    Class II – Specter of Biased Sample Data
    Class III – Shade of Overly-Simplistic Maximization
    Class V – The Simulation Surprise
    Class VI – Apparition of Fairness
    Class VII – The Feedback Devil
  35. 67.

    KEEP IN MIND: YOU NEED TO KNOW WHO CAN BE AFFECTED IN ORDER TO UN-BIAS
  52. 116.

    CLASS I - PHANTOMS OF FALSE CORRELATION
    - Know what question you’re asking
    - Trust conditional probability over straight correlation
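The “trust conditional probability over straight correlation” advice can be sketched with a classic Simpson’s-paradox toy. All numbers below are invented for illustration and are not from the talk; the point is only that the pooled rate and the group-conditional rates can point in opposite directions.

```python
# Invented toy data: each record is (group, treated, recovered).
records = (
    [("A", 1, 1)] * 80 + [("A", 1, 0)] * 20 +   # Group A, treated: 80/100 recover
    [("A", 0, 1)] * 9  + [("A", 0, 0)] * 1  +   # Group A, untreated: 9/10 recover
    [("B", 1, 1)] * 2  + [("B", 1, 0)] * 8  +   # Group B, treated: 2/10 recover
    [("B", 0, 1)] * 30 + [("B", 0, 0)] * 70     # Group B, untreated: 30/100 recover
)

def recovery_rate(rows, treated, group=None):
    """Recovery rate among records with the given treatment, optionally
    conditioned on group membership."""
    subset = [r for r in rows
              if r[1] == treated and (group is None or r[0] == group)]
    return sum(r[2] for r in subset) / len(subset)

# "Straight correlation" (pooled) view: treatment looks strongly helpful.
print(recovery_rate(records, treated=1))   # 0.745...
print(recovery_rate(records, treated=0))   # 0.354...

# Conditional view: within *each* group, the untreated actually do better.
for g in ("A", "B"):
    print(g, recovery_rate(records, 1, group=g), recovery_rate(records, 0, group=g))
```

The pooled numbers answer “did treated people recover more often?”, while the conditional numbers answer “did treatment help, holding group constant?” – knowing which question you’re asking decides which rate to trust.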
  53. 117.

    CLASS II - SPECTER OF BIASED SAMPLE DATA
    - Recognize data is biased even at rest
    - Make sure your sample set is crafted properly
    - Excise problematic predictors, but beware their shadow columns
    - Build a learning system that can incorporate false positives and false negatives as you find them
    - Try using adversarial techniques to detect bias
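One concrete way to act on “data is biased even at rest” is to measure group representation in the sample and reweight before training. This is a minimal sketch with an invented, deliberately skewed sample; the inverse-frequency weighting shown is one common correction, not the talk’s prescribed method.

```python
from collections import Counter

# Invented skewed sample: one group is 9x over-represented "at rest".
sample = ["group_a"] * 900 + ["group_b"] * 100

counts = Counter(sample)
print(counts)  # group_a heavily over-represented

# Simple correction: weight each record inversely to its group's frequency
# so every group contributes equal total mass to the training loss.
n, k = len(sample), len(counts)
weights = {g: n / (k * c) for g, c in counts.items()}

# Each group's weighted mass is now n/k (= 500 here), regardless of skew.
for g in counts:
    print(g, weights[g], weights[g] * counts[g])
```

The same check belongs in a monitoring step too: the false positives and false negatives you discover later are exactly the records whose weights (or labels) this kind of pipeline should let you update.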
  54. 118.

    CLASS III - SHADE OF OVERLY-SIMPLISTIC MAXIMIZATION
    - Remember models tell you what was, not what should be
    - Try combining dependent columns and predicting that
    - Try complex algorithms that allow more flexible reinforcement
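“Combining dependent columns and predicting that” can be illustrated with an invented content-ranking example: maximizing click rate alone rewards clickbait, while folding a dependent column (dwell time) into one blended target changes what wins. The articles, numbers, and the 50/50 blend are all hypothetical.

```python
# Invented stats per article: click rate and average dwell time in seconds.
articles = {
    "clickbait":   {"click_rate": 0.30, "avg_dwell_s": 5},
    "solid_piece": {"click_rate": 0.20, "avg_dwell_s": 240},
    "dull_report": {"click_rate": 0.05, "avg_dwell_s": 60},
}

def combined_target(stats, alpha=0.5):
    """Blend click rate with dwell time normalized to [0, 1] (capped at 5 min).
    This combined value is what the model would be trained to predict."""
    dwell_score = min(stats["avg_dwell_s"] / 300, 1.0)
    return alpha * stats["click_rate"] + (1 - alpha) * dwell_score

by_clicks = max(articles, key=lambda a: articles[a]["click_rate"])
by_combined = max(articles, key=lambda a: combined_target(articles[a]))
print(by_clicks)    # clickbait
print(by_combined)  # solid_piece
```

The single-column objective ranks the clickbait first; the combined target does not, because dwell time carries the signal the raw click count misses.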
  55. 119.

    CLASS V – THE SIMULATION SURPRISE
    - Don’t confuse the map with the territory
    - Always reality-check solutions from simulations
  56. 120.

    CLASS VI - APPARITION OF FAIRNESS
    - Consider predictive accuracy as a resource to be allocated
    - Possibly seek external auditing of results, or at least review by another team
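“Predictive accuracy as a resource to be allocated” becomes visible once you break accuracy out per group instead of reporting one number. This sketch uses invented prediction records; the per-group breakdown is the standard first step an auditing team would take.

```python
# Invented prediction log: each record is (group, y_true, y_pred).
results = (
    [("A", 1, 1)] * 90 + [("A", 0, 1)] * 10 +   # Group A: 90/100 correct
    [("B", 1, 0)] * 40 + [("B", 0, 0)] * 60     # Group B: 60/100 correct
)

def accuracy(rows, group=None):
    subset = [r for r in rows if group is None or r[0] == group]
    return sum(y == p for _, y, p in subset) / len(subset)

print(accuracy(results))        # 0.75 overall -- looks fine in isolation
print(accuracy(results, "A"))   # 0.9
print(accuracy(results, "B"))   # 0.6 -- the accuracy "budget" mostly went to A
```

A headline 75% hides that the model spends most of its accuracy on one group; deciding whether that allocation is acceptable is a policy question, not a modeling one.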
  57. 121.

    CLASS VII - THE FEEDBACK DEVIL
    - Ignore or adjust for algorithm-suggested results
    - Look to control engineering for potential answers
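The feedback-loop problem can be sketched deterministically. In this invented scenario, two areas have the same true incident rate, but one starts with an extra recorded incident; if patrols follow the algorithm’s suggestion (allocation proportional to recorded counts), the early artifact never washes out, while ignoring the suggestion lets it decay. The scenario and numbers are hypothetical, not from the talk.

```python
# Invented deterministic feedback-loop sketch: identical true rates,
# but "north" starts with one extra recorded incident.
TRUE_RATE = 0.5   # same in both areas
PATROLS = 100     # patrols allocated per round

def run(rounds, follow_suggestion):
    counts = {"north": 2.0, "south": 1.0}   # early 2:1 recording artifact
    for _ in range(rounds):
        total = sum(counts.values())
        for area in counts:
            if follow_suggestion:
                # Algorithm's allocation: patrol where records are highest.
                patrols = PATROLS * counts[area] / total
            else:
                # Ignore the suggestion: patrol uniformly.
                patrols = PATROLS / 2
            counts[area] += patrols * TRUE_RATE   # expected new records
    return counts

fed_back = run(50, follow_suggestion=True)
print(fed_back["north"] / fed_back["south"])   # stays ~2.0: skew never corrects

uniform = run(50, follow_suggestion=False)
print(uniform["north"] / uniform["south"])     # ~1.0: skew washes out
```

Discounting records that were only found because the algorithm sent someone looking is one “adjust for algorithm-suggested results” tactic; control engineering studies exactly this kind of closed loop, which is why the slide points there.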
  62. 129.

    - AI Now Institute
    - Georgetown Law Center on Privacy and Technology
    - Knight Foundation’s AI ethics initiative
    - fast.ai