An AI with an Agenda: How Our Biases Leak Into Machine Learning (Codestock 2019)

In the glorious AI-assisted future, all decisions are objective and perfect, and there’s no such thing as cognitive bias. That’s why we created AI and machine learning, right? Because humans make mistakes, and computers are perfect. Well, there’s some bad news: humans build those AIs and machine learning models, and as a result humanity’s biases and missteps can subtly work their way into them.

All hope isn’t lost, though! In this talk you’ll learn how science and statistics have already solved some of these problems and how a robust awareness of cognitive biases can help with many of the rest. Come learn what else we can do to protect ourselves from these old mistakes, because we owe it to the people who’ll rely on our algorithms to deliver the best possible intelligence!

Arthur Doler

April 13, 2019

Transcript

  1. Arthur Doler @arthurdoler [email protected] Slides: Handout: AN AI WITH AN AGENDA: How Our Biases Leak Into Machine Learning
  2. Class I – Phantoms of False Correlation
     Class II – Specter of Biased Sample Data
     Class III – Shade of Overly-Simplistic Maximization
     Class V – The Simulation Surprise
     Class VI – Apparition of Fairness
     Class VII – The Feedback Devil
  3. KEEP IN MIND: YOU NEED TO KNOW WHO CAN BE AFFECTED IN ORDER TO UN-BIAS
  4. CLASS I – PHANTOMS OF FALSE CORRELATION
     Know what question you’re asking
     Trust conditional probability over straight correlation
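
The “trust conditional probability over straight correlation” point is easy to demonstrate. Below is a minimal, illustrative sketch (the ice-cream/drowning pairing is a classic example of confounding; all data here is synthetic, not from the talk): the raw correlation looks strong, but conditioning on the confounder makes it collapse.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic data: ice-cream sales and drowning incidents are both driven
# by temperature. Neither causes the other.
n = 10_000
temp = rng.normal(25.0, 5.0, n)                 # the hidden confounder
ice_cream = temp + rng.normal(0.0, 2.0, n)      # depends only on temperature
drownings = temp + rng.normal(0.0, 2.0, n)      # depends only on temperature
df = pd.DataFrame({"temp": temp, "ice_cream": ice_cream, "drownings": drownings})

# Straight correlation: looks like ice cream "causes" drownings (~0.86).
print(df["ice_cream"].corr(df["drownings"]))

# Condition on the confounder: within a narrow temperature band the
# relationship collapses to roughly zero.
band = df[df["temp"].between(24.0, 26.0)]
print(band["ice_cream"].corr(band["drownings"]))
```
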
  5. CLASS II – SPECTER OF BIASED SAMPLE DATA
     Recognize data is biased even at rest
     Make sure your sample set is crafted properly
     Excise problematic predictors, but beware their shadow columns
     Build a learning system that can incorporate false positives and false negatives as you find them
     Try using adversarial techniques to detect bias
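
One way to act on “beware their shadow columns” is to test whether the features you kept can reconstruct the protected attribute you excised. This is a hedged sketch, assuming a numeric feature table and a binary protected column; the column names and the logistic-regression choice are illustrative assumptions, not the talk’s prescription.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_strength(df: pd.DataFrame, protected: str) -> float:
    """Cross-validated AUC for predicting the excised protected attribute
    from the remaining (numeric) features. Near 0.5 means no strong
    proxies survive; near 1.0 means shadow columns leak it back in."""
    X = df.drop(columns=[protected])
    y = df[protected]
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                             cv=5, scoring="roc_auc")
    return scores.mean()

# Hypothetical usage: zip code famously shadows race in lending data.
# print(proxy_strength(loan_features, protected="race_is_minority"))
```
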
  6. CLASS III – SHADE OF OVERLY-SIMPLISTIC MAXIMIZATION
     Remember models tell you what was, not what should be
     Try combining dependent columns and predicting that
     Try complex algorithms that allow more flexible reinforcement
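
“Try combining dependent columns and predicting that” can be as simple as blending two outcome columns into one training target, so the model cannot win by maximizing a single naive metric. A hedged sketch; the column names and the 0.3/0.7 weights are assumptions for illustration.

```python
import pandas as pd

def combined_target(df: pd.DataFrame) -> pd.Series:
    """Blend a short-term signal (clicks) with a longer-term one
    (reported satisfaction) into a single target, so a model trained on
    it can't succeed through clickbait alone. Weights are illustrative."""
    return 0.3 * df["clicked"] + 0.7 * df["reported_satisfied"]

# Hypothetical usage with any regressor:
# model.fit(feature_matrix, combined_target(interactions))
```
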
  7. CLASS V – THE SIMULATION SURPRISE
     Don’t confuse the map with the territory
     Always reality-check solutions from simulations
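
“Always reality-check solutions from simulations” can be made mechanical: before trusting a simulator-tuned solution, compare its simulated score against replayed or measured real-world outcomes and fail loudly on a large gap. A hedged sketch; the helper names and tolerance are hypothetical.

```python
import numpy as np

def simulation_gap_ok(sim_scores: np.ndarray, real_scores: np.ndarray,
                      tolerance: float = 0.05) -> bool:
    """True when simulated performance is within `tolerance` of what the
    same solution achieves on real measurements (map vs. territory)."""
    return (np.mean(sim_scores) - np.mean(real_scores)) <= tolerance

# Hypothetical usage before shipping a simulator-tuned policy:
# if not simulation_gap_ok(evaluate_in_sim(policy), replay_on_logs(policy)):
#     raise RuntimeError("Simulation gap too large; the map is not the territory.")
```
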
  8. CLASS VI – APPARITION OF FAIRNESS
     Consider predictive accuracy as a resource to be allocated
     Seek external auditing of results, or at least a review by another team
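
Treating “predictive accuracy as a resource to be allocated” starts with measuring who currently receives it. A hedged sketch that breaks false-positive and false-negative rates out per group for a binary classifier; the column names are assumptions.

```python
import pandas as pd

def error_allocation(df: pd.DataFrame, group: str,
                     actual: str = "actual",
                     predicted: str = "predicted") -> pd.DataFrame:
    """Per-group false-positive and false-negative rates, showing how a
    binary classifier's errors are distributed across groups."""
    def rates(g: pd.DataFrame) -> pd.Series:
        negatives = max((g[actual] == 0).sum(), 1)
        positives = max((g[actual] == 1).sum(), 1)
        fpr = ((g[predicted] == 1) & (g[actual] == 0)).sum() / negatives
        fnr = ((g[predicted] == 0) & (g[actual] == 1)).sum() / positives
        return pd.Series({"false_positive_rate": fpr,
                          "false_negative_rate": fnr})
    return df.groupby(group).apply(rates)

# Hypothetical usage on a scored validation set:
# print(error_allocation(scored_df, group="age_bracket"))
```
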
  9. CLASS VII – THE FEEDBACK DEVIL
     Ignore or adjust for algorithm-suggested results
     Look to control engineering for potential answers
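
“Ignore or adjust for algorithm-suggested results” can be approximated by reweighting training rows whose outcomes the previous model helped create, in the spirit of inverse-propensity weighting. A hedged sketch; the column names and the clipping threshold are assumptions, not the talk’s method.

```python
import numpy as np
import pandas as pd

def feedback_weights(df: pd.DataFrame) -> np.ndarray:
    """Inverse-propensity-style weights: rows the previous model surfaced
    often (e.g., patrols sent where it predicted crime) are
    over-represented, so weighting each suggested row by 1/propensity
    rebalances the data and damps the feedback loop."""
    suggested = df["was_model_suggested"].to_numpy(dtype=bool)   # assumed flag
    propensity = df["suggestion_propensity"].to_numpy(float)     # assumed logged prob
    w = np.where(suggested, 1.0 / np.clip(propensity, 0.05, 1.0), 1.0)
    return w / w.mean()                                          # keep overall scale

# Hypothetical usage with any estimator that accepts sample weights:
# model.fit(X, y, sample_weight=feedback_weights(training_df))
```
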
  10. AI Now Institute
      Georgetown Law Center on Privacy and Technology
      Knight Foundation’s AI ethics initiative
      fast.ai