ODSC East - Digital Discrimination: Cognitive Bias in Machine Learning

Tools:
AI Fairness 360 Toolkit: http://aif360.mybluemix.net/
Model Asset Exchange: http://ibm.biz/model-exchange
Image Segmenter Web App: https://github.com/IBM/MAX-Image-Segmenter-Web-App
Diversity in Faces Dataset: https://www.research.ibm.com/artificial-intelligence/trusted-ai/diversity-in-faces/#acces
IBM's Call for Code Competition: https://callforcode.org/

Sources:

Podcasts/Tweets
https://leanin.org/podcast-episodes/siri-is-artificial-intelligence-biased
https://art19.com/shows/the-ezra-klein-show/episodes/663fd0b7-ee60-4e3e-b2cb-4fcb4040eef1
https://twitter.com/alexisohanian/status/1087973027055316994

Amazon
https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28
https://www.openmic.org/news/2019/1/16/halt-rekognition

Google
https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias

COMPAS
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/

Data for Black Lives
Conference Notes: https://docs.google.com/document/d/1E1mfgTp73QFRmNBunl8cIpyUmDos28rekidux0voTsg/edit?ts=5c39f92e

Gender Shades Project
http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212
https://www.youtube.com/watch?time_continue=1&v=TWWsW1w-BVo
https://www.ajlunited.org/fight

Other resources referenced in this talk:
https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html
https://www.vox.com/science-and-health/2017/4/17/15322378/how-artificial-intelligence-learns-how-to-be-racist
https://www.engadget.com/2019/01/24/pinterest-skin-tone-search-diversity/

Maureen McElaney

May 03, 2019

Transcript

  1. Digital Discrimination:
    Cognitive Bias in
    Machine Learning
    Maureen McElaney
    Developer Advocate
    Twitter: @Mo_Mack
    Brendan Dwyer
    Developer
    Email: [email protected]
    Center for Open-Source Data and AI Technologies (CODAIT)
    December 19, 2018 / © 2018 IBM Corporation

  2. HELLO!
    Maureen
    McElaney
    You can contact me at:
    @Mo_Mack
    Brendan
    Dwyer
    You can contact me at:
    [email protected]

  3. Digital Discrimination
    Cognitive Bias in Machine Learning

  4. “A cognitive bias is a systematic pattern
    of deviation from norm or rationality in
    judgment. Individuals create their own
    "subjective social reality" from their
    perception of the input.”
    - Wikipedia

  5. https://twitter.com/alexisohanian/status/1087973027055316994

  6. Examples of Cognitive
    Bias in Machine
    Learning

  7. Google’s Cloud
    Natural Language
    API
    Image Credit: #WOCinTech

  8. October 2017 - Google Natural Language API
    https://cloud.google.com/natural-language/
    Source: https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias

  9. October 2017 - Google Natural Language API
    https://cloud.google.com/natural-language/
    Source: https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias

  10. October 2017 - Google Natural Language API
    https://cloud.google.com/natural-language/
    Source: https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias

  11. “We will correct this specific case,
    and, more broadly, building more
    inclusive algorithms is crucial to
    bringing the benefits of machine
    learning to everyone.”

  12. Northpointe’s
    COMPAS
    Algorithm
    Image Credit: #WOCinTech

  13. May 2016 - Northpointe’s COMPAS Algorithm
    http://www.equivant.com/solutions/inmate-classification
    Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  14. May 2016 - Northpointe’s COMPAS Algorithm
    http://www.equivant.com/solutions/inmate-classification
    Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  15. May 2016 - Northpointe’s COMPAS Algorithm
    http://www.equivant.com/solutions/inmate-classification
    Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  16. May 2016 - Northpointe’s COMPAS Algorithm
    http://www.equivant.com/solutions/inmate-classification
    Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  17. Black Defendants’ Risk Scores
    Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  18. White Defendants’ Risk Scores
    Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  19. BLACK VS. WHITE
    DEFENDANTS
    ○ Falsely labeled Black defendants as likely
    to commit future crimes at twice the rate of
    white defendants.
    ○ Mislabeled white defendants as low risk
    more often than Black defendants.
    ○ Pegged Black defendants as 77% more likely
    to be at risk of committing a future violent
    crime.

  20.

  21. Amazon
    Rekognition
    Image Credit: #WOCinTech

  22. July 2018 - Amazon Rekognition
    https://aws.amazon.com/rekognition/
    Source: https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28

  23. April 2019 - Amazon Rekognition
    https://aws.amazon.com/rekognition/
    Source: https://www.openmic.org/news/2019/4/4/a-win-for-shareholders-amazon

  24. Joy Buolamwini,
    Algorithmic Justice League
    Gender Shades Project
    Released February 2018

  25.

  26. “If we fail to make
    ethical and inclusive
    artificial intelligence
    we risk losing gains
    made in civil rights
    and gender equity
    under the guise of
    machine neutrality.”
    - Joy Buolamwini
    @jovialjoy

  27. SOLUTIONS?
    What can we do to combat bias in AI?

  28. https://art19.com/shows/the-ezra-klein-show/episodes/663fd0b7-ee60-4e3e-b2cb-4fcb4040eef1

  29. “Coders are the
    most empowered
    laborers that have
    ever existed.”
    - Anil Dash
    @anildash

  30. EDUCATION IS
    KEY
    Image Credit: #WOCinTech

  31. https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html

  32. QUESTIONS POSED
    TO STUDENTS
    ○ Is the technology fair?
    ○ How do you make sure that the data is not
    biased?
    ○ Should machines be judging humans?

  33. https://twitter.com/Neurosarda/status/1084198368526680064

  34. FIX THE
    PIPELINE?
    Image Credit: #WOCinTech

  35. “Cognitive bias in
    machine learning is
    human bias on
    steroids.”
    - Rediet Abebe
    @red_abebe

  36. January 2019 - New Search Feature on
    https://www.pinterest.com/
    Source: https://www.engadget.com/2019/01/24/pinterest-skin-tone-search-diversity/

  37. “By combining the
    latest in machine
    learning and inclusive
    product development,
    we're able to directly
    respond to Pinner
    feedback and build a
    more useful product.”
    - Candice Morgan
    @Candice_MMorgan

  38. TOOLS TO
    COMBAT BIAS
    Image Credit: #WOCinTech

  39. TOOL #1:
    AI Fairness 360 Toolkit
    Open Source Library

  40. http://aif360.mybluemix.net/

  41.

  42. TYPES OF METRICS
    ○ Individual vs. Group Fairness, or Both
    ○ Group Fairness: Data vs. Model
    ○ Group Fairness: We’re All Equal vs. What
    You See Is What You Get
    ○ Group Fairness: Ratios vs. Differences
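
To make the "ratios vs. differences" distinction concrete, here is a minimal sketch (not from the deck) of computing one metric of each kind with AIF360's Python API. The toy DataFrame and its column names ('sex', 'hired') are invented for illustration.

```python
# Minimal sketch, assuming AIF360 and pandas are installed
# (pip install aif360 pandas). The data and column names are made up.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the binary label (1 = favorable outcome).
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'hired': [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=['hired'],
    protected_attribute_names=['sex'],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{'sex': 1}],
    unprivileged_groups=[{'sex': 0}],
)

# A "ratios" metric: favorable-outcome rate of the unprivileged group
# divided by that of the privileged group (1.0 means parity).
print('Disparate impact:', metric.disparate_impact())
# A "differences" metric: the same two rates subtracted (0.0 means parity).
print('Statistical parity difference:', metric.statistical_parity_difference())
```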

  43.

  44.

  45. Machine Learning
    Pipeline
    ○ Pre-Processing: modifying the training data.
    ○ In-Processing: modifying the learning algorithm.
    ○ Post-Processing: modifying the predictions (or outcomes).

  46.

  47. DEMO
    http://aif360.mybluemix.net/

  48. https://github.com/IBM/AIF360
    AI Fairness 360 Toolkit Public Repo

  49. http://aif360.mybluemix.net/community
    AI Fairness 360 Slack Community

  50. TOOL #2:
    Model Asset Exchange
    Open Source Pre-Trained Deep Learning
    Models

  51. Step 1: Find a model
    ...that does what you need
    ...that is free to use
    ...that is performant enough

  52. Step 2: Get the code
    Is there a good implementation available?
    ...that does what you need
    ...that is free to use
    ...that is performant enough

  53. Step 3: Verify the
    model
    ○ Does it do what you need?
    ○ Is it free to use (license)?
    ○ Is it performant enough?
    ○ Accuracy?

  54. Step 4: Train the model

  55. Step 4: Train the model

  56. Step 5: Deploy your
    model
    ○ Adjust inference code (or write from
    scratch)
    ○ Package inference code, model code, and
    pre-trained weights together
    ○ Deploy your package
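
One way this plays out in practice (a sketch, not a slide from the deck): MAX models ship as prebuilt Docker images, so the packaging step above is already done. Launching one locally with the Docker SDK for Python might look like this; 'codait/max-object-detector' is one of the published MAX images.

```python
# Sketch, assuming Docker and the Docker SDK for Python are installed
# (pip install docker). The MAX image bundles inference code, model code,
# and pre-trained weights together.
import docker

client = docker.from_env()
container = client.containers.run(
    'codait/max-object-detector',   # one published MAX model image
    ports={'5000/tcp': 5000},       # expose the model's REST API locally
    detach=True,
)
print('Model service starting in container', container.short_id)
```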

  57. Step 6: Consume
    your model
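
Concretely (another hedged sketch): a deployed MAX model serves a REST API, so consuming it is a single HTTP call. The endpoint path below follows MAX's documented /model/predict convention; 'test.jpg' is a stand-in for your own input image.

```python
# Sketch: querying the locally deployed model's /model/predict endpoint.
import requests

with open('test.jpg', 'rb') as f:  # any local test image
    response = requests.post(
        'http://localhost:5000/model/predict',
        files={'image': ('test.jpg', f, 'image/jpeg')},
    )
print(response.json())  # e.g., detected objects with confidence scores
```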

  58. Model Asset
    Exchange
    The Model Asset Exchange (MAX) is a
    one-stop shop for developers/data scientists
    to find and use free and open source deep
    learning models.
    ibm.biz/model-exchange

  59. Model Asset
    Exchange
    ○ Wide variety of domains (text, audio,
    images, etc.)
    ○ Multiple deep learning frameworks
    ○ Vetted and tested code/IP
    ○ Build and deploy a model web service in
    seconds

  60. ibm.biz/model-exchange

  61. http://ibm.biz/model-exchange
    Model Asset eXchange

  62. ibm.biz/max-slack
    Model Asset eXchange Slack
    Community

  63. TOOL #3:
    Diversity in Faces
    Dataset
    Diverse dataset available to the global
    research community.

  64. Diversity in Faces
    Dataset
    Studying diversity in faces is complex. The dataset
    provides a jumping-off point for the global research
    community to further our collective knowledge.

  65. http://ibm.biz/diversity-dataset
    Diversity in Faces Dataset

  66. UPDATE TO THE
    GENDER SHADES
    PROJECT
    Image Credit: #WOCinTech

  67. http://www.aies-conference.com/wp-content/uploads/2019/01/AIES-19_paper_223.pdf

  68. https://www.ajlunited.org/fight

  69. No matter what, it is our
    responsibility to build
    systems that are fair.
    Photo by rawpixel on Unsplash

  70. THANKS!
    Slides and Links:
    Any questions?
    You can contact Maureen on Twitter @Mo_Mack
    Or Brendan via email at [email protected]
