
Cox Automotive - Digital Discrimination: Cognitive Bias in Machine Learning

Tools/Communities:
AI Fairness 360 Toolkit: http://aif360.mybluemix.net/
Model Asset Exchange: http://ibm.biz/model-exchange
IBM's Public Call for Code Competition: https://callforcode.org/
Maureen's team: IBM's Center for Open Source Data and AI Technologies: http://ibm.biz/codait-projects

Talk Sources:

Podcasts/Tweets
https://leanin.org/podcast-episodes/siri-is-artificial-intelligence-biased
https://art19.com/shows/the-ezra-klein-show/episodes/663fd0b7-ee60-4e3e-b2cb-4fcb4040eef1
https://twitter.com/alexisohanian/status/1087973027055316994

Amazon
https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28
https://www.openmic.org/news/2019/1/16/halt-rekognition

Google
https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias

COMPAS
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/

Data for Black Lives
http://d4bl.org/about.html
2019 Conference Notes: https://docs.google.com/document/d/1E1mfgTp73QFRmNBunl8cIpyUmDos28rekidux0voTsg/edit?ts=5c39f92e

Gender Shades Project
http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212
https://www.youtube.com/watch?time_continue=1&v=TWWsW1w-BVo
https://www.ajlunited.org/fight

Other resources referenced in this talk:
https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html
https://www.vox.com/science-and-health/2017/4/17/15322378/how-artificial-intelligence-learns-how-to-be-racist
https://www.engadget.com/2019/01/24/pinterest-skin-tone-search-diversity/

Maureen McElaney

May 30, 2019

Transcript

  1. Cox Automotive
     Digital Discrimination: Cognitive Bias in Machine Learning
     May 30th, 2019

  2. Digital Discrimination: Cognitive Bias in Machine Learning

  3. “A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own "subjective social reality" from their perception of the input.”
     - Wikipedia

  4.

  5. Examples of bias in machine learning.

  6. Google’s Cloud Natural Language API
     Image Credit: #WOCinTech

  7. October 2017 - Google Natural Language API
     https://cloud.google.com/natural-language/
     Source: https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias

  8.

  9.

  10. “We will correct this specific case, and, more broadly, building more inclusive algorithms is crucial to bringing the benefits of machine learning to everyone.”

  11. Northpointe’s COMPAS Algorithm
      Image Credit: #WOCinTech

  12. May 2016 - Northpointe’s COMPAS Algorithm
      http://www.equivant.com/solutions/inmate-classification
      Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  13.

  14.

  15.

  16. Black Defendants’ Risk Scores
      Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  17. White Defendants’ Risk Scores
      Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  18. BLACK VS. WHITE DEFENDANTS
      ○ Falsely labeled black defendants as likely to commit future crimes at twice the rate of white defendants.
      ○ Mislabeled white defendants as low risk more often than black defendants.
      ○ Pegged black defendants as 77% more likely to be at risk of committing a future violent crime.

  19.

  20. Amazon Rekognition
      Image Credit: #WOCinTech

  21. July 2018 - Amazon Rekognition
      https://aws.amazon.com/rekognition/
      Source: https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28

  22. April 2019 - Amazon Rekognition
      https://aws.amazon.com/rekognition/
      Source: https://www.openmic.org/news/2019/4/4/a-win-for-shareholders-amazon

  23. Joy Buolamwini, Algorithmic Justice League
      Gender Shades Project
      Released February 2018

  24.

  25. “If we fail to make ethical and inclusive artificial intelligence we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.”
      - Joy Buolamwini
      @jovialjoy

  26. Solutions? What can we do to combat bias in AI?

  27. https://www.vox.com/ezra-klein-show-podcast

  28. “Coders are the most empowered laborers that have ever existed.”
      - Anil Dash
      @anildash

  29. EDUCATION IS KEY
      Image Credit: #WOCinTech

  30. https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html

  31. Questions posed to students in these courses...
      Is the technology fair?
      How do you make sure that the data is not biased?
      Should machines be judging humans?

  32. https://twitter.com/Neurosarda/status/1084198368526680064

  33. FIX THE PIPELINE?
      Image Credit: #WOCinTech

  34. “Cognitive bias in machine learning is human bias on steroids.”
      - Rediet Abebe
      @red_abebe

  35. January 2019 - New Search Feature on...
      https://www.pinterest.com
      Source: https://www.engadget.com/2019/01/24/pinterest-skin-tone-search-diversity/

  36. “By combining the latest in machine learning and inclusive product development, we're able to directly respond to Pinner feedback and build a more useful product.”
      - Candice Morgan
      @Candice_MMorgan

  37. TOOLS TO COMBAT BIAS
      Image Credit: #WOCinTech

  38. Tool #1: AI Fairness 360 Toolkit
      Open Source Library

  39. http://aif360.mybluemix.net/

  40.

  41. TYPES OF METRICS
      ○ Individual vs. Group Fairness, or Both
      ○ Group Fairness: Data vs. Model
      ○ Group Fairness: We’re All Equal vs. What You See Is What You Get
      ○ Group Fairness: Ratios vs. Differences
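      To make these metric families concrete, here is a minimal sketch using the open-source AIF360 Python package. The disparate_impact (a ratio) and statistical_parity_difference (a difference) calls are real AIF360 metrics; the privileged/unprivileged group definitions are illustrative assumptions for the bundled COMPAS dataset.

      # Minimal sketch: group-fairness metrics with AIF360 (pip install aif360).
      # CompasDataset expects ProPublica's COMPAS CSV in the package data folder;
      # AIF360 prints download instructions if the file is missing.
      from aif360.datasets import CompasDataset
      from aif360.metrics import BinaryLabelDatasetMetric

      dataset = CompasDataset()

      # Assumed group encodings: race == 1 is the privileged group here.
      privileged = [{'race': 1}]
      unprivileged = [{'race': 0}]

      metric = BinaryLabelDatasetMetric(dataset,
                                        unprivileged_groups=unprivileged,
                                        privileged_groups=privileged)

      # Ratio metric: P(favorable | unprivileged) / P(favorable | privileged)
      print('Disparate impact:', metric.disparate_impact())
      # Difference metric: P(favorable | unprivileged) - P(favorable | privileged)
      print('Statistical parity difference:', metric.statistical_parity_difference())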

  42.

  43.

  44. Machine Learning Pipeline
      Pre-Processing: modifying the training data.
      In-Processing: modifying the learning algorithm.
      Post-Processing: modifying the predictions (or outcomes).
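      As a concrete example of the pre-processing stage, here is a minimal sketch using AIF360's Reweighing algorithm, which reweights training examples so favorable outcomes are balanced across groups before any model is trained (the dataset and group definitions carry over from the sketch above):

      # Minimal sketch: pre-processing bias mitigation with AIF360's Reweighing.
      from aif360.algorithms.preprocessing import Reweighing

      rw = Reweighing(unprivileged_groups=unprivileged,
                      privileged_groups=privileged)

      # Learns per-(group, label) weights and attaches them to a copy of the
      # dataset; the features and labels themselves are left unchanged.
      dataset_transf = rw.fit_transform(dataset)
      print(dataset_transf.instance_weights[:10])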

  45.

  46. http://aif360.mybluemix.net/
      Demos

  47. AI Fairness 360 Toolkit Public Repo
      https://github.com/IBM/AIF360

  48. AI Fairness 360 Toolkit Slack
      http://aif360.mybluemix.net/community

  49. Tool #2: Model Asset Exchange
      Open Source Pre-Trained Deep Learning Models

  50. Step 1: Find a model
      ...that does what you need
      ...that is free to use
      ...that is performant enough

  51. Step 2: Get the code
      Is there a good implementation available?
      ...that does what you need
      ...that is free to use
      ...that is performant enough

  52. Step 3: Verify the model
      ○ Does it do what you need?
      ○ Is it free to use (license)?
      ○ Is it performant enough?
      ○ Accuracy?

  53. Step 4: Train the model

  54.

  55. Step 5: Deploy your model
      ○ Adjust inference code (or write from scratch)
      ○ Package inference code, model code, and pre-trained weights together (see the sketch below)
      ○ Deploy your package
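      A minimal sketch of the packaging step, assuming a Flask web service (MAX models ship their own serving layer, so this is illustrative only; run_inference is a stub standing in for real inference code plus pre-trained weights):

      # Minimal sketch: wrap inference code behind a small REST endpoint.
      from flask import Flask, jsonify, request

      app = Flask(__name__)

      def run_inference(image_bytes):
          # Stub: a real service would load pre-trained weights once,
          # preprocess the input, run the model, and post-process the output.
          return [{'label': 'example', 'probability': 0.99}]

      @app.route('/model/predict', methods=['POST'])
      def predict():
          image_bytes = request.files['image'].read()
          return jsonify(status='ok', predictions=run_inference(image_bytes))

      if __name__ == '__main__':
          app.run(host='0.0.0.0', port=5000)  # the port MAX services also use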

  56. Step 6: Consume your model
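      And a minimal sketch of consuming a deployed model over HTTP, assuming a MAX model such as the Object Detector is running locally from its published Docker image (the /model/predict endpoint and the image form field follow the MAX REST convention; photo.jpg is a placeholder):

      # Minimal sketch: call a locally running MAX-style model service.
      # Start one first, e.g.:
      #   docker run -it -p 5000:5000 codait/max-object-detector
      import requests

      with open('photo.jpg', 'rb') as f:  # placeholder input image
          resp = requests.post('http://localhost:5000/model/predict',
                               files={'image': ('photo.jpg', f, 'image/jpeg')})

      resp.raise_for_status()
      for pred in resp.json().get('predictions', []):
          print(pred['label'], pred['probability'])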

  57. Model Asset Exchange
      The Model Asset Exchange (MAX) is a one-stop shop for developers and data scientists to find and use free and open source deep learning models.
      ibm.biz/model-exchange

  58. Model Asset Exchange
      ○ Wide variety of domains (text, audio, images, etc.)
      ○ Multiple deep learning frameworks
      ○ Vetted and tested code/IP
      ○ Build and deploy a model web service in seconds

  59. ibm.biz/model-exchange

  60. Model Asset eXchange (MAX)
      http://ibm.biz/model-exchange
      http://ibm.biz/max-slack

  61. UPDATE TO THE GENDER SHADES PROJECT
      Image Credit: #WOCinTech

  62. http://www.aies-conference.com/wp-content/uploads/2019/01/AIES-19_paper_223.pdf

  63. https://www.ajlunited.org/fight

  64. No matter what, it is our responsibility to build systems that are fair.
      Photo by rawpixel on Unsplash

  65. https://callforcode.org/

  66. THANKS!
      Any questions?
      @Mo_Mack