UVM SWiCS 2019: Digital Discrimination: Cognitive Bias in Machine Learning

Tools/Communities:
Center for Open Source Data and AI Technologies:
https://ibm.biz/codait-trusted-ai
AI Fairness 360 Toolkit: http://aif360.mybluemix.net/
Watson OpenScale: https://www.ibm.com/cloud/watson-openscale/
Model Asset Exchange: http://ibm.biz/model-exchange
Data Asset Exchange: http://ibm.biz/data-exchange
LFAI Trusted AI Committee: https://wiki.lfai.foundation/display/DL/Trusted+AI+Committee
EU Guidelines for Trustworthy AI: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

Talk Sources:

Cognitive Bias Definition:
https://adweb.clarkson.edu/~awilke/Research_files/EoHB_Wilke_12.pdf

House Oversight Committee on AI
https://oversight.house.gov/legislation/hearings/facial-recognition-technology-part-1-its-impact-on-our-civil-rights-and
https://oversight.house.gov/legislation/hearings/facial-recognition-technology-part-ii-ensuring-transparency-in-government-use

Podcasts/Tweets referenced/used:
https://leanin.org/podcast-episodes/siri-is-artificial-intelligence-biased
https://art19.com/shows/the-ezra-klein-show/episodes/663fd0b7-ee60-4e3e-b2cb-4fcb4040eef1
https://twitter.com/alexisohanian/status/1087973027055316994
https://twitter.com/MatthewBParksSr/status/1133435312921874432

Google
https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias

COMPAS
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/

Data for Black Lives
http://d4bl.org/about.html
2019 Conference Notes: https://docs.google.com/document/d/1E1mfgTp73QFRmNBunl8cIpyUmDos28rekidux0voTsg/edit?ts=5c39f92e

Gender Shades Project
http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212
MIT Media Lab Overview for the project: https://www.youtube.com/watch?time_continue=1&v=TWWsW1w-BVo
FAT* 2018 Talk about outcomes: https://www.youtube.com/watch?v=Af2VmR-iGkY
https://www.ajlunited.org/fight

Other resources referenced in this talk:
https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html
https://www.vox.com/science-and-health/2017/4/17/15322378/how-artificial-intelligence-learns-how-to-be-racist
https://www.engadget.com/2019/01/24/pinterest-skin-tone-search-diversity/

Maureen McElaney

November 07, 2019

Transcript

  1. UVM Society of Women in
    Computer Science
    Digital Discrimination:
    Cognitive Bias in Machine
    Learning
    November 7, 2019

  2. 2
    Hi! My name is
    Maureen McElaney

  3. Digital Discrimination:
    Cognitive Bias in Machine
    Learning
    Tweet at me! @Mo_Mack
    3

  4. A cognitive bias is a systematic pattern of
    deviation from norm or rationality in
    judgment.
    People make decisions given their limited
    resources.
    Wilke A. and Mata R. (2012) “Cognitive Bias”, Clarkson University
    4
    @Mo_Mack

  5. 5
    @Mo_Mack

  6. Examples of bias in machine
    learning.
    6
    @Mo_Mack

  7. Google’s Cloud
    Natural
    Language API
    7 Image Credit: #WOCinTech
    @Mo_Mack

  8. October 2017 - Google Natural Language API
    https://cloud.google.com/natural-language/
    8
    Source: https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias

  9. October 2017 - Google Natural Language API
    https://cloud.google.com/natural-language/
    9
    Source: https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias

  10. October 2017 - Google Natural Language API
    https://cloud.google.com/natural-language/
    10
    Source: https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias

  11. “We will correct this
    specific case, and, more
    broadly, building more
    inclusive algorithms is
    crucial to bringing the
    benefits of machine
    learning to everyone.”
    11

  12. Northpointe’s
    COMPAS
    Algorithm
    12 Image Credit: #WOCinTech
    @Mo_Mack

  13. Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
    May 2016 - Northpointe’s COMPAS Algorithm
    http://www.equivant.com/solutions/inmate-classification
    13

  14. May 2016 - Northpointe’s COMPAS Algorithm
    http://www.equivant.com/solutions/inmate-classification
    Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  15. May 2016 - Northpointe’s COMPAS Algorithm
    http://www.equivant.com/solutions/inmate-classification
    Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  16. May 2016 - Northpointe’s COMPAS Algorithm
    http://www.equivant.com/solutions/inmate-classification
    Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  17. Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
    17
    Black Defendants’ Risk Scores

  18. Source: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
    18
    White Defendants’ Risk Scores

  19. BLACK VS. WHITE
    DEFENDANTS
    ○ Falsely labeled Black defendants as likely
    to commit future crimes at twice the rate
    of white defendants.
    ○ Mislabeled white defendants as low risk
    more often than Black defendants.
    ○ Flagged Black defendants as 77% more
    likely to commit a future violent crime.
    19
    @Mo_Mack
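The disparity described above can be illustrated with a toy false-positive-rate comparison. This is a minimal sketch with invented numbers, not ProPublica's actual dataset:

```python
# Toy illustration of the ProPublica-style check: compare the false positive
# rate (defendants who did NOT reoffend but were labeled high risk) per group.
# The records below are made up for illustration only.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, reoffended) boolean pairs."""
    false_pos = sum(1 for pred, actual in records if pred and not actual)
    negatives = sum(1 for _, actual in records if not actual)
    return false_pos / negatives if negatives else 0.0

group_a = [(True, False), (True, False), (False, False), (True, True)]
group_b = [(True, False), (False, False), (False, False), (True, True)]

# A ratio of 2.0 means group_a is falsely flagged at twice the rate of group_b.
ratio = false_positive_rate(group_a) / false_positive_rate(group_b)
print(ratio)  # → 2.0
```

Comparing error rates per group, rather than overall accuracy, is what surfaced the disparity in the COMPAS analysis.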

  20. 20

  21. Gender
    Shades Project
    February 2018
    21 Image Credit: #WOCinTech
    @Mo_Mack

  22. http://gendershades.org/

  23. “If we fail to make
    ethical and inclusive
    artificial intelligence
    we risk losing gains
    made in civil rights
    and gender equity
    under the guise of
    machine neutrality.”
    23
    - Joy Buolamwini
    @jovialjoy

  24. 24
    http://www.aies-conference.com/wp-content/uploads/2019/01/AIES-19_paper_223.pdf

  25. 25
    https://www.youtube.com/watch?v=Af2VmR-iGkY
    @Mo_Mack

  26. Solutions?
    What can we do
    to combat bias
    in AI?
    26
    @Mo_Mack

  27. 27
    https://www.vox.com/ezra-klein-show-podcast

  28. “Coders are the
    most empowered
    laborers that have
    ever existed.”
    28
    - Anil Dash
    @anildash

  29. EDUCATION IS
    KEY
    29 Image Credit: #WOCinTech
    @Mo_Mack

  30. https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html

  31. Questions posed to students
    in these courses...
    ○ Is the technology fair?
    ○ How do you make sure that
    the data is not biased?
    ○ Should machines be judging
    humans?
    31
    @Mo_Mack

  32. 32
    https://twitter.com/Neurosarda/status/1084198368526680064

  33. FIX THE
    PIPELINE?
    33 Image Credit: #WOCinTech
    @Mo_Mack

  34. “Cognitive bias in
    machine learning is
    human bias on
    steroids.”
    34
    - Rediet Abebe
    @red_abebe
    @Mo_Mack

  35. 35
    https://twitter.com/MatthewBParksSr/status/1133435312921874432

  36. January 2019 - New Search Feature on...
    https://www.pinterest.com
    Source:
    https://www.engadget.com/2019/01/24/pinterest-skin-tone-search-diversity/

  37. “By combining the
    latest in machine
    learning and inclusive
    product development,
    we're able to directly
    respond to Pinner
    feedback and build a
    more useful product.”
    37
    - Candice Morgan
    @Candice_MMorgan
    @Mo_Mack

  38. National and
    Industry
    Standards
    38 Image Credit: #WOCinTech
    @Mo_Mack

  39. EU Ethics Guidelines for Trustworthy
    Artificial Intelligence
    According to the Guidelines, trustworthy AI should be:
    (1) lawful - respecting all applicable laws and
    regulations
    (2) ethical - respecting ethical principles and values
    (3) robust - both from a technical perspective and
    taking into account its social environment
    Source:
    https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

  40. #1 -
    Human
    agency and
    oversight.
    #2 -
    Technical
    robustness
    and safety.
    #3 -
    Privacy and
    data
    governance.
    #4 -
    Transparency
    .
    40
    @Mo_Mack
    #5 -
    Diversity,
    non-
    discrimination
    and fairness.
    #6 -
    Societal and
    environmental
    well-being.
    #7 -
    Accountability

  41. 41
    https://wiki.lfai.foundation/display/DL/Trusted+AI+Committee

  42. TOOLS TO
    COMBAT BIAS
    42 Image Credit: #WOCinTech
    @Mo_Mack

  43. Tool #1:
    AI Fairness
    360 Toolkit
    Open Source Library
    43
    @Mo_Mack

  44. http://aif360.mybluemix.net/
    @Mo_Mack

  45. http://aif360.mybluemix.net/
    @Mo_Mack

  46. Machine Learning
    Pipeline
    Pre-Processing: modifying the
    training data.
    In-Processing: modifying the
    learning algorithm.
    Post-Processing: modifying the
    predictions (or outcomes).
    46
    @Mo_Mack
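The pre-processing idea can be sketched in plain Python. This is a minimal reimplementation of the reweighing technique (one of AIF360's pre-processing algorithms), not the toolkit's actual API, and the sample data is invented:

```python
from collections import Counter

def reweigh(samples):
    """samples: list of (group, label) pairs.
    Returns a weight per (group, label) combination chosen so that, under
    the weights, group membership and label become statistically
    independent -- the core idea of reweighing as a pre-processing step."""
    n = len(samples)
    group_counts = Counter(g for g, _ in samples)
    label_counts = Counter(y for _, y in samples)
    pair_counts = Counter(samples)
    # w(g, y) = P(g) * P(y) / P(g, y)
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (c / n)
        for (g, y), c in pair_counts.items()
    }

# Invented example: group "a" receives the favorable label (1) more often.
data = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
weights = reweigh(data)
# Under-represented pairs like ("b", 1) get weights above 1;
# over-represented pairs like ("a", 1) get weights below 1.
```

Training on the reweighted data equalizes the favorable-label rate across groups without touching the learning algorithm itself, which is what distinguishes pre-processing from in- and post-processing.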

  47. http://aif360.mybluemix.net/
    Demos
    @Mo_Mack

  48. https://github.com/IBM/AIF360
    AI Fairness 360 Toolkit Public Repo
    48
    @Mo_Mack

  49. Tool #2:
    Model Asset
    eXchange
    Open Source Pre-Trained
    Deep Learning Models
    49
    @Mo_Mack

  50. Step 1: Find a model
    ...that does what you need
    ...that is free to use
    ...that is performant enough
    50
    @Mo_Mack

  51. Step 2: Get the code
    Is there a good implementation available?
    ...that does what you need
    ...that is free to use
    ...that is performant enough
    51
    @Mo_Mack

  52. Step 3: Verify the
    model
    ○ Does it do what you need?
    ○ Is it free to use (license)?
    ○ Is it performant enough?
    ○ Accuracy?
    52
    @Mo_Mack

  53. Step 4: Train the model
    53
    @Mo_Mack

  54. Step 4: Train the model
    54
    @Mo_Mack

  55. Step 5: Deploy your
    model
    ○ Adjust inference code (or write from
    scratch)
    ○ Package inference code, model code, and
    pre-trained weights together
    ○ Deploy your package
    55
    @Mo_Mack

  56. Step 6: Consume your
    model
    56
    @Mo_Mack
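As a sketch of the "consume" step: deployed MAX models expose a REST prediction endpoint. The snippet below assumes a text-input model running locally on port 5000 with a JSON /model/predict route; the endpoint, port, and payload shape are assumptions that vary by model:

```python
import json
from urllib import request

# Assumed default endpoint for a locally running MAX model container;
# the actual port and payload schema depend on the specific model.
MAX_ENDPOINT = "http://localhost:5000/model/predict"

def build_predict_request(texts, endpoint=MAX_ENDPOINT):
    """Build a JSON POST request for a hypothetical text-input MAX model."""
    payload = json.dumps({"text": texts}).encode("utf-8")
    return request.Request(
        endpoint, data=payload, headers={"Content-Type": "application/json"}
    )

# Usage (requires the model container to be running):
# with request.urlopen(build_predict_request(["great talk!"])) as resp:
#     print(json.load(resp))
```

Because the model runs behind a plain HTTP interface, any language that can POST JSON can consume it; no deep learning framework is needed on the client side.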

  57. Model Asset
    Exchange
    The Model Asset Exchange (MAX) is a
    one-stop shop for developers and data
    scientists to find and use free, open
    source deep learning models.
    ibm.biz/model-exchange
    57
    @Mo_Mack

  58. ibm.biz/model-exchange
    58
    @Mo_Mack

  59. http://ibm.biz/model-exchange
    Model Asset eXchange (MAX)
    59
    @Mo_Mack

  60. Tool #3:
    Data Asset
    eXchange
    Open Source Data Sets
    60
    @Mo_Mack

  61. http://ibm.biz/data-exchange
    Data Asset eXchange (DAX)
    61
    @Mo_Mack

  62. http://ibm.biz/codait-trusted-ai
    IBM CODAIT Trusted AI Work
    62
    @Mo_Mack

  63. 63
    https://www.ajlunited.org/fight

  64. 64
    https://www.patreon.com/poetofcode

  65. 65
    Photo by rawpixel on Unsplash
    No matter what, it is our
    responsibility to build
    systems that are fair.

  66. Thank you!
    Slides and sources from this talk...
    Any questions for me? @Mo_Mack
    66
