UVM SWiCS 2019: Digital Discrimination: Cognitive Bias in Machine Learning

Tools/Communities:
Center for Open Source Data and AI Technologies: https://ibm.biz/codait-trusted-ai
AI Fairness 360 Toolkit: http://aif360.mybluemix.net/
Watson OpenScale: https://www.ibm.com/cloud/watson-openscale/
Model Asset Exchange: http://ibm.biz/model-exchange
Data Asset Exchange: http://ibm.biz/data-exchange
LFAI Trusted AI Committee: https://wiki.lfai.foundation/display/DL/Trusted+AI+Committee
EU Guidelines for Trustworthy AI: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

Talk Sources:

Cognitive Bias Definition:
https://adweb.clarkson.edu/~awilke/Research_files/EoHB_Wilke_12.pdf

House Oversight Committee on AI
https://oversight.house.gov/legislation/hearings/facial-recognition-technology-part-1-its-impact-on-our-civil-rights-and
https://oversight.house.gov/legislation/hearings/facial-recognition-technology-part-ii-ensuring-transparency-in-government-use

Podcasts/Tweets referenced/used:
https://leanin.org/podcast-episodes/siri-is-artificial-intelligence-biased
https://art19.com/shows/the-ezra-klein-show/episodes/663fd0b7-ee60-4e3e-b2cb-4fcb4040eef1
https://twitter.com/alexisohanian/status/1087973027055316994
https://twitter.com/MatthewBParksSr/status/1133435312921874432

Google
https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias

COMPAS
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/

Data for Black Lives
http://d4bl.org/about.html
2019 Conference Notes: https://docs.google.com/document/d/1E1mfgTp73QFRmNBunl8cIpyUmDos28rekidux0voTsg/edit?ts=5c39f92e

Gender Shades Project
http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212
MIT Media Lab overview of the project: https://www.youtube.com/watch?time_continue=1&v=TWWsW1w-BVo
FAT* 2018 Talk about outcomes: https://www.youtube.com/watch?v=Af2VmR-iGkY
https://www.ajlunited.org/fight

Other resources referenced in this talk:
https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html
https://www.vox.com/science-and-health/2017/4/17/15322378/how-artificial-intelligence-learns-how-to-be-racist
https://www.engadget.com/2019/01/24/pinterest-skin-tone-search-diversity/

Maureen McElaney

November 07, 2019

Transcript:

  1. A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. People make decisions given their limited resources. (Wilke, A. and Mata, R. (2012), “Cognitive Bias”, Clarkson University)
  2. October 2017 - Google Natural Language API (https://cloud.google.com/natural-language/). Source: https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias
  3. October 2017 - Google Natural Language API (https://cloud.google.com/natural-language/). Source: https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias
  4. October 2017 - Google Natural Language API (https://cloud.google.com/natural-language/). Source: https://motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias
  5. “We will correct this specific case, and, more broadly, building more inclusive algorithms is crucial to bringing the benefits of machine learning to everyone.”
  6. Black vs. white defendants (COMPAS): ◦ Falsely labeled black defendants as likely future criminals at twice the rate of white defendants. ◦ Mislabeled white defendants as low risk more often than black defendants. ◦ Pegged black defendants as 77% more likely to be at risk of committing a future violent crime.
  7. (Image-only slide; no transcribed text.)
  8. “If we fail to make ethical and inclusive artificial intelligence we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.” - Joy Buolamwini (@jovialjoy)
  9. Questions posed to students in these courses: Is the technology fair? How do you make sure that the data is not biased? Should machines be judging humans?
  10. “By combining the latest in machine learning and inclusive product development, we're able to directly respond to Pinner feedback and build a more useful product.” - Candice Morgan (@Candice_MMorgan)
  11. EU Ethics Guidelines for Trustworthy Artificial Intelligence. According to the Guidelines, trustworthy AI should be: (1) lawful - respecting all applicable laws and regulations; (2) ethical - respecting ethical principles and values; (3) robust - both from a technical perspective while taking into account its social environment. Source: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  12. #1 - Human agency and oversight. #2 - Technical robustness and safety. #3 - Privacy and data governance. #4 - Transparency. #5 - Diversity, non-discrimination and fairness. #6 - Societal and environmental well-being. #7 - Accountability.
  13. Machine Learning Pipeline: Pre-processing - modifying the training data. In-processing - modifying the learning algorithm. Post-processing - modifying the predictions (or outcomes).
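    These three stages match how the AI Fairness 360 toolkit (linked above) groups its bias-mitigation algorithms. Below is a minimal, illustrative Python sketch of the pre-processing stage; the toy DataFrame, column names, and privileged/unprivileged group definitions are invented for this example and are not from the talk.

      import pandas as pd
      from aif360.datasets import BinaryLabelDataset
      from aif360.metrics import BinaryLabelDatasetMetric
      from aif360.algorithms.preprocessing import Reweighing

      # Toy data (hypothetical): 'sex' is the protected attribute (1 = privileged),
      # 'label' is the favorable outcome being predicted.
      df = pd.DataFrame({
          'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
          'score': [0.9, 0.8, 0.7, 0.4, 0.6, 0.5, 0.3, 0.2],
          'label': [1, 1, 1, 0, 1, 0, 0, 0],
      })
      dataset = BinaryLabelDataset(df=df, label_names=['label'],
                                   protected_attribute_names=['sex'])

      priv, unpriv = [{'sex': 1}], [{'sex': 0}]

      # Disparate impact = P(favorable | unprivileged) / P(favorable | privileged);
      # values well below 1.0 indicate the data favors the privileged group.
      before = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                        privileged_groups=priv)
      print('Disparate impact before:', before.disparate_impact())

      # Pre-processing mitigation: Reweighing adjusts instance weights so that
      # the protected attribute and the label are independent in the training data.
      rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
      dataset_rw = rw.fit_transform(dataset)

      after = BinaryLabelDatasetMetric(dataset_rw, unprivileged_groups=unpriv,
                                       privileged_groups=priv)
      print('Disparate impact after reweighing:', after.disparate_impact())

    In-processing and post-processing mitigations (for example, adversarial debiasing and reject-option classification, also in AIF360) hook into model training and prediction instead of the data.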
  14. Step 1: Find a model ...that does what you need ...that is free to use ...that is performant enough.
  15. Step 2: Get the code. Is there a good implementation available? ...that does what you need ...that is free to use ...that is performant enough.
  16. Step 3: Verify the model ◦ Does it do what you need? ◦ Is it free to use (license)? ◦ Is it performant enough? ◦ Accuracy?
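    A hedged sketch of the accuracy part of this verification step; the file names, the held-out dataset, and the use of scikit-learn/joblib are placeholders assumed for illustration, not anything prescribed by the talk.

      import joblib
      from sklearn.metrics import accuracy_score

      # Hypothetical artifacts: a downloaded candidate model and a labeled
      # holdout set you trust (neither comes from the talk).
      model = joblib.load('candidate_model.joblib')
      X_holdout, y_holdout = joblib.load('holdout_data.joblib')

      # Check that the model is accurate enough on your own data before adopting it.
      predictions = model.predict(X_holdout)
      print('Holdout accuracy:', accuracy_score(y_holdout, predictions))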
  17. Step 5: Deploy your model ◦ Adjust inference code (or write from scratch). ◦ Package inference code, model code, and pre-trained weights together. ◦ Deploy your package.
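    One way (of many) to package inference code with pre-trained weights is a small HTTP service; this Flask sketch assumes a scikit-learn-style model serialized as 'model.joblib', both of which are hypothetical stand-ins rather than the talk's deployment method.

      from flask import Flask, request, jsonify
      import joblib

      app = Flask(__name__)
      # Hypothetical pre-trained weights packaged alongside this inference code.
      model = joblib.load('model.joblib')

      @app.route('/predict', methods=['POST'])
      def predict():
          # Expect JSON like {"features": [[5.1, 3.5, 1.4, 0.2]]}.
          features = request.get_json()['features']
          return jsonify({'predictions': model.predict(features).tolist()})

      if __name__ == '__main__':
          app.run(host='0.0.0.0', port=5000)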
  18. Model Asset Exchange: The Model Asset Exchange (MAX) is a one-stop shop for developers/data scientists to find and use free and open source deep learning models. ibm.biz/model-exchange
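    MAX models are published as Docker images that expose a REST prediction endpoint; once a model container (for example, an image model) is running locally on port 5000, it can be queried roughly as below. The endpoint path, form field name, and file name are assumptions to check against the specific model's README.

      import requests

      # Hypothetical local image file to send to a running MAX image model.
      with open('test_image.jpg', 'rb') as f:
          response = requests.post(
              'http://localhost:5000/model/predict',  # typical MAX prediction route
              files={'image': ('test_image.jpg', f, 'image/jpeg')},
          )
      response.raise_for_status()
      print(response.json())  # model-specific JSON, e.g. detected objects or labels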
  19. No matter what, it is our responsibility to build systems that are fair. (Photo by rawpixel on Unsplash)