Red Hat DevNation - Digital Discrimination: Cognitive Bias in Machine Learning

Enter IBM's Call for Code Competition: https://ibm.biz/BdzPJn
AI Fairness 360 Toolkit: http://aif360.mybluemix.net/
Model Asset Exchange: http://ibm.biz/model-exchange
IBM's Data Science and AI Elite Team: http://community.ibm.com/DSE
Watson Studio: https://www.ibm.com/cloud/watson-studio
Watson Machine Learning: https://www.ibm.com/cloud/machine-learning
Watson OpenScale: https://www.ibm.com/cloud/watson-openscale/

Talk Sources:

Cognitive Bias Definition:


Northpointe/Equivant's COMPAS - Recidivism Score Algorithm

Data for Black Lives
2019 Conference Notes: https://docs.google.com/document/d/1E1mfgTp73QFRmNBunl8cIpyUmDos28rekidux0voTsg/edit?ts=5c39f92e

Gender Shades Project
MIT Media Lab Overview for the project: https://www.youtube.com/watch?time_continue=1&v=TWWsW1w-BVo
FAT* 2018 Talk about outcomes: https://www.youtube.com/watch?v=Af2VmR-iGkY

Other articles referenced in this talk:

Maureen McElaney

June 25, 2019


  1. Digital Discrimination: Cognitive Bias in Machine Learning. My name is: Maureen McElaney. Tweet at me! @Mo_Mack
  2. A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. People make decisions given their limited resources. Wilke A. and Mata R. (2012) "Cognitive Bias", Clarkson University.
  3. BLACK VS. WHITE DEFENDANTS ◦ Falsely labeled black defendants as likely to commit future crimes at twice the rate of white defendants. ◦ Mislabeled white defendants as low risk more often than black defendants. ◦ Pegged black defendants as 77% more likely to be at risk of committing future violent crime.
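The disparity on this slide is a gap in error rates between groups. A minimal sketch of how such a gap can be measured, using tiny synthetic records (the data, group names, and field names below are invented for illustration and are not drawn from the actual COMPAS dataset):

```python
# Illustrative audit sketch: compare false positive rates across groups.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were labeled high risk."""
    negatives = [r for r in records if not r["reoffended"]]
    if not negatives:
        return 0.0
    return sum(r["high_risk"] for r in negatives) / len(negatives)

# Each record: predicted label, actual outcome, group membership.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": True},
]

by_group = {
    g: false_positive_rate([r for r in records if r["group"] == g])
    for g in ("A", "B")
}
# In this toy data, group A's false positive rate (2/3) is twice group B's (1/3),
# even though both groups have the same actual reoffense rate.
```

The key point of the audit framing: overall accuracy can look fine while the *errors* concentrate in one group.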
  4. "If we fail to make ethical and inclusive artificial intelligence we risk losing gains made in civil rights and gender equity under the guise of machine neutrality." - Joy Buolamwini @jovialjoy
  5. Questions posed to students in these courses... Is the technology fair? How do you make sure that the data is not biased? Should machines be judging humans?
  6. TYPES OF METRICS ◦ Individual vs. Group Fairness, or Both ◦ Group Fairness: Data vs. Model ◦ Group Fairness: We're All Equal vs. What You See Is What You Get ◦ Group Fairness: Ratios vs. Differences
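The "Ratios vs. Differences" distinction can be made concrete with two common group fairness metrics, disparate impact (a ratio) and statistical parity difference (a difference). The sketch below uses invented outcome data; the metric names match common usage (e.g. in toolkits such as AI Fairness 360), but the code is illustrative rather than taken from any library:

```python
# "Ratios vs. differences" for group fairness, on toy outcome data.

def favorable_rate(labels):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(labels) / len(labels)

privileged   = [1, 1, 1, 0, 1]   # 80% favorable outcomes
unprivileged = [1, 0, 0, 0, 1]   # 40% favorable outcomes

p_priv = favorable_rate(privileged)
p_unpriv = favorable_rate(unprivileged)

# Ratio form: disparate impact. The "80% rule" flags values below 0.8.
disparate_impact = p_unpriv / p_priv               # 0.5 here

# Difference form: statistical parity difference. 0 means parity.
statistical_parity_difference = p_unpriv - p_priv  # -0.4 here
```

Both metrics compare the same two rates; the choice of ratio or difference changes how the gap is reported and thresholded.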
  7. Machine Learning Pipeline ◦ Pre-Processing: modifying the training data. ◦ In-Processing: modifying the learning algorithm. ◦ Post-Processing: modifying the predictions (or outcomes).
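As one concrete example of the pre-processing stage, a well-known technique is to reweigh training examples so that group membership and label become statistically independent (in the spirit of Kamiran and Calders' reweighing, which AI Fairness 360 also implements). The data and names below are a simplified, illustrative sketch, not a toolkit implementation:

```python
# Pre-processing sketch: reweigh (group, label) pairs so that an
# over-represented combination counts less and an under-represented
# one counts more during training.

from collections import Counter

samples = [  # (group, label) pairs; toy data
    ("priv", 1), ("priv", 1), ("priv", 1), ("priv", 0),
    ("unpriv", 1), ("unpriv", 0), ("unpriv", 0), ("unpriv", 0),
]
n = len(samples)
group_counts = Counter(g for g, _ in samples)
label_counts = Counter(l for _, l in samples)
joint_counts = Counter(samples)

# Weight = expected frequency (if group and label were independent)
# divided by observed frequency, per (group, label) combination.
weights = {
    (g, l): (group_counts[g] / n) * (label_counts[l] / n)
            / (joint_counts[(g, l)] / n)
    for (g, l) in joint_counts
}
# ("priv", 1) is over-represented, so its weight drops below 1;
# ("priv", 0) is under-represented, so its weight rises above 1.
```

The reweighted dataset is then fed to an ordinary learning algorithm, which is what makes this a pre-processing fix rather than an in-processing one.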
  8. Step 1: Find a model ...that does what you need ...that is free to use ...that is performant enough
  9. Step 2: Get the code Is there a good implementation available? ...that does what you need ...that is free to use ...that is performant enough
  10. Step 3: Verify the model ◦ Does it do what you need? ◦ Is it free to use (license)? ◦ Is it performant enough? ◦ Accuracy?
  11. Step 5: Deploy your model ◦ Adjust inference code (or write from scratch) ◦ Package inference code, model code, and pre-trained weights together ◦ Deploy your package
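The packaging bullet can be sketched in miniature: serialize the trained weights next to the inference code so the two ship as one unit. Everything here (the "model", the file name, the function names) is hypothetical, invented only to show the shape of the step:

```python
# Packaging sketch: bundle pre-trained weights with inference code.

import os
import pickle
import tempfile

# Stand-in for trained weights produced earlier in the pipeline.
weights = {"slope": 2.0, "intercept": 1.0}

# Serialize the weights to a file that ships alongside the code.
weights_path = os.path.join(tempfile.gettempdir(), "model_weights.pkl")
with open(weights_path, "wb") as f:
    pickle.dump(weights, f)

def predict(x, path=weights_path):
    """Inference code: load the packaged weights and score one input."""
    with open(path, "rb") as f:
        w = pickle.load(f)
    return w["slope"] * x + w["intercept"]

# The deployable package is this inference module plus model_weights.pkl.
```

In a real deployment the package would typically also pin its framework dependencies and expose `predict` behind a web service, which is the part the Model Asset Exchange slides below automate.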
  12. Model Asset Exchange The Model Asset Exchange (MAX) is a one-stop shop for developers/data scientists to find and use free and open-source deep learning models. ibm.biz/model-exchange
  13. Model Asset Exchange ◦ Wide variety of domains (text, audio, images, etc.) ◦ Multiple deep learning frameworks ◦ Vetted and tested code/IP ◦ Build and deploy a model web service in seconds
  14. IBM's AI Portfolio: everything you need for Enterprise AI, on any cloud. ◦ Build, Deploy, Manage: Watson Studio, Watson Machine Learning, Watson OpenScale ◦ Interact with pre-built AI services: Watson Application Services ◦ Unify on a multicloud data platform: IBM Cloud Private for Data ◦ AI open source frameworks
  15. Need help? IBM's Data Science & AI Elite team. Find them: community.ibm.com/DSE
  16. FAT* 2018: Joy Buolamwini - Intersectional Accuracy Disparities in Commercial Gender Classification https://www.youtube.com/watch?v=Af2VmR-iGkY
  17. Photo by rawpixel on Unsplash. No matter what, it is our responsibility to build systems that are fair.