
Skills Matter - Digital Discrimination: Cognitive Bias in Machine Learning

Maureen McElaney

June 04, 2020

Transcript

  1. Skills Matter Digital Discrimination: Cognitive Bias in Machine Learning. Maureen McElaney, Developer Advocate, Center for Open Source Data and AI Technologies (CODAIT). June 4, 2020
  2. Agenda • Examples of Bias in Machine Learning. • Solutions to combat unwanted bias. • Tools to combat unwanted bias. • Resources and how to get involved.
  3. A cognitive bias is a systematic pattern of deviation from

    norm or rationality in judgment. People make decisions given their limited resources. Wilke A. and Mata R. (2012) “Cognitive Bias”, Clarkson University @ibmcodait
  4. @ibmcodait BLACK VS. WHITE DEFENDANTS ◦ The system falsely labeled black defendants as likely to commit future crimes at twice the rate of white defendants. ◦ White defendants were mislabeled as low risk more often than black defendants. ◦ Black defendants were pegged as 77% more likely to commit a future violent crime.
  5. “If we fail to make ethical and inclusive artificial intelligence we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.” - Joy Buolamwini @jovialjoy
  6. Agenda • Examples of Bias in Machine Learning. • Solutions to combat unwanted bias. • Tools to combat unwanted bias. • Resources and how to get involved.
  7. Questions posed to students in these courses... Is the technology

    fair? How do you make sure that the data is not biased? Should machines be judging humans? @ibmcodait
  8. “By combining the latest in machine learning and inclusive product development, we're able to directly respond to Pinner feedback and build a more useful product.” - Candice Morgan @Candice_MMorgan @ibmcodait
  9. EU Ethics Guidelines for Trustworthy Artificial Intelligence According to the

    Guidelines, trustworthy AI should be: (1) lawful - respecting all applicable laws and regulations (2) ethical - respecting ethical principles and values (3) robust - both from a technical perspective while taking into account its social environment Source: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  10. #1 - Human agency and oversight. #2 - Technical robustness and safety. #3 - Privacy and data governance. #4 - Transparency. #5 - Diversity, non-discrimination and fairness. #6 - Societal and environmental well-being. #7 - Accountability. @Mo_Mack
  11. Agenda • Examples of Bias in Machine Learning. • Solutions to combat unwanted bias. • Tools to combat unwanted bias. • Resources and how to get involved.
  12. Trusted AI Lifecycle through Open Source. Pillars of trust, woven into the lifecycle of an AI application: FAIRNESS (Is it fair?), EXPLAINABILITY (Is it easy to understand?), ROBUSTNESS (Did anyone tamper with it?), LINEAGE (Is it accountable?). In the works! Adversarial Robustness 360 (ART360): github.com/IBM/adversarial-robustness-toolbox, art-demo.mybluemix.net. AI Fairness 360 (AIF360): github.com/IBM/AIF360, aif360.mybluemix.net. AI Explainability 360 (AIX360): github.com/IBM/AIX360, aix360.mybluemix.net.
  13. @ibmcodait Machine Learning Pipeline: Pre-processing (modifying the training data), In-processing (modifying the learning algorithm), and Post-processing (modifying the predictions or outcomes).
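    As a sketch of the pre-processing stage, AIF360's Reweighing transformer reweights training examples so group/label combinations are balanced before any model is trained. It continues the illustrative `dataset` and group definitions from the sketch above.

        # Pre-processing bias mitigation: reweight the training data.
        from aif360.algorithms.preprocessing import Reweighing

        rw = Reweighing(unprivileged_groups=[{'sex': 0}],
                        privileged_groups=[{'sex': 1}])
        dataset_transf = rw.fit_transform(dataset)

        # The transformed dataset carries per-instance weights; pass them
        # to any weight-aware learner (e.g. sklearn's sample_weight).
        print(dataset_transf.instance_weights[:10])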
  14. AI Explainability 360 (AIX360) is an open-source library to help explain AI and machine learning models and their predictions. It includes three classes of algorithms: local post-hoc, global post-hoc, and directly interpretable explainers for models that use image, text, and structured/tabular data. The AIX360 Python package includes a comprehensive set of explainers at both the global and local level. https://github.com/IBM/AIX360 http://aix360.mybluemix.net
  15. Tackling different ways to explain. Selected 2018 explainability innovations from IBM Research: GLOBAL, POST-HOC: Improving Simple Models with Confidence Profiles (NeurIPS 2018). LOCAL, POST-HOC: Explanations Based on the Missing: Towards Contrastive Explanations with Pertinent Negatives (NeurIPS 2018). GLOBAL, DIRECTLY INTERPRETABLE: Boolean Decision Rules via Column Generation (NeurIPS 2018); Variational Inference of Disentangled Latent Concepts from Unlabeled Observations (ICLR 2018). INTERACTIVE MODEL VISUALIZATION: Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models (IEEE VAST 2018). LOCAL, DIRECTLY INTERPRETABLE: TED: Teaching AI to Explain its Decisions (AIES 2019).
  16. Three dimensions of explainability. One explanation does not fit all: there are many ways to explain things. Directly interpretable: the oldest AI formats, such as decision rule sets, decision trees, and decision tables, are simple enough for people to understand; supervised learning of these models is directly interpretable. vs. Post-hoc interpretation: start with a black-box model and probe into it with a companion model to create interpretations; the black-box model continues to provide the actual prediction while the interpretations improve human interaction. Global (model-level): show the entire predictive model to the user to help them understand it (e.g. a small decision tree, whether obtained directly or in a post-hoc manner). vs. Local (instance-level): show only the explanations associated with individual predictions (i.e. what was it about this particular person's features that led to her loan being denied). Static: the interpretation is simply presented to the user. vs. Interactive (visual analytics): the user can interact with the interpretation.
  17. Supported explainability algorithms. Data explanation: ProtoDash (Gurumoorthy et al., 2019); Disentangled Inferred Prior VAE (Kumar et al., 2018). Local post-hoc explanation: ProtoDash (Gurumoorthy et al., 2019); Contrastive Explanations Method (Dhurandhar et al., 2018); Contrastive Explanations Method with Monotonic Attribute Functions (Luss et al., 2019); LIME (Ribeiro et al., 2016, GitHub); SHAP (Lundberg et al., 2017, GitHub). Local direct explanation: Teaching AI to Explain its Decisions (Hind et al., 2019). Global direct explanation: Boolean Decision Rules via Column Generation (Light Edition) (Dash et al., 2018); Generalized Linear Rule Models (Wei et al., 2019). Global post-hoc explanation: ProfWeight (Dhurandhar et al., 2018). Supported explainability metrics: Faithfulness (Alvarez-Melis and Jaakkola, 2018); Monotonicity (Luss et al., 2019).
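    As a minimal local post-hoc sketch, here is LIME (one of the supported explainers above) applied via the upstream lime package; the dataset and model are stand-ins, not from the talk.

        # Local post-hoc explanation of one prediction with LIME.
        from lime.lime_tabular import LimeTabularExplainer
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier

        data = load_breast_cancer()
        model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

        explainer = LimeTabularExplainer(
            data.data,
            feature_names=list(data.feature_names),
            class_names=list(data.target_names),
            mode='classification')

        # Which features pushed this one prediction, and in which direction?
        exp = explainer.explain_instance(data.data[0], model.predict_proba,
                                         num_features=5)
        print(exp.as_list())  # top-5 local feature contributions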
  18. Explainability: explain how AI arrived at a prediction. • Uses contrastive techniques to explain model behavior in the vicinity of the target data point. • Identifies the feature weighting of the most and least important features. • Displays factors that influence a prediction in simple terms. • Explains the prediction in terms of the top-K features that played a key role, e.g., the loan was rejected because: (1) Credit score = average, (2) Loan amount > $2M, and (3) Area = Downtown.
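    The contrastive method the slide describes (CEM) ships with AIX360; as a lighter, hedged stand-in for ranking the top-K features behind a single prediction, here is a SHAP sketch (SHAP is also among the supported explainers), again on placeholder data and model.

        # Rank the top-K features behind one prediction with SHAP.
        import numpy as np
        import shap
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import GradientBoostingClassifier

        data = load_breast_cancer()
        model = GradientBoostingClassifier(random_state=0).fit(data.data,
                                                               data.target)

        # TreeExplainer returns per-feature contributions in log-odds
        # space for this single-output model.
        contrib = shap.TreeExplainer(model).shap_values(data.data[:1])[0]

        # Top-5 features by absolute contribution, signed.
        for i in np.argsort(np.abs(contrib))[::-1][:5]:
            print(f"{data.feature_names[i]}: {contrib[i]:+.3f}")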
  19. [Chart: a contrastive explanation for a loan prediction of “Partially Granted”, plotting the input data point against its pertinent positive (PP) and pertinent negative (PN) over number of married years and salary bands, from most to least frequent.] Contrastive explanation: • PP: If number of married years had been 7 and salary in the range $190-210K, the outcome would have changed to Loan = Approved. • PN: Even if number of married years had been 3 and salary in the range $110-130K, the outcome would still have been Loan = Partially Granted.
  20. @ibmcodait Step 1: Find a model ...that does what you

    need ...that is free to use ...that is performant enough
  21. @ibmcodait Step 2: Get the code Is there a good

    implementation available? ...that does what you need ...that is free to use ...that is performant enough
  22. @ibmcodait Step 3: Verify the model ◦ Does it do

    what you need? ◦ Is it free to use (license)? ◦ Is it performant enough? ◦ Accuracy?
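    A hedged sketch of Step 3: before adopting a pre-trained model, evaluate it on labeled holdout data you trust and time its inference. The model here is a toy stand-in for whatever candidate you found; the license check still has to be done by reading the license itself.

        # Verify a candidate model: accuracy and speed on your own holdout.
        import time
        from sklearn.datasets import load_digits
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        X_tr, X_te, y_tr, y_te = train_test_split(
            *load_digits(return_X_y=True), test_size=0.3, random_state=0)
        candidate = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # stand-in

        # Does it do what you need, accurately enough?
        print('accuracy:', accuracy_score(y_te, candidate.predict(X_te)))

        # Is it performant enough for your use case?
        start = time.perf_counter()
        candidate.predict(X_te)
        print('ms per batch:', 1000 * (time.perf_counter() - start))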
  23. @ibmcodait Step 5: Deploy your model ◦ Adjust inference code

    (or write from scratch) ◦ Package inference code, model code, and pre-trained weights together ◦ Deploy your package
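    A minimal sketch of the packaging step, assuming a scikit-learn model saved with joblib: a Flask wrapper that bundles the inference code and pre-trained weights behind one endpoint. The route mirrors the /model/predict convention MAX models use; file names and payload shape are placeholders.

        # Package inference code + weights behind a single REST endpoint.
        from flask import Flask, jsonify, request
        import joblib  # assumes weights were saved with joblib.dump

        app = Flask(__name__)
        model = joblib.load('model.joblib')  # pre-trained weights, shipped alongside

        @app.route('/model/predict', methods=['POST'])
        def predict():
            features = request.get_json()['features']  # e.g. [[5.1, 3.5, 1.4, 0.2]]
            return jsonify({'predictions': model.predict(features).tolist()})

        if __name__ == '__main__':
            app.run(host='0.0.0.0', port=5000)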
  24. @ibmcodait Model Asset Exchange. The Model Asset Exchange (MAX) is a one-stop shop for developers and data scientists to find and use free and open-source deep learning models. ibm.biz/model-exchange
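    MAX models run as Docker containers that expose a REST API. Here is a hedged client-side sketch assuming the MAX Object Detector image and its /model/predict endpoint; the image name, form field, and response keys should be checked against the model's page on MAX.

        # Start the model server first (illustrative image name):
        #   docker run -it -p 5000:5000 codait/max-object-detector
        import requests

        with open('example.jpg', 'rb') as f:
            resp = requests.post(
                'http://localhost:5000/model/predict',
                files={'image': ('example.jpg', f, 'image/jpeg')})

        # Response keys assumed from the MAX Object Detector docs.
        for pred in resp.json().get('predictions', []):
            print(pred.get('label'), pred.get('probability'))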
  25. Agenda • Examples of Bias in Machine Learning. • Solutions to combat unwanted bias. • Tools to combat unwanted bias. • Resources and how to get involved.
  26. Photo by rawpixel on Unsplash. No matter what, it is our responsibility to build systems that are fair.
  27. Thank you! Resources from this talk:
    My team's work: codait.org, twitter.com/ibmcodait, http://ibm.biz/codait-trusted-ai
    Criminal Recidivism Scoring: http://www.equivant.com/solutions/inmate-classification, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
    Gender Shades/Algorithmic Justice League: http://gendershades.org/, http://www.aies-conference.com/wp-content/uploads/2019/01/AIES-19_paper_223.pdf, https://www.youtube.com/watch?v=Af2VmR-iGkY, https://www.ajlunited.org/fight
    Anil Dash on the Biases of Tech on the Ezra Klein Podcast: https://www.vox.com/ezra-klein-show-podcast, https://www.youtube.com/watch?v=-lupS5SkSk0
    Ethics in Computer Science: https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html, https://twitter.com/Neurosarda/status/1084198368526680064
    Pinterest Skin Tone Search: https://www.engadget.com/2019/01/24/pinterest-skin-tone-search-diversity/
    Industry/Government Definitions of Trustworthy AI: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai, https://wiki.lfai.foundation/display/DL/Trusted+AI+Committee
    Any questions for us? @ibmcodait Learn more: codait.org