


I know, the title of this talk is like saying the only way to stop a bad Terminator is to use a good Terminator, but hear me out. Human biases influence the outputs of an AI model. AI amplifies bias, and the resulting socio-technical harms impact fairness, adoption, safety, and well-being. These harms disproportionately affect legally protected classes of individuals and groups in the United States.
It's fitting that this year's theme for International Women's Day was #BreakTheBias, so join Noble as he returns to Strangeloop to expand on the topic of bias and deconstruct, by example, techniques to de-bias datasets for building intelligent systems that are fair and equitable while increasing trust and adoption.

Noble Ackerson

September 23, 2022



  1. Components of responsible use of AI: HCML, Fairness & Bias, XAI,
     Privacy & Data Risk, Security
  2. Quantifying fairness in AI. Quality of service: Alexa not recognizing
     female voices or accents from a group of people. Opportunities withheld:
     job/loan screening systems excluding individuals or groups of
     individuals. Traumatized data: trauma from the collective psychological,
     emotional, and cognitive distress experienced by an unprivileged class.
     Via IBM Research AI Fairness 360, aif360.mybluemix.net
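The "quantifying fairness" framing above can be made concrete with two group-fairness metrics that toolkits such as AIF360 report: statistical parity difference and disparate impact. This is a minimal sketch in plain Python, not the AIF360 API; the loan-screening data is synthetic and purely illustrative.

```python
# Sketch: quantifying group fairness with two common metrics.
# All data below is synthetic and illustrative.

def statistical_parity_difference(outcomes, groups, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged)."""
    priv = [y for y, g in zip(outcomes, groups) if g == privileged]
    unpriv = [y for y, g in zip(outcomes, groups) if g != privileged]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates; < 0.8 fails the four-fifths rule."""
    priv = [y for y, g in zip(outcomes, groups) if g == privileged]
    unpriv = [y for y, g in zip(outcomes, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Toy loan-screening outcomes: 1 = approved, 0 = denied.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

spd = statistical_parity_difference(outcomes, groups, privileged="A")  # -0.6
di = disparate_impact(outcomes, groups, privileged="A")                # 0.25
```

Here group A is approved 80% of the time and group B only 20%, so the disparate impact of 0.25 falls well below the 0.8 four-fifths threshold.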
  3. We propagate our bias in the intelligent solutions we build.
     twitter.com/nobleackerson medium.com/@nobleackerson
  4. Runaway Feedback Loops in Predictive Policing. PredPol algorithms are
     already used by police departments in California, Florida, Maryland, and
     a few other states. Source: Conference on Fairness, Accountability, and
     Transparency
  5. Bias in a machine learning context. Adapted from Cornell University
     arXiv research. Three interacting sources: 01 Data, 02 Algorithm, 03
     User Interaction. Bias types include Behavioral, Presentation, Linking,
     Selection, Historical, Aggregation, Temporal, Social, Popularity,
     Ranking, and Emergent bias. Resource: Catalog of Biases
  6. We're all impacted. Data team, researchers & engineers: increase data
     understanding, create better algorithms, improve performance, produce
     robust models. Policy & compliance: increase trust, bias & transparency,
     compliance and regulation reporting. Consumers/end users: calibrated
     trust, bias & transparency, understand impact, reporting & analyses,
     increase adoption.
  7. Policy & Compliance ❏ Increase trust by being responsible ❏ Explain
     bias for compliance and regulation ❏ Reporting
  8. Gaining regulatory trust increases innovation. Demonstrating
     human-centered stewardship of user data and responsible use of
     production machine learning reduces overregulation.
  9. What is Explainable AI? A black box maps input features to 1.
     Classification, 2. Prediction, 3. Recommendation. An explained model
     produces the same outputs but also answers: 1. Which features dominate?
     2. How do features relate? 3. Why does the model predict this?
     DiffusionBee prompt: “cyberpunk, synthwave, oracle explaining her
     prediction to an empty void”
  10. You can’t debias what you can’t explain. Regulators are increasingly
     requiring model explanations when things go wrong with deployed models.
     SHapley Additive exPlanations (SHAP)
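The SHAP method named above rests on Shapley values: a feature's attribution is its average marginal contribution across all feature orderings. This sketch computes exact Shapley values by brute force for a hypothetical toy model (the `model` function, feature names, and baseline are all made up for illustration); the real `shap` library approximates this efficiently for large models.

```python
from itertools import permutations

def model(x):
    # Hypothetical linear credit-score model (illustrative only).
    return 3.0 * x["income"] - 2.0 * x["debt"] + 0.5 * x["age"]

def shapley_values(model, instance, baseline):
    """Exact Shapley values: average each feature's marginal contribution
    over every ordering in which features are switched on."""
    features = list(instance)
    phi = {f: 0.0 for f in features}
    orders = list(permutations(features))
    for order in orders:
        current = dict(baseline)
        prev = model(current)
        for f in order:
            current[f] = instance[f]  # reveal this feature's true value
            new = model(current)
            phi[f] += new - prev      # marginal contribution in this order
            prev = new
    return {f: v / len(orders) for f, v in phi.items()}

instance = {"income": 2.0, "debt": 1.0, "age": 4.0}
baseline = {"income": 0.0, "debt": 0.0, "age": 0.0}
phi = shapley_values(model, instance, baseline)
# For a linear model the attributions recover each term exactly:
# {"income": 6.0, "debt": -2.0, "age": 2.0}
```

Note the efficiency property: the attributions sum to the gap between the prediction for the instance and for the baseline, which is what makes SHAP useful for the "why did the model predict this?" question on the previous slide.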
  11. Protected classes: race, skin color, biometrics, religion, sexual
     orientation, socioeconomic status, income, country of origin. Questions
     to consider when looking for bias: 1. Does your use case or product
     specifically use protected classes? 2. Does your use case use data
     correlated with protected classes? 3. Could your use case negatively
     impact an individual's economic or other important life opportunities?
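Question 2 above, whether a seemingly neutral feature is correlated with a protected class, can be screened for mechanically. A minimal sketch, assuming a binary protected attribute and a numeric candidate feature (the data and the 0.5 threshold are synthetic and illustrative, not a legal standard):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

protected = [0, 0, 0, 0, 1, 1, 1, 1]   # membership in a protected class
zip_region = [1, 1, 2, 1, 5, 4, 5, 5]  # "neutral" feature acting as a proxy

r = pearson_r(protected, zip_region)
if abs(r) > 0.5:  # illustrative screening threshold
    print(f"warning: feature is a likely proxy for the protected class (r = {r:.2f})")
```

In practice you would run a check like this over every candidate feature, since proxies (zip code, purchase history, names) are the usual way protected-class information leaks into a model that never sees the class directly.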
  12. Policy and compliance ✓ Increase trust by being responsible ✓ Explain
     for compliance and regulation ✓ Reporting twitter.com/nobleackerson
     medium.com/@nobleackerson
  13. ML team, Researchers & Engineers ❏ Increase data & model understanding
     ❏ Create better algorithms ❏ Produce robust models
  14. Model behavior understanding is critical to fighting bias. Debug and
     refine model behavior across the lifecycle: Define Problem → Construct
     and Prepare Data → Build and Train Model → Evaluate → Deploy and
     Monitor. Tools and techniques: What-If analysis (TensorFlow), explaining
     predictions, and pre-processing, in-processing, and post-processing
     debiasing algorithms (GitHub: IBM AIF360).
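Of the three algorithm families named above, pre-processing is the easiest to illustrate. AIF360's reweighing approach assigns each (group, label) cell the weight P(group)·P(label) / P(group, label), so that group and label become statistically independent under the weighted data. A plain-Python sketch with synthetic data, not the AIF360 API itself:

```python
from collections import Counter

def reweighing(groups, labels):
    """Per-(group, label) sample weights that decorrelate group and label:
    w(g, y) = P(g) * P(y) / P(g, y)."""
    n = len(groups)
    pg = Counter(groups)               # counts per group
    py = Counter(labels)               # counts per label
    pgy = Counter(zip(groups, labels)) # counts per (group, label) cell
    return {
        (g, y): (pg[g] / n) * (py[y] / n) / (pgy[(g, y)] / n)
        for (g, y) in pgy
    }

groups = ["A"] * 5 + ["B"] * 5
labels = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]  # A gets 4/5 favorable, B only 1/5

weights = reweighing(groups, labels)
# The under-represented favorable cell (B, 1) is up-weighted (2.5), while the
# over-represented cell (A, 1) is down-weighted (0.625).
```

Training any weight-aware classifier with these sample weights is the whole trick: the model itself is untouched, which is why pre-processing methods slot in before "Build and Train Model" in the lifecycle above.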
  15. Accurate vs Fair. Midjourney.ai prompt: “A face-off between two
     fighters, one fighting for accuracy and the other fighting for
     fairness.”
  16. Data Scientists, ML Engineering, Research ✓ Increase data understanding
     ✓ Create better algorithms ✓ Produce robust, interpretable models
     twitter.com/nobleackerson medium.com/@nobleackerson
  17. Consumers/End Users ❏ Increase adoption through calibrated trust ❏
     Feedback: Context, Choice, and Control ❏ Designing for context to
     understand impact ❏ End-user Reporting & Analysis
  18. Increase adoption through calibrated trust. Not enough trust: the user
     wastes time searching for additional stores in an area. Too much trust:
     the user doesn't double-check before driving halfway across town.
     Calibrated trust: the user keeps away from a store confidently
     identified as busy.
  19. Recommendations when managing end-user feedback: 1. Design in UX
     mechanisms to request feedback. 2. Build in functionality to respond to
     user feedback. 3. Interpret and use both implicit and explicit user
     feedback. DiffusionBee (Stable Diffusion) prompt: “depiction of the when
     things go very wrong with AI”
  20. Consumers and End-Users ✓ Increase adoption through calibrated trust ✓
     Feedback: Context, Choice, and Control ✓ Designing for context to
     understand impact ✓ End-user Reporting & Analysis
     twitter.com/nobleackerson medium.com/@nobleackerson
  21. Fighting bias…with bias is a balance of what was, what is, and what
     should be. twitter.com/nobleackerson medium.com/@nobleackerson