
FIGHT AI BIAS WITH… BIAS


I know, the title of this talk is like saying the only way to stop a bad Terminator is to use a good Terminator, but hear me out. Human biases influence the outputs of AI models. AI amplifies bias, and the resulting sociotechnical harms impact fairness, adoption, safety, and well-being. These harms disproportionately affect legally protected classes of individuals and groups in the United States.
It's fitting that this year's theme for International Women's Day was #BreakTheBias, so join Noble as he returns to Strange Loop to expand on the topic of bias and deconstruct, by example, techniques for de-biasing datasets, building intelligent systems that are fair and equitable while increasing trust and adoption.

Noble Ackerson

September 23, 2022

Transcript

  1. Fighting bias…with bias
    Noble Ackerson
    Dir. of Product (AI/ML) @ Ventera Corporation


  2. (image-only slide)

  3. Components of Responsible Use of AI
    Human-Centered Machine Learning (HCML)
    Fairness & Bias
    Explainable AI (XAI)
    Privacy & Data Risk
    Security


  4. If AI is to be ambient,
    it better be fair
    Fairness in AI


  5. Quantifying Fairness in AI (via IBM Research AI Fairness 360, aif360.mybluemix.net)
    Quality of service: Alexa not recognizing female voices or accents from a group of people
    Opportunities withheld: job or loan screening systems excluding individuals or groups of individuals
    Traumatized data: trauma from collective psychological, emotional, and cognitive distress experienced by an unprivileged class


  6. We propagate our
    bias in the intelligent
    solutions we build
    twitter.com/nobleackerson medium.com/@nobleackerson


  7. Photo by Noble Ackerson


  8. Credit: Twitter | Starbucks


  9. Credit: New York Times


  10. Photo by Steve Helber | Credit: AP


  11. Runaway Feedback Loops in Predictive Policing
    PredPol algorithms are already used by
    police departments in California, Florida,
    Maryland, and a few other states.
    Source: Conference on Fairness, Accountability, and Transparency


  12. Bias in a Machine Learning Context
    Adapted from Cornell University arXiv research
    Sources of bias: 01 Data, 02 Algorithm, 03 User Interaction
    Bias types include: behavioral, presentation, linking, selection, historical, aggregation, temporal, social, popularity, ranking, and emergent bias
    Resource: Catalog of Biases


  13. We're all impacted
    Data team, researchers & engineers: increase data understanding, create better algorithms, improve performance, produce robust models
    Policy & compliance: increase trust, bias & transparency, compliance and regulation, reporting
    Consumers/end users: calibrated trust, bias & transparency, understand impact, reporting & analyses, increase adoption


  14. Policy & Compliance
    ❏ Increase trust by being responsible
    ❏ Explain bias for compliance and regulation
    ❏ Reporting


  15. Gaining regulatory trust
    increases innovation
    Demonstrating human-centered stewardship of user data and responsible use of production machine learning reduces the risk of overregulation.


  16. Example: Amazon same-day delivery coverage
    Source: https://www.bloomberg.com/graphics/2016-amazon-same-day/


  17. Roxbury, Boston: Redlining


  18. What is Explainable AI?
    Black box: input features → 1. classification, 2. prediction, 3. recommendation
    Explained model: input features → 1. classification, 2. prediction, 3. recommendation, plus:
    1. Which features dominate?
    2. How do the features relate?
    3. Why does the model predict this?
    DiffusionBee prompt: “cyberpunk, synthwave, oracle explaining her prediction to an empty void”


  19. You can’t debias what you can’t explain
    Regulators are increasingly requiring model explanations when things go wrong with deployed learned models.
    SHapley Additive exPlanations (SHAP)
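A minimal sketch of how SHAP answers those questions in practice, using the shap package with a scikit-learn model. The bundled adult-census demo data stands in for a real screening dataset; this example is illustrative, not taken from the talk:

```python
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Demo data bundled with the shap package (adult census income).
X, y = shap.datasets.adult()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shapley values attribute each prediction to individual features:
# how much each feature pushed this prediction away from the base rate.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: which features dominate the model's decisions overall.
shap.summary_plot(shap_values, X_test)
```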


  20. Questions to consider when looking for bias
    Race, skin color, biometrics, religion, sexual orientation, socioeconomic status, income, country of origin
    1. Does your use case or product specifically use protected classes?
    2. Does your use case use data correlated with protected classes? (a quick proxy check is sketched below)
    3. Could your use case negatively impact an individual's economic or other important life opportunities?
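To illustrate question 2, one rough way to flag features that may proxy for a protected class is to correlate each candidate feature against it. This is a hypothetical sketch (the file and column names are stand-ins), not a method from the talk:

```python
import pandas as pd

# Hypothetical dataset: "sex" is the protected attribute, every other
# numeric column is a candidate model feature.
df = pd.read_csv("applicants.csv")  # hypothetical file
protected = df["sex"].astype("category").cat.codes

# Features that track the protected attribute closely can act as
# proxies for it even when the attribute itself is dropped.
candidates = df.drop(columns=["sex"]).select_dtypes("number")
proxy_scores = candidates.corrwith(protected).abs().sort_values(ascending=False)
print(proxy_scores.head(10))
```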


  21. Policy and compliance
    ✓ Increase trust by being responsible
    ✓ Explain bias for compliance and regulation
    ✓ Reporting
    twitter.com/nobleackerson medium.com/@nobleackerson


  22. ML team, Researchers & Engineers
    ❏ Increase data & model understanding
    ❏ Create better algorithms
    ❏ Produce robust models


  23. Model behavior understanding is critical to fighting bias
    ML lifecycle: define problem → construct and prepare data → build and train model → evaluate → deploy and monitor
    Debug and refine model behavior: what-if analysis (TF), explain predictions
    Bias mitigation algorithms apply at three points: pre-processing, in-processing, and post-processing
    GitHub: IBM AIF360
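For concreteness, AIF360 ships algorithms for each of those intervention points; these are real class names from the aif360 Python package, one representative per stage:

```python
# One representative AIF360 algorithm per intervention point.
from aif360.algorithms.preprocessing import Reweighing             # pre-process: reweight the training data
from aif360.algorithms.inprocessing import AdversarialDebiasing    # in-process: constrain the model during training
from aif360.algorithms.postprocessing import EqOddsPostprocessing  # post-process: adjust predicted labels
```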


  24. Configure and Load dataset
    Set bias detection options, load dataset, and split between train and test
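A minimal sketch of this step with AIF360, following the pattern of its introductory tutorial. The German credit dataset and age as the protected attribute are illustrative choices, not necessarily the ones used in the talk:

```python
from aif360.datasets import GermanDataset

# Treat age as the protected attribute: >= 25 is the privileged group.
dataset = GermanDataset(
    protected_attribute_names=['age'],
    privileged_classes=[lambda x: x >= 25],
    features_to_drop=['personal_status', 'sex'],  # keep the example to one attribute
)
privileged_groups = [{'age': 1}]
unprivileged_groups = [{'age': 0}]

# 70/30 split between training and test partitions.
dataset_train, dataset_test = dataset.split([0.7], shuffle=True)
```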


  25. Test Fairness Metric
    Compute fairness metric on original training dataset
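Continuing the sketch: AIF360's mean_difference is the statistical parity difference, P(favorable | unprivileged) minus P(favorable | privileged), so 0 means parity:

```python
from aif360.metrics import BinaryLabelDatasetMetric

# Fairness metric on the original (untransformed) training data.
metric_train = BinaryLabelDatasetMetric(
    dataset_train,
    unprivileged_groups=unprivileged_groups,
    privileged_groups=privileged_groups,
)
# Negative values mean the unprivileged group receives the favorable
# outcome less often than the privileged group.
print("Mean difference (original):", metric_train.mean_difference())
```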


  26. Fix what's wrong
    Mitigate bias by transforming the original dataset
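One pre-processing choice for this step is AIF360's Reweighing, the algorithm used in AIF360's own tutorial (the talk may have shown a different one). It reweights training examples so the favorable outcome rate is equalized across groups before any model is trained:

```python
from aif360.algorithms.preprocessing import Reweighing

# Learn per-(group, label) weights on the training data and apply them;
# features and labels are unchanged, only instance weights differ.
rw = Reweighing(
    unprivileged_groups=unprivileged_groups,
    privileged_groups=privileged_groups,
)
dataset_train_transf = rw.fit_transform(dataset_train)
```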


  27. Test Fairness Metric
    Compute fairness metric on transformed training dataset
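Re-running the same metric from the earlier sketch on the transformed data shows whether the mitigation moved the needle; after Reweighing the mean difference should sit at or near 0:

```python
# Same fairness metric, now on the reweighed training data.
metric_transf = BinaryLabelDatasetMetric(
    dataset_train_transf,
    unprivileged_groups=unprivileged_groups,
    privileged_groups=privileged_groups,
)
print("Mean difference (transformed):", metric_transf.mean_difference())
```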


  28. Accurate vs. Fair
    Midjourney.ai prompt: “A face-off between two fighters, one fighting on accuracy and the other fighting for fairness.”


  29. Data Scientists, ML engineering, Research
    ✓ Increase data understanding
    ✓ Create better algorithms
    ✓ Produce robust, interpretable models
    twitter.com/nobleackerson medium.com/@nobleackerson


  30. Consumers/End Users
    ❏ Increase adoption through calibrated trust
    ❏ Feedback: Context, Choice, and Control
    ❏ Designing for context to understand impact
    ❏ End-user Reporting & Analysis


  31. Increase adoption through calibrated trust
    Not enough trust: the user wastes time searching for additional stores in an area
    Too much trust: the user doesn't double-check before driving halfway across town
    Calibrated trust: the user keeps away from a store confidently identified as busy


  32. Recommendations when managing end-user feedback
    1. Design in UX mechanisms to request feedback.
    2. Build in functionality to respond to user feedback.
    3. Interpret and use both implicit and explicit user feedback (a sketch follows below).
    DiffusionBee (Stable Diffusion) prompt: “depiction of the when things go very wrong with AI”
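As a rough illustration of point 3, a record type that captures an explicit signal (a rating) next to an implicit one (whether the user acted on the prediction). Every name here is hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class PredictionFeedback:
    """Hypothetical record pairing explicit and implicit feedback."""
    prediction_id: str
    explicit_rating: Optional[int]  # thumbs up/down as +1 / -1; None if not given
    acted_on: bool                  # implicit: did the user follow the recommendation?
    received_at: datetime

feedback_log: list[PredictionFeedback] = []

def record_feedback(fb: PredictionFeedback) -> None:
    """Store feedback so the team can audit model behavior and retrain."""
    feedback_log.append(fb)

# Example: the user rejected a "store is busy" prediction and ignored it.
record_feedback(PredictionFeedback(
    prediction_id="store-busy-1",
    explicit_rating=-1,
    acted_on=False,
    received_at=datetime.now(timezone.utc),
))
```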


  33. Consumers and End-Users
    ✓ Increase adoption through calibrated trust
    ✓ Feedback: Context, Choice, and Control
    ✓ Designing for context to understand impact
    ✓ End-user Reporting & Analysis
    twitter.com/nobleackerson
    medium.com/@nobleackerson


  34. Takeaways: debiasing techniques benefit non-protected classes


  35. Takeaways: some packages lack adequate support


  36. Fighting bias…with bias is a balance of
    what was,
    what is,
    and
    what should be
    twitter.com/nobleackerson medium.com/@nobleackerson
