AI Ethics for Software Engineers: Embrace your inner 5-year old

When you were five you were very quick to call out when things weren’t fair. You asked why… a lot. You had to learn to share. You didn’t have preconceived notions of what was or wasn’t possible. Ethics isn’t just for philosophers – it’s something that everyone has a responsibility to think about. In this session we’ll walk through practical examples and advice on how you can start to apply ethical principles to your own AI projects today.

Gillian Armstrong

May 22, 2020

Transcript

  1. Liberty IT
    AI Ethics for Software Engineers:
    Embrace your inner 5-year old
    Gillian Armstrong
    @virtualgill

  2. Liberty IT
    Gillian Armstrong
    @virtualgill
    Solutions Engineer, AWS ML Hero
    Liberty IT, Belfast

  3. Nancy Jason

  4. Gillian Armstrong // @virtualgill
    Ask Why?

  5. Gillian Armstrong // @virtualgill
    Why are you
    building this?

  6. Gillian Armstrong // @virtualgill
    Owning what you are doing:
    Human Impact Statement

  7. Gillian Armstrong // @virtualgill
    First things first,
    what are you creating?

  8. Gillian Armstrong // @virtualgill
    • Business Requirements
    • Business Question
    • Machine Learning Question

  9. Gillian Armstrong // @virtualgill
    • Business Requirements
    • Business Question
    • Machine Learning Question
    Keep in mind that you get nothing
    for free – the machine will only
    answer the question asked

  10. Gillian Armstrong // @virtualgill
    Who is involved?

  11. Gillian Armstrong // @virtualgill
    • Stakeholders
    • Implementers
    • Everyone Impacted (positively
    and negatively)

  12. Gillian Armstrong // @virtualgill
    • Stakeholders
    • Implementers
    • Everyone Impacted (positively
    and negatively)
    Are we getting input from
    all of these people?

  13. Gillian Armstrong // @virtualgill
    Why are you
    creating this?

  14. Gillian Armstrong // @virtualgill
    • What will the owners / implementers gain?
    • What are the risks to them?
    • What will users of the system gain?
    • What are the risks to them?

  15. Gillian Armstrong // @virtualgill
    • What will the owners / implementers gain?
    • What are the risks to them?
    • What will users of the system gain?
    • What are the risks to them?
    Keep these in mind as you
    go along and ensure you
    understand where Business
    Goals might start to be the
    driving factor in any ethical
    compromise

  16. Gillian Armstrong // @virtualgill
    How are you creating it?

  17. Gillian Armstrong // @virtualgill
    • Who is implementing? Why?
    • How will they make decisions
    on which data and algorithms to use?
    • Is it auditable?

  18. Gillian Armstrong // @virtualgill
    • Where will you get the data
    from? Why?
    • Why did it get collected?
    • How did it get collected?

  19. Gillian Armstrong // @virtualgill
    What happens next?

  20. Gillian Armstrong // @virtualgill
    • How will we monitor?
    • What will we do if we find
    issues or unexpected impacts?
    • How often will we update?
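As a rough illustration of what monitoring could look like in practice, here is a minimal sketch (not from the talk) that checks each new batch of decisions against a pre-launch baseline. The column names ("group", "approved"), the baseline figures and the alert threshold are all illustrative assumptions.

```python
# Minimal monitoring sketch: compare per-group approval rates in each new
# batch of decisions against a pre-launch baseline and flag drift.
# Column names, baseline values and threshold are illustrative only.
import pandas as pd

BASELINE_APPROVAL_RATE = {"red": 0.42, "yellow": 0.40}  # assumed pre-launch figures
ALERT_THRESHOLD = 0.05

def check_drift(batch: pd.DataFrame) -> list:
    """Return human-readable alerts for groups whose approval rate has drifted."""
    alerts = []
    for group, rate in batch.groupby("group")["approved"].mean().items():
        baseline = BASELINE_APPROVAL_RATE.get(group)
        if baseline is not None and abs(rate - baseline) > ALERT_THRESHOLD:
            alerts.append(f"{group}: approval rate {rate:.2f} vs baseline {baseline:.2f}")
    return alerts
```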

  21. Examples of Guidelines (lots more out there!)
    • European Commission Ethics guidelines for trustworthy AI
      https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
    • Social Impact Statement for Algorithms
      https://www.fatml.org/resources/principles-for-accountable-algorithms
    • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
      https://standards.ieee.org/industry-connections/ec/autonomous-systems.html
    • Artificial Intelligence Impact Assessment
      https://ecp.nl/wp-content/uploads/2019/01/Artificial-Intelligence-Impact-Assessment-English.pdf
    Gillian Armstrong // @virtualgill

  22. Gillian Armstrong // @virtualgill
    That’s
    not
    fair!

  23. Gillian Armstrong // @virtualgill
    Bias In, Bias Out

  24. Gillian Armstrong // @virtualgill
    Some things that cause Bias in Data:
    • Poor collection techniques (selection bias)
    • Incomplete or Incorrect Data
    • Unbalanced Data that is not representative of the entire population
    • Over-simplifying reality
    • A “Get it Done” attitude / Pressure to succeed
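As a small, hedged illustration of how some of those issues can be surfaced before training, the sketch below assumes a pandas DataFrame with hypothetical "group" and "label" columns; the dataset and column names are not from the talk.

```python
# Sketch: basic representation checks before training.
# The CSV file and column names ("group", "label") are hypothetical.
import pandas as pd

df = pd.read_csv("training_data.csv")

# Is every group represented in roughly the proportion you expect to serve?
print(df["group"].value_counts(normalize=True))

# Do label rates differ sharply between groups (a possible sign of bias upstream)?
print(df.groupby("group")["label"].mean())

# Are there obvious gaps (incomplete data) concentrated in any one group?
print(df.groupby("group").apply(lambda g: g.isna().mean()))
```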

  25. Gillian Armstrong // @virtualgill
    “If you torture the data long enough
    it will confess to anything.”
    - Ronald Coase (Nobel Prize-winning British economist)

  26. Gillian Armstrong // @virtualgill
    How do we
    measure
    Fairness?

  27. Gillian Armstrong // @virtualgill
    How do we even
    define Fairness?

  28. Gillian Armstrong // @virtualgill
    fairness
    impartial and just treatment or behaviour
    without favouritism or discrimination.
    Dictionary Definition

  29. Gillian Armstrong // @virtualgill
    fairness
    equal false negative rates across groups
    Statistical Definition – Group Based
    Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. [Chouldechova] https://arxiv.org/abs/1610.07524
    Inherent Trade-Offs in the Fair Determination of Risk Scores. [Kleinberg, Mullainathan, Raghavan] https://arxiv.org/abs/1609.05807
    Algorithmic Fairness [Kleinberg, Ludwig, Mullainathan, Rambachan] https://www.cs.cornell.edu/home/kleinber/aer18-fairness.pdf
    Equality of Opportunity in Supervised Learning [Hardt, Price, Srebro] https://arxiv.org/abs/1610.02413
    Attacking discrimination with smarter machine learning. http://research.google.com/bigpicture/attacking-discrimination-in-ml/
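To make the group-based definition above concrete, here is a minimal sketch (mine, not the speaker's) of checking false negative rates across two groups; the arrays are toy data.

```python
# Sketch: compare false negative rates across groups (equal FNR is the
# group-based fairness definition on this slide). Arrays are illustrative.
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])   # actual outcomes
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0])   # model predictions
group = np.array(["red", "red", "red", "red",
                  "yellow", "yellow", "yellow", "yellow"])

for g in np.unique(group):
    mask = (group == g) & (y_true == 1)        # actual positives in this group
    fnr = np.mean(y_pred[mask] == 0)           # fraction of positives the model missed
    print(f"{g}: false negative rate = {fnr:.2f}")
```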

  30. Gillian Armstrong // @virtualgill
    Assign 4 prizes randomly!
    Ensure there is
    no discrimination
    based on colour
    (red vs yellow)
    or shape
    (stars vs circles)

  31. Gillian Armstrong // @virtualgill
    Remember models give
    you nothing for free…
    these will specifically need to be
    added as constraints.
    Note that this also means that
    removing the “sensitive” data
    (colour, shape) is very unlikely
    to result in the type of “fair”
    model you want.
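As a hedged illustration of what "adding it as a constraint" can look like, the sketch below uses Fairlearn's reductions API (Fairlearn appears in the tools list later in the deck); the synthetic data and the choice of a demographic parity constraint are illustrative assumptions rather than a recommendation from the talk.

```python
# Sketch: enforce a fairness goal as an explicit training constraint,
# rather than just dropping the sensitive column. Data and the choice of
# DemographicParity are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # features (sensitive column excluded)
A = rng.choice(["red", "yellow"], size=200)    # sensitive attribute, used only as a constraint
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)

mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=A)      # constraint enforced during training
y_pred = mitigator.predict(X)
```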

  32. [Diagram: items laid out in a grid by shape (CIRCLE / STAR) and colour (RED / YELLOW)]
    Gillian Armstrong // @virtualgill

  33. [Diagram: the same grid of items grouped by shape (CIRCLE / STAR) and colour (RED / YELLOW)]
    Gillian Armstrong // @virtualgill

  34. Gillian Armstrong // @virtualgill
    Equal Circles and Stars
    Equal Red
    and Yellow
    FAIR?

  35. Gillian Armstrong // @virtualgill
    fairness
    similar individuals should be treated similarly
    Statistical Definition – Individual Based
    Fairness Through Awareness. [Dwork, Hardt, Pitassi, Reingold, Zemel] https://arxiv.org/abs/1104.3913
    Fairness in Learning: Classic and Contextual Bandits. [Joseph, Kearns, Morgenstern, Roth, 2016] https://arxiv.org/abs/1605.07139
    Individual Fairness in Hindsight. [Gupta, Kamble] https://arxiv.org/abs/1812.04069
    Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. [Kearns, Neel, Roth, Wu] https://arxiv.org/abs/1711.05144
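One naive way to probe the individual-based definition is shown below (again a sketch, not from the talk): flag pairs of individuals who are close in feature space but receive very different scores. The model interface, distance metric and thresholds are all illustrative; the papers above define similarity far more carefully.

```python
# Sketch: "similar individuals should be treated similarly" as a naive check.
# `model` is any scikit-learn-style classifier; the eps values are arbitrary.
import numpy as np

def flag_dissimilar_treatment(model, X, feature_eps=0.1, score_eps=0.2):
    """Return index pairs that are near-identical in features but scored very differently."""
    scores = model.predict_proba(X)[:, 1]
    flagged = []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            close = np.linalg.norm(X[i] - X[j]) < feature_eps
            treated_differently = abs(scores[i] - scores[j]) > score_eps
            if close and treated_differently:
                flagged.append((i, j))
    return flagged
```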

  36. Gillian Armstrong // @virtualgill
    Fairness in AI is always
    (unfortunately) going to be
    about trade-offs

  37. Gillian Armstrong // @virtualgill
    “…the data may themselves be
    accurate but the disparities they
    reflect may themselves be caused
    by prior injustice.”
    - Deborah Hellman, Discrimination Law Expert

  38. Gillian Armstrong // @virtualgill
    Note that Bias encoded in the model can be:
    - Reflective of previous bias
      e.g. https://qz.com/1427621/companies-are-on-the-hook-if-their-hiring-algorithms-are-biased/
    - Exacerbating future bias
      e.g. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/639261/bame-disproportionality-in-the-cjs.pdf

  39. Gillian Armstrong // @virtualgill
    “Definitions of fairness, privacy,
    transparency, interpretability, and
    morality should remain firmly in the
    human domain.”
    - Aaron Roth, The Ethical Algorithm

  40. Gillian Armstrong // @virtualgill
    Remember
    to Share

  41. Gillian Armstrong // @virtualgill
    Transparency and
    Explainability

  42. Examples of Tools Available
    • AI Explainability 360 https://github.com/IBM/AIX360
    • What-If Tool https://pair-code.github.io/what-if-tool/
    • SHAP https://github.com/slundberg/shap
    • Skater https://github.com/oracle/Skater
    • Interpret https://github.com/interpretml/interpret
    • Fairlearn https://github.com/fairlearn/fairlearn
    • Lime https://github.com/marcotcr/lime
    • Facets https://github.com/pair-code/facets
    Gillian Armstrong // @virtualgill
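To give a flavour of the tools above in use, here is a minimal SHAP sketch (SHAP is on the list); the dataset and model are illustrative, and a real project would choose the explainer appropriate to its model type.

```python
# Sketch: per-feature explanations for individual predictions using SHAP.
# Dataset and model are illustrative.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)                 # explainer for tree ensembles
shap_values = explainer.shap_values(data.data[:5])    # contributions for 5 examples
print(shap_values)
```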

  43. Gillian Armstrong // @virtualgill
    Accountability

  44. Gillian Armstrong // @virtualgill
    Headline Test:
    Would you be ok with your project
    ending up on the front page of a
    newspaper?

  45. Gillian Armstrong // @virtualgill
    Positive Intent
    is not enough
    Impact is what matters

  46. Gillian Armstrong // @virtualgill
    Move Fast and Break Things
    isn’t always ok….
    Sometimes we need to
    Move Slow and Make Things
    Better

  47. Gillian Armstrong // @virtualgill
    Have
    unlimited
    imagination

  48. Gillian Armstrong // @virtualgill
    “That’s just the way
    things are”
    - Someone who hasn’t listened to this talk

  49. Gillian Armstrong // @virtualgill
    If you are working in AI and Software Development,
    you are already a natural Problem Solver
    Keep an Open Mind,
    Find Solutions, Innovate

  50. Gillian Armstrong // @virtualgill
    Choose
    Joy

  51. AI Ethics is not about…
    Gillian Armstrong // @virtualgill
    Deciding how many
    people you are going to
    kill with your trolley
    Blaming Technology for
    all the evil in the world
    Bashing Developers
    Making you feel guilty

  52. AI Ethics is about…
    Gillian Armstrong // @virtualgill
    Creating a world that
    is better for everyone

  53. Gillian Armstrong // @virtualgill
    Thank
    you! Get in touch!
    @virtualgill

  54. @Liberty Information Technology
    @Liberty IT
    @Liberty_IT
