
AI Ethics for Software Engineers: Embrace your inner 5-year-old

When you were five you were very quick to call out when things weren’t fair. You asked why… a lot. You had to learn to share. You didn’t have preconceived notions of what was or wasn’t possible. Ethics isn’t just for philosophers – it’s something that everyone has a responsibility to think about. In this session we’ll walk through practical examples and advice on how you can start to apply ethical principles to your own AI projects today.

Gillian Armstrong

May 22, 2020

Transcript

  1. Liberty IT
     AI Ethics for Software Engineers: Embrace your inner 5-year-old
     Gillian Armstrong @virtualgill
  2. Gillian Armstrong // @virtualgill
     • Business Requirements
     • Business Question
     • Machine Learning Question
     Keep in mind that you get nothing for free – the machine will only answer the question asked.
  3. Gillian Armstrong // @virtualgill
     • Stakeholders
     • Implementers
     • Everyone Impacted (positively and negatively)
     Are we getting input from all of these people?
  4. Gillian Armstrong // @virtualgill
     • What will the owners / implementers gain?
     • What are the risks to them?
     • What will users of the system gain?
     • What are the risks to them?
  5. Gillian Armstrong // @virtualgill
     • What will the owners / implementers gain?
     • What are the risks to them?
     • What will users of the system gain?
     • What are the risks to them?
     Keep these in mind as you go along and ensure you understand where Business Goals might start to be the driving factor in any ethical compromise.
  6. Gillian Armstrong // @virtualgill
     • Who is implementing? Why?
     • How will they make decisions on which data or algorithms to use?
     • Is it auditable?
  7. Gillian Armstrong // @virtualgill
     • Where will you get the data from? Why?
     • Why did it get collected?
     • How did it get collected?
  8. Gillian Armstrong // @virtualgill
     • How will we monitor?
     • What will we do if we find issues or unexpected impacts?
     • How often will we update?
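As a concrete starting point for the monitoring question, here is a minimal sketch of one possible approach: recompute an error rate per group on recent traffic and flag any group that has drifted from the rate measured at sign-off. The record format, group labels and 5% threshold are illustrative assumptions, not something from the talk.

    # Minimal monitoring sketch: recompute a per-group error rate on recent
    # predictions and flag any group that drifts past an agreed threshold.
    from collections import defaultdict

    def group_error_rates(records):
        """records: iterable of (group, predicted_label, actual_label) tuples."""
        errors, totals = defaultdict(int), defaultdict(int)
        for group, predicted, actual in records:
            totals[group] += 1
            if predicted != actual:
                errors[group] += 1
        return {g: errors[g] / totals[g] for g in totals}

    def check_for_drift(recent_records, baseline_rates, threshold=0.05):
        """Return groups whose error rate has moved more than `threshold`
        away from the rate recorded when the model was approved."""
        current = group_error_rates(recent_records)
        return {g: (baseline_rates.get(g), rate)
                for g, rate in current.items()
                if abs(rate - baseline_rates.get(g, rate)) > threshold}

Whatever the exact metric, the point is to agree the threshold and the response before launch, not after the first incident.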
  9. Examples of Guidelines (lots more out there!)
     • European Commission Ethics guidelines for trustworthy AI https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
     • Social Impact Statement for Algorithms https://www.fatml.org/resources/principles-for-accountable-algorithms
     • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems https://standards.ieee.org/industry-connections/ec/autonomous-systems.html
     • Artificial Intelligence Impact Assessment https://ecp.nl/wp-content/uploads/2019/01/Artificial-Intelligence-Impact-Assessment-English.pdf
     Gillian Armstrong // @virtualgill
  10. Gillian Armstrong // @virtualgill
     Some things that cause Bias in Data:
     • Poor collection techniques (selection bias)
     • Incomplete or Incorrect Data
     • Unbalanced Data that is not representative of the entire population
     • Over-simplifying reality
     • A “Get it Done” attitude / Pressure to succeed
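For the unbalanced-data point in particular, a simple first check is to compare group shares in the training sample against whatever population figures you trust. A rough sketch, where the population shares and the 10% tolerance are made-up placeholders:

    # Sketch of a representativeness check: compare the share of each group in
    # the training data against known population shares.
    from collections import Counter

    def representation_gaps(sample_groups, population_shares, tolerance=0.10):
        """sample_groups: list of group labels, one per training example.
        population_shares: dict of group -> expected share (summing to 1.0).
        Returns groups whose share in the sample differs from the population
        share by more than `tolerance` (relative)."""
        counts = Counter(sample_groups)
        total = sum(counts.values())
        gaps = {}
        for group, expected in population_shares.items():
            observed = counts.get(group, 0) / total
            if expected and abs(observed - expected) / expected > tolerance:
                gaps[group] = (expected, observed)
        return gaps

    # Example: a sample that over-represents group "A".
    print(representation_gaps(["A"] * 80 + ["B"] * 20, {"A": 0.5, "B": 0.5}))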
  11. Gillian Armstrong // @virtualgill
     “If you torture the data long enough it will confess to anything.”
     - Ronald Coase (Nobel Prize-winning British economist)
  12. Gillian Armstrong // @virtualgill
     fairness: impartial and just treatment or behaviour without favouritism or discrimination.
     (Dictionary Definition)
  13. Gillian Armstrong // @virtualgill
     fairness: equal false negative rates across groups
     (Statistical Definition – Group Based)
     • Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. [Chouldechova] https://arxiv.org/abs/1610.07524
     • Inherent Trade-Offs in the Fair Determination of Risk Scores. [Kleinberg, Mullainathan, Raghavan] https://arxiv.org/abs/1609.05807
     • Algorithmic Fairness. [Kleinberg, Ludwig, Mullainathan, Rambachan] https://www.cs.cornell.edu/home/kleinber/aer18-fairness.pdf
     • Equality of Opportunity in Supervised Learning. [Hardt, Price, Srebro] https://arxiv.org/abs/1610.02413
     • Attacking discrimination with smarter machine learning. http://research.google.com/bigpicture/attacking-discrimination-in-ml/
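To make the group-based definition concrete, here is a small sketch that computes the false negative rate for each group so the rates can be compared side by side. The labels and groups below are invented purely for illustration.

    # False negative rate per group: of the truly positive cases in each group,
    # what fraction did the model wrongly predict as negative?
    # The equal-false-negative-rate definition asks these numbers to match.
    def false_negative_rates(y_true, y_pred, groups):
        rates = {}
        for group in set(groups):
            positives = [(t, p) for t, p, g in zip(y_true, y_pred, groups)
                         if g == group and t == 1]
            if positives:
                false_negatives = sum(1 for t, p in positives if p == 0)
                rates[group] = false_negatives / len(positives)
        return rates

    # Invented example: the model misses far more positives in group "yellow".
    y_true = [1, 1, 1, 1, 0, 1, 1, 1, 1, 0]
    y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
    groups = ["red", "red", "red", "red", "red",
              "yellow", "yellow", "yellow", "yellow", "yellow"]
    print(false_negative_rates(y_true, y_pred, groups))  # {'red': 0.25, 'yellow': 0.75}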
  14. Gillian Armstrong // @virtualgill
     Assign 4 prizes randomly! Ensure there is no discrimination based on colour (red vs yellow) or shape (stars vs circles).
  15. Gillian Armstrong // @virtualgill
     Remember, models give you nothing for free… these will specifically need to be added as constraints. Note that this also means that removing the “sensitive” data (colour, shape) is very unlikely to result in the type of “fair” model you want.
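One way to read the prize example in code: purely random assignment gives no balance guarantee, so the balance across colour and shape has to be written in as an explicit constraint. The items and the balancing rule below are my own illustration of that point, not something from the slides.

    # The prize example: purely random assignment gives no guarantee of balance,
    # so the "fair" behaviour has to be added as an explicit constraint.
    # Here the constraint is: one prize per (colour, shape) combination.
    import random

    items = [{"id": i, "colour": colour, "shape": shape}
             for i, (colour, shape) in enumerate(
                 (c, s) for c in ("red", "yellow") for s in ("star", "circle")
                 for _ in range(5))]

    # Unconstrained: nothing stops all 4 prizes landing on, say, red stars.
    unconstrained = random.sample(items, 4)

    # Constrained: draw one winner from each (colour, shape) group.
    def constrained_draw(items):
        winners = []
        for colour in ("red", "yellow"):
            for shape in ("star", "circle"):
                group = [i for i in items
                         if i["colour"] == colour and i["shape"] == shape]
                winners.append(random.choice(group))
        return winners

    print("unconstrained:", [(w["colour"], w["shape"]) for w in unconstrained])
    print("constrained:  ", [(w["colour"], w["shape"]) for w in constrained_draw(items)])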
  16. Gillian Armstrong // @virtualgill
     fairness: similar individuals should be treated similarly
     (Statistical Definition – Individual Based)
     • Fairness Through Awareness. [Dwork, Hardt, Pitassi, Reingold, Zemel] https://arxiv.org/abs/1104.3913
     • Fairness in Learning: Classic and Contextual Bandits. [Joseph, Kearns, Morgenstern, Roth, 2016] https://arxiv.org/abs/1605.07139
     • Individual Fairness in Hindsight. [Gupta, Kamble] https://arxiv.org/abs/1812.04069
     • Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. [Kearns, Neel, Roth, Wu] https://arxiv.org/abs/1711.05144
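The individual-based definition can be phrased operationally, roughly in the spirit of Dwork et al.: two individuals who are close under a task-relevant similarity metric should receive outcomes that are close too. A rough auditing sketch, where the similarity metric, the model score and the tolerance are all made-up stand-ins:

    # Individual fairness as a check: for pairs of individuals that are close
    # under some task-relevant similarity metric, the model's outputs should
    # also be close. The metric and tolerance here are illustrative only.
    from itertools import combinations

    def audit_individual_fairness(individuals, predict, distance, tolerance=1.0):
        """Return pairs where the gap in predictions exceeds
        `tolerance` times the distance between the individuals."""
        violations = []
        for a, b in combinations(individuals, 2):
            prediction_gap = abs(predict(a) - predict(b))
            if prediction_gap > tolerance * distance(a, b):
                violations.append((a, b, prediction_gap))
        return violations

    # Toy usage: individuals are dicts of features; "similar" means close income.
    people = [{"name": "A", "income": 30000}, {"name": "B", "income": 30500}]
    score = lambda p: 0.9 if p["name"] == "A" else 0.2   # a suspicious model
    dist = lambda a, b: abs(a["income"] - b["income"]) / 100000
    print(audit_individual_fairness(people, score, dist))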
  17. Gillian Armstrong // @virtualgill
     “…the data may themselves be accurate but the disparities they reflect may themselves be caused by prior injustice.”
     - Deborah Hellman, Discrimination Law Expert
  18. Gillian Armstrong // @virtualgill
     Note that Bias encoded in the model can be:
     • Reflective of previous bias, e.g. https://qz.com/1427621/companies-are-on-the-hook-if-their-hiring-algorithms-are-biased/
     • Exacerbating future bias, e.g. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/639261/bame-disproportionality-in-the-cjs.pdf
  19. Gillian Armstrong // @virtualgill
     “Definitions of fairness, privacy, transparency, interpretability, and morality should remain firmly in the human domain.”
     - Aaron Roth, The Ethical Algorithm
  20. Examples of Tools Available
     • AI Explainability 360 https://github.com/IBM/AIX360
     • What-If Tool https://pair-code.github.io/what-if-tool/
     • SHAP https://github.com/slundberg/shap
     • Skater https://github.com/oracle/Skater
     • Interpret https://github.com/interpretml/interpret
     • Fairlearn https://github.com/fairlearn/fairlearn
     • Lime https://github.com/marcotcr/lime
     • Facets https://github.com/pair-code/facets
     Gillian Armstrong // @virtualgill
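As a taste of how these libraries slot in, here is a rough sketch using Fairlearn's MetricFrame to break the false negative rate down by a sensitive feature. The data is invented, and the API has shifted between Fairlearn releases, so treat this as a sketch and check the project's docs.

    # Sketch: use Fairlearn to compare a model's false negative rate across groups.
    # The arrays below are invented; in practice y_pred comes from your model.
    from fairlearn.metrics import MetricFrame, false_negative_rate

    y_true = [1, 1, 0, 1, 1, 0, 1, 1]
    y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
    sensitive = ["red", "red", "red", "red", "yellow", "yellow", "yellow", "yellow"]

    frame = MetricFrame(metrics=false_negative_rate,
                        y_true=y_true,
                        y_pred=y_pred,
                        sensitive_features=sensitive)
    print(frame.overall)    # false negative rate over everyone
    print(frame.by_group)   # false negative rate per group; the gap is the story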
  21. Gillian Armstrong // @virtualgill
     Headline Test: Would you be ok with your project ending up on the front page of a newspaper?
  22. Gillian Armstrong // @virtualgill
     Move Fast and Break Things isn’t always ok… Sometimes we need to Move Slow and Make Things Better.
  23. Gillian Armstrong // @virtualgill
     If you are working in AI and Software Development, you are already a natural Problem Solver.
     Keep an Open Mind, Find Solutions, Innovate.
  24. AI Ethics is not about…
     • Deciding how many people you are going to kill with your trolley
     • Blaming Technology for all the evil in the world
     • Bashing Developers
     • Making you feel guilty
     Gillian Armstrong // @virtualgill