.NET Day 19 - The ethical implications and risks of Artificial Intelligence and Deep Learning by Laurent Bugnion

There is no question that Artificial Intelligence and Deep Learning will play an important role in the future (and the present!) of humanity. Taking advantage of faster and faster computers and larger and larger databases, we are able to run very complex algorithms against humongous amounts of data. This allows the creation of tools that can help us in complex areas of our lives. From autonomous vehicles to image and speech recognition, from assisting impaired humans to saving lives in critical situations, from inspecting industrial installations to sending machines into deep space or deep waters, the possibilities are amazing. But this power comes with big responsibilities. How do we take steps to minimize flaws in the data we use for our models? How do we build machines that act for the greater good? What are the risks? In this session, Laurent Bugnion, a Senior Cloud Developer Advocate at Microsoft, will talk about what could happen, and what we can do to prevent it.

dotnetday

May 28, 2019

Transcript

  1. None
  2. 1. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
  3. None
  4. None
  5. It’s not easy…

  6. Traditional programming: Algorithm + Data → Answers

  7. Machine learning: Algorithm (Data + Answers) → Model

  8. Data + Answers → Model; then Model + New data → Predictions
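The pipeline on slides 6–8 can be sketched in code. This is a minimal illustration (the Fahrenheit example and all numbers are hypothetical, not from the talk): in traditional programming we write the rule by hand; in machine learning we supply data and answers, fit a model, and then use the model on new data.

```python
# Traditional programming: we write the algorithm (the rule) by hand.
def fahrenheit(celsius):          # algorithm
    return celsius * 9 / 5 + 32   # data in -> answer out

# Machine learning: we supply data AND answers, and learn the rule (model).
data    = [0, 10, 20, 30, 40]              # inputs
answers = [32.0, 50.0, 68.0, 86.0, 104.0]  # known outputs

# Fit slope and intercept with ordinary least squares (pure Python).
n = len(data)
mx = sum(data) / n
my = sum(answers) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(data, answers))
         / sum((x - mx) ** 2 for x in data))
intercept = my - slope * mx

def model(new_data):
    # The learned model makes predictions on data it has never seen.
    return slope * new_data + intercept

print(model(25))  # prediction for an unseen input -> 77.0
```

The important shift is that nobody wrote `* 9 / 5 + 32` in the learned branch; the relationship was recovered from the data, which is exactly why flawed data produces a flawed model.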

  9. None
  10. None
  11. None
  12. How can AI be biased? http://gslb.ch/c332s-dotnetday19 2+2 = ? It’s just math!
  13. Pneumonia. Fact: asthma patients are at lower risk of getting pneumonia. How is that even possible?? We can fix it in the data, but… WHAT ELSE DID WE MISS?? → Question the whole model
  14. Language AI models are often trained with existing lexicons. These lexicons are often biased → AI follows our own bias and societal issues → It’s great at “picking up” our flaws. “In the span of 15 hours Tay referred to feminism as a "cult" and a "cancer," as well as noting "gender equality = feminism" and "i love feminism now".” http://gslb.ch/c334s-dotnetday19
  15. Race bias (from http://gslb.ch/c335s-dotnetday19): “Back in 2015, software engineer Jacky Alciné pointed out that the image recognition algorithms in Google Photos were classifying his black friends as “gorillas.” Google said it was “appalled” at the mistake, apologized to Alciné, and promised to fix the problem.”
  16. Race bias http://gslb.ch/c336s-dotnetday19

  17. Gender bias. The machine learning workforce is 85% male. We need emotional intelligence. Here too, training datasets are biased: “[The AI] would see a picture of a kitchen and more often than not associate it with women, not men.” http://gslb.ch/c344s-dotnetday19
  18. None
  19. AI can be used to “improve” weapon systems. AI can be used to spy on people: • Face recognition • Classification • Detection of “suspect” behaviors • Deep fakes
  20. http://gslb.ch/c359s-dotnetday19

  21. • Respecting customer choices • Using personal data only as approved • Not leaking personal data, directly or as inferences • Making sure AI complies with the policies and regulations • of the enterprise • of the governments
  22. “Millions of people uploaded photos to the Ever app. Then the company used them to develop facial recognition tools.” Doug Aley, Ever’s CEO, told NBC News that Ever AI does not share the photos or any identifying information about users with its facial recognition customers. Rather, the billions of images are used to instruct an algorithm how to identify faces. Every time Ever users enable facial recognition on their photos to group together images of the same people, Ever’s facial recognition technology learns from the matches and trains itself. That knowledge, in turn, powers the company’s commercial facial recognition products. http://gslb.ch/c358s-dotnetday19
  23. • Respect the right of customers to be left alone • Provide them with ways to control how AI handles data • Stop the nagging • If anything, privacy is becoming more important than in the past. AI has to comply
  24. None
  25. It’s not an easy problem to solve • It cannot be solved by maths alone • No “one solution fits all” • Leaving the hard attributes (gender, race, etc.) out of the dataset is also not a solution • Treating the bias adds a new kind of bias • Think about how the bias affects society
  26. The key is in the data • Being fair is important: don’t exclude one class of the population • Humans must remain able to understand the model • Validate assumptions and results • Assume that something will go wrong and keep watching the running system
  27. Ethics in the enterprise • Look beyond the next quarterly profit • Enable collaboration • Human intervention benefits AI systems
  28. earn trust • assist • efficiencies • dignity • transparent • accountability

  29. http://gslb.ch/c341s-dotnetday19 Webinar: http://gslb.ch/c339s-dotnetday19

  30. None
  31. AI for Good Global Summit. The AI for Good Global Summit is THE leading United Nations platform for global and inclusive dialogue on AI. The Summit is hosted each year in Geneva by the ITU in partnership with UN sister agencies, the XPRIZE Foundation and ACM. https://aiforgood.itu.int
  32. AI powered captioning http://gslb.ch/c342s-dotnetday19

  33. Seeing AI http://gslb.ch/c343s-dotnetday19

  34. Our values and principles • Enable people • Inclusive design • Build trust in technology
  35. THANK YOU LBugnion@Microsoft.com @LBugnion http://gslb.ch/mycda All you need: http://gslb.ch/dotnetday19

  36. Advances in vision technology http://gslb.ch/c338s-dotnetday19