.NET Day 19 - The ethical implications and risks of Artificial Intelligence and Deep Learning by Laurent Bugnion

There is no question that Artificial Intelligence and Deep Learning will play an important role in the future (and the present!) of humanity. Taking advantage of faster and faster computers and larger and larger databases, we are able to run very complex algorithms against humongous amounts of data. This allows the creation of tools that can help us in complex areas of our lives. From autonomous vehicles to image and speech recognition, from assisting impaired humans to saving lives in critical situations, from inspecting industrial installations to sending machines into deep space or deep waters, the possibilities are amazing. But this power comes with big responsibilities. How do we take steps to minimize flaws in the data we use for our models? How do we build machines that act for the greater good? What are the risks? In this session, Laurent Bugnion, a Senior Cloud Developer Advocate at Microsoft, will talk about what could happen, and what we can do to prevent it.

dotnetday

May 28, 2019

Transcript

  1. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
     Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
     Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
  2. Pneumonia fact: asthma patients appear less at risk of getting pneumonia. How is that even possible?? We can fix it in the data, but… WHAT ELSE DID WE MISS?? → Question the whole model
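     The paradox above is the classic signature of confounded training data: if one group receives extra attention (say, more aggressive treatment), its observed outcomes improve, and a naive model concludes the group was never at risk. A minimal sketch with made-up numbers (not figures from the talk) shows how a plain rate comparison produces the misleading conclusion:

     ```python
     # Hypothetical synthetic data for illustration only: asthma patients
     # get aggressive care, so their *observed* death rate is lower -- a
     # naive model would "learn" that asthma is protective.
     patients = (
         [{"asthma": True,  "died": False}] * 95 +
         [{"asthma": True,  "died": True}]  * 5  +
         [{"asthma": False, "died": False}] * 88 +
         [{"asthma": False, "died": True}]  * 12
     )

     def death_rate(group):
         """Fraction of patients in the group who died."""
         return sum(p["died"] for p in group) / len(group)

     asthma = [p for p in patients if p["asthma"]]
     no_asthma = [p for p in patients if not p["asthma"]]

     print(f"asthma:    {death_rate(asthma):.2f}")     # 0.05 -- looks *safer*
     print(f"no asthma: {death_rate(no_asthma):.2f}")  # 0.12
     ```

     The numbers are correct for the dataset, yet the conclusion is wrong for the world, which is why fixing one such artifact in the data still leaves the question: what else did we miss?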
  3. Language AI models are often trained with existing lexicons. These lexicons are often biased. → AI follows our own bias and societal issues → It’s great at “picking up” our flaws. “In the span of 15 hours Tay referred to feminism as a "cult" and a "cancer," as well as noting "gender equality = feminism" and "i love feminism now".” http://gslb.ch/c334s-dotnetday19
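     How a model “picks up” the skew of its training text can be sketched in a few lines. The tiny corpus below is invented for illustration; it stands in for a biased lexicon where occupations co-occur unevenly with gendered pronouns, which is exactly the signal a statistical model absorbs:

     ```python
     # Tiny made-up corpus standing in for a biased lexicon.
     corpus = [
         "she is a nurse", "she works as a nurse", "he is a nurse",
         "he is an engineer", "he works as an engineer", "she is an engineer",
         "he is an engineer",
     ]

     def cooccurrence(word, pronoun):
         """Count sentences where `word` and `pronoun` appear together."""
         return sum(1 for s in corpus
                    if word in s.split() and pronoun in s.split())

     for job in ("nurse", "engineer"):
         print(job,
               "she:", cooccurrence(job, "she"),
               "he:",  cooccurrence(job, "he"))
     # A model trained on such counts inherits the skew of the text itself.
     ```

     Real systems use word embeddings rather than raw counts, but the mechanism is the same: the association statistics of the corpus become the association statistics of the model.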
  4. Race bias (from http://gslb.ch/c335s-dotnetday19): “Back in 2015, software engineer Jacky Alciné pointed out that the image recognition algorithms in Google Photos were classifying his black friends as “gorillas.” Google said it was “appalled” at the mistake, apologized to Alciné, and promised to fix the problem.”
  5. Gender bias: the machine learning workforce is 85% male. We need emotional intelligence. Here too, training datasets are biased: “[The AI] would see a picture of a kitchen and more often than not associate it with women, not men.” http://gslb.ch/c344s-dotnetday19
  6. AI can be used to “improve” weapon systems. AI can be used to spy on people:
     • Face recognition
     • Classification
     • Detection of “suspect” behaviors
     Deep fakes
  7. • Respect the customer’s choices
     • Use personal data only as approved
     • Don’t leak personal data, directly or as inferences
     • Make sure AI complies with the policies and regulations of the enterprise and of governments
  8. “Millions of people uploaded photos to the Ever app. Then the company used them to develop facial recognition tools.” Doug Aley, Ever’s CEO, told NBC News that Ever AI does not share the photos or any identifying information about users with its facial recognition customers. Rather, the billions of images are used to instruct an algorithm how to identify faces. Every time Ever users enable facial recognition on their photos to group together images of the same people, Ever’s facial recognition technology learns from the matches and trains itself. That knowledge, in turn, powers the company’s commercial facial recognition products. http://gslb.ch/c358s-dotnetday19
  9. • Respect the right of customers to be left alone
     • Provide them with ways to control how AI handles their data
     • Stop the nagging
     If anything, privacy is becoming more important than in the past. AI has to comply.
  10. It’s not an easy problem to solve:
     • It cannot be solved by maths alone
     • No “one solution fits all”
     • Leaving the hard problems (gender, race, etc.) out of the dataset is not a solution either
     • Treating the bias adds a new kind of bias
     • Think about how the bias affects society
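     Why leaving the hard attributes out of the dataset is not a solution can be demonstrated directly: correlated proxy features keep carrying the signal. The records and the proxy column below are hypothetical, purely to illustrate the leakage:

     ```python
     # Hypothetical toy records: dropping the "gender" column does not
     # remove the signal, because a correlated proxy feature remains.
     records = [
         {"gender": "f", "proxy": 1}, {"gender": "f", "proxy": 1},
         {"gender": "f", "proxy": 1}, {"gender": "f", "proxy": 0},
         {"gender": "m", "proxy": 0}, {"gender": "m", "proxy": 0},
         {"gender": "m", "proxy": 0}, {"gender": "m", "proxy": 1},
     ]

     # "Blind" dataset with the sensitive attribute removed.
     blind = [{k: v for k, v in r.items() if k != "gender"} for r in records]

     # The proxy alone still recovers the removed attribute most of the time.
     guesses = ["f" if r["proxy"] == 1 else "m" for r in blind]
     accuracy = sum(g == r["gender"]
                    for g, r in zip(guesses, records)) / len(records)
     print(f"recovered 'gender' from proxy alone: {accuracy:.0%}")  # 75%
     ```

     A model trained on the “blind” data can therefore still discriminate through the proxy, which is why the bias has to be confronted in the data and the model, not hidden from them.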
  11. The key is in the data:
     • Being fair is important; don’t exclude one class of the population
     • Humans must remain able to understand the model
     • Validate assumptions and results
     • Assume that something will go wrong and keep watching the running system
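     “Validate assumptions and results” and “keep watching the running system” can start with something as simple as comparing a model’s outcomes across groups. This is a minimal sketch, not a complete fairness methodology; the group labels, predictions, and the 0.2 alert threshold are all assumptions made for the example:

     ```python
     # Logged (group, prediction) pairs from a running model -- made up here.
     predictions = [
         ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
         ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
     ]

     def positive_rate(group):
         """Fraction of positive predictions the model gives this group."""
         outcomes = [y for g, y in predictions if g == group]
         return sum(outcomes) / len(outcomes)

     # Demographic parity gap: difference in positive rates between groups.
     gap = abs(positive_rate("group_a") - positive_rate("group_b"))
     print(f"demographic parity gap: {gap:.2f}")  # 0.50

     if gap > 0.2:  # alert threshold chosen arbitrarily for this sketch
         print("WARNING: model favors one group -- question the data and the model")
     ```

     Run continuously against production logs, a check like this operationalizes “assume that something will go wrong”: the alert fires when drift or bias appears, instead of waiting for users to discover it.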
  12. Ethics in the enterprise:
     • Look beyond the next quarterly profit
     • Enable collaboration
     • Human involvement benefits AI systems
  13. AI for Good Global Summit: The AI for Good Global Summit is THE leading United Nations platform for global and inclusive dialogue on AI. The Summit is hosted each year in Geneva by the ITU in partnership with UN sister agencies, the XPRIZE Foundation, and ACM. https://aiforgood.itu.int