As artificial intelligence becomes a bigger part of our lives, it's time to examine the dangers. The negative impacts of failing to evaluate ethical consequences are easy to see in retrospect, but often much harder to spot when initially developing intelligent systems. In this talk we will examine three examples of AI applied to real-world systems: a humorous example where the lack of effective human supervision prevented the system from doing its job altogether, an example where failing to correct early problems led to unintended (and icky) consequences, and an effectively human-led AI process where expertise enabled the engineers involved to correct the system before it went off track.
It can be easy to blame artificial intelligence when outcomes are bad, but ultimately we are responsible for guiding these systems according to our values. I will provide key questions we can ask as engineers to avoid nasty surprises: In what situations will this system be forced to make hard decisions? What should its priorities be? How can we best validate its results along the way? Attendees will leave with a better understanding of the strengths and limitations of machine learning, and of how, when left unattended, intelligent systems can reach unforeseen conclusions.