Computer scientists at Harvard built an Artificial Intelligence (AI) system that could be used to identify gang crimes. Science Magazine reported that this development in predictive policing has the potential to "ignite an ethical firestorm." Do developers have an ethical responsibility when they build technology that ultimately becomes a tool of state violence? Should they have a say in how the technology they build is used once it's delivered to the client?
In this talk, I will examine how efforts to advance AI are built on statistically biased data, and why the people who build these systems should care. I will also discuss what a code of ethics could look like when developers themselves shape it.
Further reading & resources: http://bit.ly/DevEthicsResources