Machine learning and artificial intelligence are becoming ever more central to every aspect of our lives, and the pace of adoption is only accelerating. AI should be a force for good, and it has already delivered innumerable benefits. However, as AI starts to decide everything from whether we get a home loan to whether our resume is considered by a company, it is critical to ensure that these decisions are fair, equitable, and explainable. Unfortunately, it is becoming increasingly clear that, much like humans, AI can be biased, and there have been many very public incidents in which projects had to be abandoned because of catastrophic biases.
In this presentation, we begin by considering the ramifications of bias, discussing how fairness is defined, and examining regulated domains and protected classes. We continue by highlighting how bias can be introduced into AI solutions, with particular focus on NLP, where models trained on large public corpora can absorb many of the explicit and implicit biases that are unfortunately present in humankind's communications. We then discuss how this bias can be measured, tracked, and even minimized. We present best practices for ensuring that bias does not creep into models over time, discuss open-source toolkits, and highlight how explainability can be used to perform real-time checks on predictions.
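As a minimal illustration of the kind of embedding-bias measurement discussed above, one can compare a word's cosine similarity to gendered anchor words. The sketch below uses made-up toy vectors (not any real model's embeddings), and the word list and anchor choice are purely illustrative assumptions:

```python
import math

# Toy word vectors: purely illustrative values, NOT from any real model.
vectors = {
    "engineer": [0.9, 0.1, 0.3],
    "nurse":    [0.2, 0.8, 0.4],
    "he":       [0.8, 0.2, 0.3],
    "she":      [0.2, 0.9, 0.4],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def gender_association(word):
    """Similarity gap to the anchors: > 0 leans 'he', < 0 leans 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

for w in ("engineer", "nurse"):
    print(w, round(gender_association(w), 3))
```

Real bias tests of this family (for example, association tests over sets of target and attribute words) aggregate such similarity gaps across many word pairs and assess their statistical significance, but the core measurement is this simple comparison.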