These are slides from my talk at Self.Conference 2017. Machine learning techniques rely on assumptions, such as that the future will resemble the past and that data is objective. Those assumptions have held up well in machine learning applications like advertising and self-driving cars. But what about applications that predict a person’s future actions and use that prediction to make a big decision about that person’s life? What if we train our machine learning systems on data containing human biases that we do not want to reinforce in the future? This talk first dives into how Google's Word2Vec learns gender biases from its input data, and promising work from MIT on how we can use math to 'unteach' the system these biases. It then looks at statistics-based prediction techniques used to make decisions in criminal sentencing: how racial bias enters these systems, the risks and consequences of exacerbating that bias, and the possibility of accounting for it so that the systems can 'unlearn' the bias. Throughout, we consider a set of questions we must ask when applying machine learning to make decisions about one another. The audience is invited to apply these questions to other human-focused applications such as health, hiring, insurance, finance, education, and media.
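As a rough illustration of the Word2Vec bias idea mentioned above (not material from the talk itself), the sketch below uses the gensim library to query an analogy in a pretrained embedding and to project out a simple gender direction, a simplified version of the 'unteaching' step. The model file path, the chosen word pairs, and the `neutralize` helper are assumptions for illustration only.

```python
# A minimal sketch, assuming a pretrained word2vec file (e.g. the GoogleNews
# vectors) is available at the path below -- that path is a placeholder.
import numpy as np
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Analogy query: "man is to doctor as woman is to ___?"
# Embeddings trained on biased text often return stereotyped completions here.
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))

# A crude gender direction: the difference between "he" and "she", normalized.
gender_direction = vectors["he"] - vectors["she"]
gender_direction /= np.linalg.norm(gender_direction)

def neutralize(word):
    """Remove the component of a word's vector along the gender direction
    (a simplified version of the 'neutralize' step in debiasing work)."""
    v = vectors[word]
    return v - np.dot(v, gender_direction) * gender_direction

# Compare how strongly a profession word aligns with the gender direction
# before and after the projection is removed.
v = vectors["programmer"]
print("before:", float(np.dot(v, gender_direction)))
print("after: ", float(np.dot(neutralize("programmer"), gender_direction)))
```

The projection step only zeroes out one direction in the embedding space; the published debiasing work goes further (equalizing word pairs and choosing which words should stay gendered), but this captures the core idea of using vector math to reduce a learned bias.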