Voice assistants are arguably the hottest consumer technology of 2018; if your employer hasn't already asked your product team to investigate building an application for voice assistants such as Amazon Alexa, Google Home, Apple HomePod, or Samsung Bixby, they likely will soon. The technology is also finally reaching a place where this is a realistic goal: the initial launches of the Alexa and Google Home platforms were primarily RSS feed parsers with a text-to-speech engine tacked on, but the platforms have since matured and can now support full-fledged custom "skills", or voice-based apps. As with any new technology, though, we need to be responsible and consider the ethics of what we build as we enter this space.

This talk will discuss the ethics of developing applications for voice assistants and voice-based interfaces, drawing on the firsthand experience of a software engineer at the United States-based National Public Radio, whose team has spent the past year doing a deep dive into voice UI development. The audience will learn practical strategies for respecting privacy, handling user data responsibly, and designing for an audience that may include children in an unsupervised setting, and will be invited to consider thought-provoking questions such as perpetuating gender stereotypes through voice.