Large language models are only as good as the data we feed into them. Unfortunately, we haven't quite dismantled racism, sexism, and all the other -isms just yet. AI isn't going away, so let's apply a harm-reduction lens: given the imperfect tools we have, how can we write LLM prompts that are less likely to reflect our own biases? In this session, Tilde will review the current literature on LLM prompting and social justice. They'll compare how different models perform in this context, since each model is trained on a different dataset. You'll leave with ideas you can apply as both a user and a builder of LLM applications, to iterate towards a more equitable world.