I began my research journey in Natural Language Processing (NLP) by developing algorithms to summarise news articles. At the time, I never imagined that, in less than two decades, we would have AI assistants capable of not only summarising but also generating a wide range of texts, from creative writing to technical reports and news. Tools like ChatGPT, Gemini, and other AI assistants have become an integral part of our daily lives.
For the first time in human history, we are sharing our world with entities that can process and generate information faster than we can. As an NLP researcher, I am thrilled by the immense progress our field has made and the transformative impact on society that I have witnessed over the course of my career. Yet I find myself increasingly concerned about some unintended consequences, such as social biases and a lack of diversity in AI-generated content, and about the ethical responsibilities we, as researchers, must shoulder.
This brings us to the question: What kind of future do we envision with our intelligent counterparts? More importantly, how can we, as researchers, ensure that these AI assistants act in ways that promote fairness, inclusivity, and ethical decision-making? In this lecture, I will explore our responsibility to build diverse and fair language generation systems and discuss the steps we can take to ‘teach’ AI assistants to be responsible, equitable collaborators in the human-AI future.