Large Language Models (LLMs) such as GPT-4, LLaMA, and Gemini have demonstrated remarkable capabilities across a wide range of tasks, often exceeding human-level performance in specific domains. For instance, GPT-4 scored in the top 10% of test takers on a simulated U.S. Uniform Bar Exam, highlighting its potential in the legal domain. However, the use of LLMs in legal decision-making introduces critical challenges, including the need to detect and mitigate social biases and to ensure transparent, human-understandable explanations. In this talk, Danushka Bollegala will present ongoing research from his lab at the University of Liverpool, focusing on the intersection of AI and law. Specifically, he will discuss methods for identifying and addressing social biases in legal contexts and the role of LLMs in promoting equitable and explainable legal outcomes.