

JConf Peru 2024 - Decoding the Mind of AI: Explainable AI (xAI) in LLMs

Manai Mortadha
AI/XAI Engineer @Netflix | AI Expert | XAI Researcher @Saint Mary's University | International AI Speaker

As Large Language Models (LLMs) like ChatGPT, Bard, and Gemini reshape industries with their ability to generate human-like text, the demand for transparency and trust in their outputs grows. This session delves into the transformative role of Explainable AI (XAI) in making these advanced systems more interpretable. Learn how XAI techniques reveal the reasoning behind LLM decisions, detect biases, and enhance user trust. Through practical examples and cutting-edge research, we will explore how XAI bridges the gap between complex AI models and human understanding, paving the way for more ethical and reliable AI applications.

Carlos Zela Bueno

December 11, 2024


Transcript

  1. DECODING THE MIND OF AI: EXPLAINABLE AI (XAI) IN LARGE

    LANGUAGE MODELS (LLMS) 7th December 2024. Presented by Mr. MANAI MORTADHA. https://taplink.cc/manaimortadha [email protected] [email protected] [email protected] Halifax, Nova Scotia, Canada
  2. TODAY'S AGENDA

    Introduction to LLMs · The Importance of Explainable AI (XAI) · XAI Techniques for LLMs · Bias Detection and Mitigation · Practical Examples and Research · Ethical and Reliable AI · Q&A
  3. AI Engineer | AI Expert | Senior XAI Engineer @Netflix | XAI

    Researcher @Saint Mary's University | AI Consultant @Tegus and @WIVENN | CEO and Founder @Man.Ai | Professional Technical Reviewer @Packt | 2024 AI Apprentice @Google | International AI Speaker (LinkedIn Top Voice) Emails: [email protected] | [email protected] | [email protected] LinkedIn: https://www.linkedin.com/in/mannai-mortadha/ Leetcode: https://leetcode.com/u/mannaimortadha898/ GitHub: https://github.com/MortadhaMannai Medium Blog: https://manaimortadha.medium.com/ My latest scientific papers: https://zenodo.org/record/8274725 Schedule a 1:1 meeting at topmate.io: https://topmate.io/manai_mortadha ABOUT ME
  4. INTRODUCTION TO LLMS Definition: Large Language Models (LLMs)

    are deep learning models trained on vast amounts of text data to understand, generate, and translate human-like text. Examples: ChatGPT, Bard, Gemini. Key Features: Natural Language Understanding · Contextual Predictions · Human-like Text Generation. Applications: Customer support automation · Content generation · Code assistance. Visual: A flowchart showing how LLMs process inputs (text) and generate outputs.
  5. WHY EXPLAINABLE AI (XAI) MATTERS Explainable AI (XAI): Benefits

    and Use Cases. Definition of XAI: Techniques and tools to interpret and explain AI model outputs to humans. Challenges in LLMs: Lack of transparency · Bias and ethical concerns · Trust issues with end-users. Objectives of XAI for LLMs: Improve transparency · Build user trust · Enable debugging and fairness. Visual: Graph showing the relationship between XAI, transparency, and user trust.
  6. Attention Visualization: Visualizes the parts of input text that LLMs

    focus on during processing. Example tool: BERTViz for Transformers. XAI TECHNIQUES FOR LLMS (PART 1) CODE EXAMPLE JCONF PERU 2024
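The slide's code example is not reproduced in the transcript. As a stand-in, here is a minimal, dependency-free sketch of the scaled dot-product attention weights that tools like BERTViz render as heatmaps; the tokens and 4-dimensional embeddings are invented purely for illustration:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query, keys):
    """Scaled dot-product attention: softmax(q . k / sqrt(d)) per key."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    return softmax(scores)

# Toy embeddings for the tokens of a short sentence (illustrative values).
tokens = ["the", "movie", "was", "great"]
embeddings = {
    "the":   [0.1, 0.0, 0.1, 0.0],
    "movie": [0.9, 0.2, 0.1, 0.0],
    "was":   [0.0, 0.1, 0.0, 0.1],
    "great": [0.8, 0.3, 0.2, 0.1],
}

# Which tokens does the query "great" attend to most?
weights = attention_weights(embeddings["great"],
                            [embeddings[t] for t in tokens])
for tok, w in zip(tokens, weights):
    print(f"{tok:>6}: {w:.3f}")
```

With real models, BERTViz reads these weights directly from a Transformer's attention heads; the point here is only that each weight is a softmax-normalized similarity between a query token and every other token.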
  7. Feature Importance Analysis: Identifies which features (words or tokens) contribute

    most to the output. Tools: SHAP (SHapley Additive exPlanations) XAI TECHNIQUES FOR LLMS (PART 2) CODE EXAMPLE
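The SHAP code example itself is not in the transcript. As a sketch of the idea behind the library, here is an exact Shapley-value computation for a toy sentiment "model" (the word weights and negation interaction are invented); real SHAP approximates this sum, since it is exponential in the number of features:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, score):
    """Exact Shapley values for a set-valued scoring function.

    score(frozenset_of_features) -> float. Exponential in len(features),
    so only feasible for toy examples; SHAP approximates this at scale.
    """
    n = len(features)
    values = {}
    for i in features:
        others = [f for f in features if f != i]
        phi = 0.0
        for r in range(n):
            for subset in combinations(others, r):
                s = frozenset(subset)
                weight = (factorial(len(s)) * factorial(n - len(s) - 1)
                          / factorial(n))
                # Weighted marginal contribution of feature i given coalition s.
                phi += weight * (score(s | {i}) - score(s))
        values[i] = phi
    return values

# Toy "sentiment model": per-word weights plus a negation interaction
# between "not" and "good" (all numbers are illustrative).
WORD_WEIGHTS = {"not": -0.1, "good": 0.8, "film": 0.05}

def sentiment(words):
    base = sum(WORD_WEIGHTS[w] for w in words)
    if "not" in words and "good" in words:
        base -= 1.6  # negation flips the positive contribution
    return base

phi = shapley_values(list(WORD_WEIGHTS), sentiment)
```

By the efficiency axiom, the attributions sum exactly to the model's output minus the empty-input baseline, which is what makes Shapley values attractive for auditing token contributions.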
  8. Problem: LLMs may reflect biases present in their training data.

    Role of XAI in Bias Detection: Analyzing model outputs for gender, racial, or cultural bias. Example: Identifying biased responses in hiring-related queries. BIAS DETECTION IN LLMS
  9. PRACTICAL EXAMPLE: DEBUGGING GPT-LIKE MODELS Scenario: A chatbot generates inappropriate

    or irrelevant responses. Use Attention Visualization to understand focus points. Apply SHAP to evaluate feature contributions. Fine-tune model on balanced datasets. Code Walkthrough:
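The code walkthrough is not captured in the transcript. A cheap first pass in this debugging scenario is leave-one-out occlusion: drop each token and measure how much the model's score moves, before running full SHAP. The `toxicity_score` function and its flagged word list below are hypothetical stand-ins for the chatbot's real scoring head:

```python
def toxicity_score(tokens):
    """Hypothetical stand-in for a chatbot's toxicity scoring head."""
    FLAGGED = {"stupid": 0.7, "useless": 0.5}
    return sum(FLAGGED.get(t, 0.0) for t in tokens)

def leave_one_out(tokens, score):
    """Occlusion analysis: delta in score when each token is removed.

    Large deltas flag the tokens driving the inappropriate response,
    which can then be examined more carefully with SHAP.
    """
    full = score(tokens)
    return {t: full - score(tokens[:i] + tokens[i + 1:])
            for i, t in enumerate(tokens)}

response = ["that", "is", "a", "stupid", "question"]
deltas = leave_one_out(response, toxicity_score)
culprit = max(deltas, key=deltas.get)  # token with the largest impact
```

Occlusion is only an approximation (it ignores interactions between tokens), but it is fast enough to run on every flagged response and narrows down where attention visualization and SHAP should look.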
  10. Layer-wise relevance propagation (LRP) for interpreting deep neural networks. Explainability

    frameworks tailored for LLMs, such as AI Explainability 360 by IBM. Use of surrogate models for interpretable approximations of LLMs. Visual: Timeline of key research milestones.
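To make the LRP mention above concrete, here is a minimal epsilon-rule LRP sketch on a hand-built toy network (3 inputs, one ReLU hidden layer of 2 units, 1 output; all weights invented). Relevance flowing back from the output is redistributed to each input in proportion to its contribution `a_j * w_jk`:

```python
def lrp_linear(activations, weights, relevance_out, eps=1e-6):
    """Epsilon-rule LRP for one linear layer.

    activations: inputs a_j to the layer; weights[j][k]: weight from
    input j to output k; relevance_out[k]: relevance at output k.
    Redistributes relevance to inputs proportionally to a_j * w_jk.
    """
    n_in, n_out = len(activations), len(relevance_out)
    relevance_in = [0.0] * n_in
    for k in range(n_out):
        z = sum(activations[j] * weights[j][k] for j in range(n_in))
        denom = z + (eps if z >= 0 else -eps)  # stabilizer
        for j in range(n_in):
            relevance_in[j] += (activations[j] * weights[j][k] / denom
                                * relevance_out[k])
    return relevance_in

# Toy network: 3 inputs -> 2 hidden units (ReLU) -> 1 output.
W1 = [[0.5, -0.2], [0.3, 0.8], [-0.4, 0.1]]
W2 = [[1.0], [0.6]]
x = [1.0, 2.0, 0.5]

hidden_pre = [sum(x[j] * W1[j][k] for j in range(3)) for k in range(2)]
hidden = [max(0.0, h) for h in hidden_pre]  # ReLU
output = sum(hidden[k] * W2[k][0] for k in range(2))

# Propagate the output relevance back through both layers
# (ReLU passes relevance through unchanged).
r_hidden = lrp_linear(hidden, W2, [output])
r_input = lrp_linear(x, W1, r_hidden)
```

The key property to check is conservation: the input relevances sum (up to the epsilon stabilizer) to the network's output, so nothing is gained or lost as the explanation flows backwards.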
  11. ETHICAL AND RELIABLE AI Ensuring Transparency: Clear documentation

    and user guidelines. Fairness Checks: Regular audits using XAI tools. Continuous Improvement: Adopting feedback to refine models.
  12. Summary: XAI is essential for trust and reliability in LLMs.

    Practical tools like SHAP and saliency maps make models interpretable. Bridging the gap between AI complexity and human understanding ensures ethical applications. Call to Action: Explore XAI tools. Advocate for transparency in AI applications. CLOSING THOUGHTS
  13. GITHUB LINKEDIN NETFLIX TOP RANKED ENGINEER Q&A LEETCODE MEDIUM

    JUST SCAN! TO ASK MORE QUESTIONS LATER! CONNECT HERE