PyData Vermont - Digital Discrimination: Cognitive Bias in Machine Learning (and LLMs!)

Resources from this talk:

Tools to combat AI Bias
https://landscape.lfai.foundation/card-mode?grouping=category
https://huggingface.co/ibm-granite
https://instructlab.ai/

“Foundation models: Opportunities, risks and mitigations” https://www.ibm.com/downloads/cas/E5KE5KRZ

Racist/Sexist AI Generated Imagery
https://arxiv.org/pdf/2110.01963
https://www.businessinsider.com/lensa-ai-raises-serious-concerns-sexualization-art-theft-data-2023-1
https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/

Healthcare Inequality
https://www.forbes.com/sites/adigaskell/2022/12/02/minority-patients-often-left-behind-by-health-ai/?sh=31d28a225b41
https://medicine.yale.edu/news-article/eliminating-racial-bias-in-health-care-ai-expert-panel-offers-guidelines/
https://www.nejm.org/doi/full/10.1056/NEJMms2004740
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11228769/

Gender Shades/Algorithmic Justice League
http://gendershades.org/
https://www.youtube.com/watch?v=Af2VmR-iGkY
https://www.ajl.org/take-action
https://www.netflix.com/title/81328723

Education
https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html
https://d4bl.org/

National and Industry Standards
https://lists.lfaidata.foundation/g/gac-responsible-ai-workstream
https://genaicommons.org/
https://www.nist.gov/aisi
https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
https://artificialintelligenceact.eu/
https://ai.gov/
https://digitalservices.vermont.gov/ai

Algorithmic Justice League
https://www.ajl.org/
https://secure.actblue.com/donate/algorithmic-justice-league

Maureen McElaney

July 29, 2024

Transcript

1. Digital Discrimination: Cognitive Bias in Machine Learning (and LLMs!) by Mo McElaney, Open Source Developer Programs at IBM. PyData Vermont, July 29, 2024.
2. Agenda • Examples of bias in machine learning. • Solutions to combat unwanted bias. • Tools to combat unwanted bias. • Resources and how to get involved.
3. A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. People make decisions given their limited resources. Wilke A. and Mata R. (2012), “Cognitive Bias”, Clarkson University.
4. “In 1,000 years, when we look back as we are generating the thumbprint of our society and culture right now through these images, is this how we want to see women?” - Melissa Heikkilä, senior reporter at MIT Technology Review, covering artificial intelligence and how it is changing our society.
5. Machine Learning Pipeline: Pre-processing (modifying the training data), In-processing (modifying the learning algorithm), Post-processing (modifying the predictions, or outcomes).
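
A minimal sketch of the pre-processing stage, using the open source AIF360 toolkit that appears later in this deck. The toy DataFrame, column names, and group definitions below are illustrative assumptions, not from the talk; the technique (Reweighing) adjusts instance weights so group base rates match before any model is trained.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the favorable outcome (1 = favorable).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]

# Pre-processing intervention: reweigh training examples so the
# weighted positive rates match across groups.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
reweighed = rw.fit_transform(dataset)

for name, ds in [("before", dataset), ("after", reweighed)]:
    metric = BinaryLabelDatasetMetric(ds, unprivileged_groups=unpriv,
                                      privileged_groups=priv)
    print(name, "statistical parity difference:",
          metric.statistical_parity_difference())
```

After reweighing, the statistical parity difference moves toward zero; a downstream model would then be trained with these instance weights.
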
6. Yale Panel’s Guidelines on Eliminating Racial Bias in Health Care AI: Mitigating algorithmic bias must take place across all stages of an algorithm’s life cycle. The experts defined this life cycle in five stages: 1. Identification of the problem that the algorithm will address. 2. Selection and management of data to be used by the algorithm. 3. Development, training, and validation of the algorithm. 4. Deployment of the algorithm. 5. Ongoing evaluation of performance and outcomes of the algorithm.
7. Yale Panel’s Guidelines on Eliminating Racial Bias in Health Care AI: Five guiding principles for preventing algorithmic bias: 1. Promote health and health care equity during all phases of the health care algorithm life cycle. 2. Ensure that health care algorithms and their use are transparent and explainable. 3. Authentically engage patients and communities during all phases of the health care algorithm life cycle, and earn trust. 4. Explicitly identify health care algorithmic fairness issues and tradeoffs. 5. Ensure accountability for equity and fairness in outcomes from health care algorithms.
8. “As the use of new AI techniques grows and grows, it will be important to watch out for these biases to make sure we do no harm to specific groups while advancing health for others. We need to develop strategies for AI to advance health for all.” - Lucila Ohno-Machado, MD, PhD, MBA
9. “If we fail to make ethical and inclusive artificial intelligence we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.” - Joy Buolamwini @jovialjoy
10. Agenda • Examples of bias in machine learning. • Solutions to combat unwanted bias. • Tools to combat unwanted bias. • Resources and how to get involved.
11. Questions posed to students in these courses: Is the technology fair? How do you make sure that the data is not biased? Should machines be judging humans?
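
One concrete way to start answering the “is the data biased?” question is to compare outcome base rates across groups before training anything. A minimal sketch with pandas; the column names and toy data are illustrative assumptions, while the 0.8 threshold is the conventional “four-fifths rule”.

```python
import pandas as pd

# Toy data: 'group' is a protected attribute, 'label' the favorable outcome.
df = pd.DataFrame({
    "group": ["a", "a", "a", "a", "b", "b", "b", "b"],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Base rate = share of favorable outcomes per group.
base_rates = df.groupby("group")["label"].mean()
print(base_rates)

# Disparate impact: unprivileged base rate / privileged base rate.
# The "four-fifths rule" conventionally flags ratios below 0.8.
di = base_rates["b"] / base_rates["a"]
print(f"disparate impact: {di:.2f}")
```

A skewed ratio here does not prove the data is unusable, but it is a signal that a trained model may reproduce the disparity.
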
12. Responsible AI Workstream at the Generative AI Commons: The Responsible AI Framework. We are a community of industry professionals, students, academics, experts, practitioners, and enthusiasts. We meet every other Thursday at 4pm Central European Time; your contributions are most welcome. Join us on Slack. Aiming for a document draft to share at AI_dev Hong Kong, 21-23 August 2024. https://genaicommons.org/
13. NIST AI Safety Institute Consortium (AISIC): The Consortium brings together more than 200 organizations to develop science-based and empirically backed guidelines and standards for AI measurement and policy. This will help ready the U.S. to address the capabilities of the next generation of AI models or systems, from frontier models to new applications and approaches, with appropriate risk management strategies. LF AI & Data is active in NIST AISIC.
    Workgroups: WG #1 - Risk Management for Generative AI; WG #2 - Synthetic Data; WG #3 - Capability Evaluations; WG #4 - Red Teaming; WG #5 - Safety & Security.
    https://www.nist.gov/aisi
14. EU Ethics Guidelines for Trustworthy Artificial Intelligence: According to the Guidelines, trustworthy AI should be: (1) lawful, respecting all applicable laws and regulations; (2) ethical, respecting ethical principles and values; (3) robust, both from a technical perspective and taking into account its social environment. Source: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
15. EU Artificial Intelligence Act vs. US Executive Order on AI.
    EU Artificial Intelligence Act: #1 Human agency and oversight. #2 Technical robustness and safety. #3 Privacy and data governance. #4 Transparency. #5 Diversity, non-discrimination and fairness. #6 Societal and environmental well-being. #7 Accountability.
    US Executive Order on AI: #1 Ensure responsible and effective government use of AI. #2 Ensure safety and security. #3 Protect Americans’ privacy. #4 Transparency. #5 Advance equity and civil rights. #6 Stand up for consumers and workers. #7 Accountability.
    Different in the US order: Promote innovation and competition; Advance American leadership abroad.
16. Highlights of Vermont’s State of AI and Data Privacy:
    2018-2020: Vermont Artificial Intelligence Task Force investigation. https://legislature.vermont.gov/assets/Legislative-Reports/Artificial-Intelligence-Task-Force-Final-Report-1.15.2020.pdf
    2022: Legislation created an AI Commission, hired the first AI Director, and released a code of AI ethics. https://www.wcax.com/2023/10/19/an-update-artificial-intelligence-use-state-government/
    February 2024: Council on Artificial Intelligence recommendation to the Legislature on a potential deepfake statute. https://digitalservices.vermont.gov/ai
    June 2024: Governor Scott vetoes a data privacy bill considered the strongest in the nation. https://apnews.com/article/data-privacy-vermont-veto-2ab84d8705fa38cf89c428daa1dbfc54
17. Agenda • Examples of bias in machine learning. • Solutions to combat unwanted bias. • Tools to combat unwanted bias. • Resources and how to get involved.
18. Trusted AI Lifecycle through Open Source: pillars of trust, woven into the lifecycle of an AI application.
    FAIRNESS: AI Fairness 360 (AIF360). Is it fair? https://github.com/Trusted-AI/AIF360
    EXPLAINABILITY: AI Explainability 360 (AIX360). Is it easy to understand? https://github.com/Trusted-AI/AIX360
    ROBUSTNESS: Adversarial Robustness 360 (ART). Did anyone tamper with it? https://github.com/Trusted-AI/adversarial-robustness-toolbox
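
As one hedged sketch of the robustness pillar in code: crafting adversarial inputs against a plain scikit-learn model with ART’s Fast Gradient Method. The dataset and the eps value are arbitrary choices for illustration, not from the talk.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an ordinary scikit-learn classifier on a stock dataset.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap it so ART can query predictions and loss gradients.
classifier = SklearnClassifier(model=model,
                               clip_values=(float(X.min()), float(X.max())))

# Fast Gradient Method: perturb each input by eps in the direction
# that most increases the loss, then measure the accuracy drop.
attack = FastGradientMethod(estimator=classifier, eps=0.5)
X_adv = attack.generate(x=X)

print("clean accuracy:      ", model.score(X, y))
print("adversarial accuracy:", model.score(X_adv, y))
```

The gap between the two accuracies is a rough measure of how easily the model can be tampered with through small input perturbations.
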
19. IBM Granite: 61% of CEOs identify concerns about data lineage and provenance as a barrier to adopting generative AI. • Intellectual property (IP) indemnity protection. • Built for the enterprise. • Built to minimize hateful and profane content, or “HAP.” • Strong focus on data governance, risk, and compliance. • Open source and available now on Hugging Face! https://huggingface.co/ibm-granite
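
A minimal sketch of pulling a Granite model from Hugging Face with the standard transformers API. The specific model id below is an assumption for illustration; browse https://huggingface.co/ibm-granite for the models actually published.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model id is an assumption for illustration; check the ibm-granite
# organization page on Hugging Face for the current list.
model_id = "ibm-granite/granite-8b-code-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion; sampling settings left at defaults.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
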
20. What is InstructLab? InstructLab is a model-agnostic open source AI project that facilitates contributions to Large Language Models (LLMs). It enables anyone to shape generative AI by contributing updates to existing LLMs in an accessible way. What makes it special? InstructLab’s model-agnostic technology gives model upstreams the ability to create regular builds of their open source licensed models, not by rebuilding and retraining the entire model but by composing new skills into it. https://instructlab.ai/
21. Key Projects: InstructLab is made up of several projects, defined as codebases and services with different release cycles, that collectively enable large-model development. Key InstructLab projects: 1. taxonomy, a tree of knowledge and skills; 2. ilab, a command-line interface (CLI) tool for model fine-tuning. https://instructlab.ai/
22. Agenda • Examples of bias in machine learning. • Solutions to combat unwanted bias. • Tools to combat unwanted bias. • Resources and how to get involved.
23. No matter what, it is your responsibility to build systems that are fair. (Photo by rawpixel on Unsplash)
24. Resources from this talk (the full link list appears at the top of this page).