
Advocating for #DeafSafeAI Regulations

Join us for an engaging workshop on the pivotal role of the Deaf community in shaping Automated Interpreting by Artificial Intelligence (AIxAI) technology. Our session will focus on key insights from the #DeafSafeAI report, underscoring how collective Deaf experiences uniquely position us to lead in developing policies for AI interpreting use.

Learn about our exploration into sociotechnical systems and why Deaf leadership is crucial in crafting guidelines for AI interpreting. We’ll delve into significant findings from two influential reports aimed at the Interpreting SAFE-AI Task Force, highlighting real-world user experiences and industry perspectives on ethical AI in interpreting.

Discover the Task Force’s journey and its dedication to advocating for comprehensive policies that ensure the fair and responsible integration of AI interpreting across all languages and settings, reflecting our progress since August 2023.

3Play Media

May 20, 2024

Transcript

  1. #DeafSafeAI. Advisory Group on AI and Sign Language Interpreting. April 30, 2024, ACCESS – Advocating for #DeafSafeAI Regulations.
  2. Advisory Group on AI and Sign Language Interpreting.
  3. AnnMarie Killian, CEO, TDIforAccess (TDI); Star Grieser, CEO, Registry of Interpreters for the Deaf; Jeff Shaul, Head of Tech, GoSign.AI LLC; Tim Riker, CDI, Senior Lecturer, Brown University. Thanks to Aashaka Desai for enhancing the accessibility of this presentation.
  4. We did a study: • Method • Findings • Discussion • Conclusion. https://safeaitf.org/deafsafeai/
  5. GOAL: Co-Designing Accountable AIxAI. How the Grassroots and Government can work together to regulate safe, fair, and ethical Automatic Interpreting by Artificial Intelligence.
  6. Motivation: AI will transform society for good or ill. We’re aiming for the good.
  7. Conceptual Lens: this lens emerged from the qualitative data; it was not pre-imposed.
  8. Sociotechnical Systems: “Sociotechnical refers to the interrelatedness of social and technical aspects of an organization. The cornerstone of the sociotechnical approach is the design process that leads to optimization of the two subsystems” (Botla and Kondur, 2018, p. 26).
  9. Deaf Wisdom: Deaf people have collective life experiences reflecting the two dimensions of sociotechnical systems. The KEY is attending to how the social behaviors of humans combine with, influence, and are shaped by the structures of technology, and vice versa. (ASL interpretation of Deaf Wisdom.)
  10. [Diagram] Sociotechnical Systems: Readiness, Results and Outcomes, Technological Quality (Social and Technical dimensions).
  11. Results and Outcomes: controls at the level of cultural groups; individual (consumer) authority and independence.
  12. Technological Quality: data modeling (machine learning); safeguards (safety and security); informed consent.
  13. Readiness: Sign Language Recognition (SLR); readiness of American Deaf communities; accountability.
  14. [Diagram] Sociotechnical Systems: Readiness, Results and Outcomes, Technological Quality (Social and Technical dimensions).
  15. The Usual Pipeline in [Healthcare] AI Model Development. Chen IY, et al. 2021. Annu. Rev. Biomed. Data Sci. 4:123–44.
  16. Disparities in funding and problem selection priorities are an ethical violation of principles of justice. Chen et al. (2021) critique the typical ‘business as usual’ pipeline of AI model development.
  17. A focus on convenient samples can exacerbate existing disparities in marginalized and underserved populations, violating do-no-harm principles. Chen et al. (2021) critique the typical ‘business as usual’ pipeline of AI model development.
  18. Biased clinical knowledge, implicit power differentials, and social disparities of the healthcare system encode bias in outcomes that violate justice principles. Chen et al. (2021) critique the typical ‘business as usual’ pipeline of AI model development.
  19. Default practices, like evaluating performance on large populations, violate beneficence and justice principles when algorithms do not work for subpopulations. Chen et al. (2021) critique the typical ‘business as usual’ pipeline of AI model development.
  20. Targeted, spot-check audits and a lack of model documentation ignore systematic shifts in populations, risking patient safety and furthering risk to underserved groups. Chen et al. (2021) critique the typical ‘business as usual’ pipeline of AI model development.
  21. SOLUTIONS. Deaf-Safe AI: A Legal Foundation for Ubiquitous Automatic Interpreting.
  22. It is possible to design for justice.
  23. Deaf people need to be decision makers at every step of the design process in order to create an ethical pipeline for AIxAI model development.
  24. [Diagram] Sociotechnical Systems: Readiness, Results and Outcomes, Technological Quality (Social and Technical dimensions).
  25. Where do Sociotechnical Systems start? (Diagram: Readiness, Tech Quality.)
  26. How do we get to Sociotechnical Quality? • Technology quality enables or prevents • AI (and any) software affords or disaffords. (Diagram: Tech Quality.)
  27. How do we get to Sociotechnical Quality? How much readiness is necessary to design for safety, fairness, and ethics? (Diagram: Tech Experiments, Readiness.)
  28. How do we get to Sociotechnical Quality? How do you design affordances for people who are Deaf/HH/Deafblind? (Diagram: Tech Experiments, Results & Outcomes, Readiness.)
  29. How do we get to Sociotechnical Quality? • Design happens in loops with continual evaluation • Results are short-term measures of a defined risk. (Diagram: Tech Experiments → Results & Outcomes, repeated.)
  30. How do we get to Sociotechnical Quality? • Design happens in loops with continual evaluation • Defined tasks set the limits (disaffordances) on outcomes. (Diagram: Tech Experiments, Results & Outcomes.)
  31. [Diagram] Sociotechnical Systems: Tech Quality, Results & Outcomes, Readiness.
  32. Where do you intervene to influence design outcomes? • Social decision-making (conscious and unconscious) creates the tech we get! • People choose what to measure, and why to measure that instead of this. (Diagram: Tech Quality, Results & Outcomes, Readiness.)
  33. Pick “tasks” that lead to desired outcomes.* *Immediate results matter, but ultimate outcomes guide task selection.
  34. Pick “tasks” that lead to desired outcomes.* *Immediate results matter, but ultimate outcomes guide task selection. (Results & Outcomes.) Disparities in funding and problem selection priorities are an ethical violation of principles of justice. A focus on convenient samples can exacerbate existing disparities in marginalized and underserved populations, violating do-no-harm principles.
  35. To STOP inequality and discrimination: 1. Follow design justice principles¹ 2. Understand how “the social” influences technology (sociotechnical systems) and 3. Build intersectional benchmarks². ¹Costanza-Chock, Sasha. 2020. Design Justice: Community-Led Practices to Build the Worlds We Need. Cambridge, MA: The MIT Press. ²Buolamwini, J., and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81:1–15.
  36. How do you make intersectional benchmarks? You iterate through key questions:³ • What are the underlying assumptions about “inclusion”? • What are the underlying assumptions about “fairness”? • How does “symmetrical treatment” as a goal differ from goals of “algorithmic democracy” and “algorithmic justice”? ³Costanza-Chock, pp. 61–63.
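In the spirit of the Gender Shades work the deck cites, an intersectional benchmark ultimately means reporting performance per subgroup, and the gap between subgroups, rather than a single aggregate score. A minimal sketch of that idea, with hypothetical function names and data shape (not from the report):

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Accuracy per intersectional subgroup.

    records: iterable of (subgroup, predicted, actual) tuples, where
    subgroup is a tuple of attribute values, e.g. ("deaf", "asl-native").
    Returns {subgroup: accuracy}, making disparities across subgroups
    visible instead of hiding them in an overall average.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for subgroup, predicted, actual in records:
        total[subgroup] += 1
        if predicted == actual:
            correct[subgroup] += 1
    return {group: correct[group] / total[group] for group in total}

def max_disparity(per_group):
    """Gap between the best- and worst-served subgroups; a benchmark
    should report this number, not just the overall accuracy."""
    values = list(per_group.values())
    return max(values) - min(values)
```

An evaluation that tracks `max_disparity` alongside overall accuracy operationalizes the deck's point: an algorithm that "works on average" can still fail a subpopulation, and the benchmark has to surface that failure.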
  37. How do you make intersectional benchmarks? • Involve deaf people in every stage⁴ • Involve deaf people in selection of outcomes • Repeat • Repeat • Repeat until the definition of desired outcomes satisfies Deaf autonomy and independence • Involve deaf people in task definition and evaluation • Repeat • Repeat until the algorithms no longer perpetuate systemic discriminatory effects • Repeat until post-deployment considerations involve real continued improvements rather than trying to un-do bias and harm. ⁴Merging Chen et al. 2021 and Costanza-Chock 2020.
  38. Call to Action. Learn: #DeafSafeAI report; design justice principles; community needs and values. Inform: new technologies; new regulations; case studies. Discuss: standards; governance; accreditation.
  39. Thank you for watching! Please fill out the survey: https://3playmarketing.typeform.com/to/a8DtpGUc