Slide 1

Slide 1 text

#DeafSafeAI
Advisory Group on AI and Sign Language Interpreting
April 30, 2024
ACCESS – Advocating for #DeafSafeAI Regulations

Slide 2

Slide 2 text

Advisory Group on AI and Sign Language Interpreting
April 30, 2024
ACCESS – Advocating for #DeafSafeAI Regulations

Slide 3

Slide 3 text

• AnnMarie Killian, CEO, TDIforAccess (TDI)
• Star Grieser, CEO, Registry of Interpreters for the Deaf
• Jeff Shaul, Head of Tech, GoSign.AI LLC
• Tim Riker, CDI, Senior Lecturer, Brown University
Thanks to Aashaka Desai for enhancing the accessibility of this presentation.

Slide 4

Slide 4 text

We did a study.
• Method
• Findings
• Discussion
• Conclusion
https://safeaitf.org/deafsafeai/

Slide 5

Slide 5 text

GOAL: Co-Designing Accountable AIxAI
How grassroots communities and government can work together to regulate safe, fair, and ethical Automatic Interpreting by Artificial Intelligence.

Slide 6

Slide 6 text

Motivation
AI will transform society for good or ill. We’re aiming for the good.

Slide 7

Slide 7 text

Conceptual Lens
This lens emerged from the qualitative data; it was not pre-imposed.

Slide 8

Slide 8 text

Sociotechnical Systems
“Sociotechnical refers to the interrelatedness of social and technical aspects of an organization. The cornerstone of the sociotechnical approach is the design process that leads to optimization of the two subsystems” (Botla and Kondur, 2018, p. 26).

Slide 9

Slide 9 text

Deaf Wisdom
Deaf people have collective life experiences reflecting the two dimensions of sociotechnical systems. The KEY is attending to how the social behaviors of humans combine with, influence, and are shaped by the structures of technology, and vice versa.
ASL interpretation of Deaf Wisdom

Slide 10

Slide 10 text

Sociotechnical Systems
(Diagram: Readiness, Results and Outcomes, Technological Quality; Social and Technical dimensions)

Slide 11

Slide 11 text

Results and Outcomes
• Controls at the level of cultural groups
• Individual (Consumer) Authority and Independence

Slide 12

Slide 12 text

Technological Quality
• Data Modeling (machine learning)
• Safeguards: Safety and Security
• Informed Consent

Slide 13

Slide 13 text

Readiness
• Sign Language Recognition (SLR)
• Readiness of American Deaf communities
• Accountability

Slide 14

Slide 14 text

Sociotechnical Systems
(Diagram: Readiness, Results and Outcomes, Technological Quality; Social and Technical dimensions)

Slide 15

Slide 15 text

The Usual Pipeline in [Healthcare] AI Model Development
Chen IY, et al. 2021. Annu. Rev. Biomed. Data Sci. 4:123-44.

Slide 16

Slide 16 text

Disparities in funding and problem selection priorities are an ethical violation of principles of justice.
Chen et al. (2021) critique the typical ‘business as usual’ pipeline of AI model development.

Slide 17

Slide 17 text

A focus on convenience samples can exacerbate existing disparities in marginalized and underserved populations, violating do-no-harm principles.
Chen et al. (2021) critique the typical ‘business as usual’ pipeline of AI model development.

Slide 18

Slide 18 text

Biased clinical knowledge, implicit power differentials, and social disparities of the healthcare system encode bias in outcomes that violate justice principles.
Chen et al. (2021) critique the typical ‘business as usual’ pipeline of AI model development.

Slide 19

Slide 19 text

Default practices, like evaluating performance on large populations, violate beneficence and justice principles when algorithms do not work for subpopulations.
Chen et al. (2021) critique the typical ‘business as usual’ pipeline of AI model development.
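To make that concrete, here is a minimal sketch, not from the report, showing how a strong aggregate score can hide failure for a smaller subpopulation; the group labels and counts are hypothetical.

```python
# Hypothetical evaluation counts: (correct predictions, total examples) per group.
# Group names and numbers are illustrative only.
results = {
    "majority_group": (960, 1000),
    "deaf_signer_subpopulation": (55, 100),
}

total_correct = sum(correct for correct, _ in results.values())
total_examples = sum(total for _, total in results.values())

# Aggregate accuracy looks strong (about 92%)...
print(f"aggregate accuracy: {total_correct / total_examples:.1%}")

# ...while disaggregated evaluation exposes a large gap (96% vs. 55%).
for group, (correct, total) in results.items():
    print(f"{group}: {correct / total:.1%}")
```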

Slide 20

Slide 20 text

Targeted, spot-check audits and a lack of model documentation ignore systematic shifts in populations, risks, and patient safety, furthering risk to underserved groups.
Chen et al. (2021) critique the typical ‘business as usual’ pipeline of AI model development.
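One possible illustration of the alternative this slide implies, continuous documented audits rather than one-off spot checks, is sketched below; the field names, populations, and threshold are assumptions for illustration, not from the report.

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal, illustrative stand-in for model documentation kept alongside the model
# (in the spirit of "model cards"); field names here are hypothetical.
@dataclass
class ModelDoc:
    name: str
    intended_use: str
    evaluated_populations: list[str]
    known_limitations: list[str]
    audit_log: list[tuple[date, str, float]] = field(default_factory=list)  # (when, population, accuracy)

def recurring_audit(doc: ModelDoc, population: str, accuracy: float, threshold: float = 0.90) -> None:
    """Record every audit and flag populations whose performance has drifted below the threshold."""
    doc.audit_log.append((date.today(), population, accuracy))
    if accuracy < threshold:
        print(f"ALERT: {doc.name} at {accuracy:.0%} for {population} (threshold {threshold:.0%})")

doc = ModelDoc(
    name="hypothetical-sign-language-interpreting-model",
    intended_use="low-stakes captioning support; not medical, legal, or emergency settings",
    evaluated_populations=["deaf ASL-native signers", "DeafBlind tactile-ASL users"],
    known_limitations=["untested on regional sign variation"],
)

# Scheduled, documented audits instead of one-off spot checks.
recurring_audit(doc, "DeafBlind tactile-ASL users", 0.72)
```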

Slide 21

Slide 21 text

SOLUTIONS
Deaf-Safe AI: A Legal Foundation for Ubiquitous Automatic Interpreting

Slide 22

Slide 22 text

It is possible to design for justice.

Slide 23

Slide 23 text

Deaf people need to be decision makers at every step of the design process in order to create an ethical pipeline for AIxAI model development.

Slide 24

Slide 24 text

Sociotechnical Systems
(Diagram: Readiness, Results and Outcomes, Technological Quality; Social and Technical dimensions)

Slide 25

Slide 25 text

Where do Sociotechnical Systems start?
(Diagram: Readiness, Tech Quality)

Slide 26

Slide 26 text

How do we get to Sociotechnical Quality?
• Technology quality enables or prevents
• AI (and any) software affords or disaffords
(Diagram: Tech Quality)

Slide 27

Slide 27 text

How do we get to Sociotechnical Quality?
How much readiness is necessary to design for safety, fairness, and ethics?
(Diagram: Tech Experiments, Readiness)

Slide 28

Slide 28 text

How do we get to Sociotechnical Quality?
How do you design affordances for people who are Deaf/HH/Deafblind?
(Diagram: Tech Experiments, Results & Outcomes, Readiness)

Slide 29

Slide 29 text

How do we get to Sociotechnical Quality?
• Design happens in loops with continual evaluation
• Results are short-term measures of a defined risk
(Diagram: Tech Experiments, Results & Outcomes)

Slide 30

Slide 30 text

How do we get to Sociotechnical Quality?
• Design happens in loops with continual evaluation
• Defined tasks set the limits (disaffordances) on outcomes
(Diagram: Tech Experiments, Results & Outcomes)

Slide 31

Slide 31 text

Sociotechnical Systems
(Diagram: Tech Quality, Results & Outcomes, Readiness)

Slide 32

Slide 32 text

Where do you intervene to influence design outcomes?
• Social decision-making (conscious and unconscious) creates the tech we get!
• People choose what to measure, and why to measure that instead of this.
(Diagram: Tech Quality, Results & Outcomes, Readiness)

Slide 33

Slide 33 text

Pick “tasks” that lead to desired outcomes*
*Immediate results matter, but ultimate outcomes guide task selection.

Slide 34

Slide 34 text

Pick “tasks” that lead to desired outcomes*
*Immediate results matter, but ultimate outcomes guide task selection.
(Diagram: Results & Outcomes)
Disparities in funding and problem selection priorities are an ethical violation of principles of justice.
A focus on convenience samples can exacerbate existing disparities in marginalized and underserved populations, violating do-no-harm principles.

Slide 35

Slide 35 text

To STOP inequality and discrimination:
1. Follow design justice principles [1]
2. Understand how “the social” influences technology (sociotechnical systems), and
3. Build intersectional benchmarks [2]

[1] Costanza-Chock, Sasha. 2020. Design Justice: Community-Led Practices to Build the Worlds We Need. Cambridge, MA: The MIT Press.
[2] Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81:1-15.
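A minimal sketch of what building an intersectional benchmark can look like in code, in the spirit of Buolamwini and Gebru (2018): accuracy is reported for every intersection of attributes rather than for each attribute alone. The attribute names, values, and records below are illustrative assumptions, not data from the report.

```python
from collections import defaultdict

# Illustrative evaluation records: (hearing status, language background, prediction correct?).
# Attribute names and values are hypothetical placeholders.
records = [
    ("deaf", "ASL-native", True),
    ("deaf", "ASL-native", False),
    ("deaf", "ASL-second-language", True),
    ("hearing", "ASL-second-language", True),
    ("DeafBlind", "tactile-ASL", False),
]

# Tally accuracy for each intersection of the two attributes, not each attribute alone.
tally = defaultdict(lambda: [0, 0])  # (attr_a, attr_b) -> [correct, total]
for hearing_status, language, correct in records:
    tally[(hearing_status, language)][0] += int(correct)
    tally[(hearing_status, language)][1] += 1

for (hearing_status, language), (correct, total) in sorted(tally.items()):
    print(f"{hearing_status} x {language}: {correct}/{total} = {correct / total:.0%}")
```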

Slide 36

Slide 36 text

How do you make intersectional benchmarks? You iterate through key questions: [3]
• What are the underlying assumptions about “inclusion”?
• What are the underlying assumptions about “fairness”?
• How does “symmetrical treatment” as a goal differ from goals of “algorithmic democracy” and “algorithmic justice”?

[3] Costanza-Chock 2020, pp. 61-63.

Slide 37

Slide 37 text

How do you make intersectional benchmarks?
• Involve deaf people in every stage [4]
• Involve deaf people in selection of outcomes
• Repeat
• Repeat
• Repeat until the definition of desired outcomes satisfies Deaf autonomy and independence
• Involve deaf people in task definition and evaluation
• Repeat
• Repeat until the algorithms no longer perpetuate systemic discriminatory effects
• Repeat until post-deployment considerations involve real continued improvements rather than trying to undo bias and harm

[4] Merging Chen et al. 2021 and Costanza-Chock 2020.

Slide 38

Slide 38 text

Call to Action
Learn
• #DeafSafeAI report
• Design justice principles
• Community needs and values
Inform
• New technologies
• New regulations
• Case studies
Discuss
• Standards
• Governance
• Accreditation

Slide 39

Slide 39 text

Thank you for watching! Please fill out the survey. https://3playmarketing.typeform.com/to/a8DtpGUc