Slide 1

Responsible ML
Daron Yöndem
http://daron.me | @daronyondem

Slide 2

The What and the Why?
• Fairness: ML models may behave unfairly by negatively impacting groups of people, such as those defined by race, gender, or age.
• Interpretability: The ability to explain which parameters a model uses and how it "thinks" in reaching its outcome, for example for regulatory oversight.
• Differential Privacy: Analyzing applications' use of personal data without accessing or revealing the identities of individuals.

Slide 3

No content

Slide 4

No content

Slide 5

No content

Slide 6

No content

Slide 7

Fairlearn
• Fairness assessment
• Fairness mitigation (in classification and regression models)
• Applicable during or after model building
• Open source: https://github.com/fairlearn
• Integrated into Azure Machine Learning
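To make "fairness assessment" concrete, here is a minimal plain-Python sketch of a disparity metric of the kind Fairlearn computes (Fairlearn's own API provides this as `fairlearn.metrics.demographic_parity_difference`; the function below is a hand-rolled illustration, and all data is fabricated):

```python
# Toy demographic-parity check: the gap in selection rates between
# sensitive groups. This is an illustration of the metric, not
# Fairlearn's actual implementation.

def selection_rate(preds):
    """Fraction of positive (1) predictions."""
    return sum(preds) / len(preds)

def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in selection rate between any two groups."""
    rates = [
        selection_rate([p for p, g in zip(y_pred, sensitive) if g == grp])
        for grp in set(sensitive)
    ]
    return max(rates) - min(rates)

# Hypothetical model predictions and a binary sensitive feature.
y_pred    = [1, 0, 1, 1, 0, 1, 0, 0]
sensitive = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(y_pred, sensitive)
print(gap)  # group A is selected at 0.75, group B at 0.25 -> gap 0.5
```

A gap of 0 would mean both groups are selected at the same rate; mitigation algorithms such as Fairlearn's reductions approach retrain the model to shrink this gap subject to an accuracy trade-off.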

Slide 8

Fairlearn Demo

Slide 9

InterpretML
• Glassbox models (decision trees, rule lists, linear models, Explainable Boosting Machine)
• Blackbox explainers (for an existing model)
• Blackbox explanations are approximations
• Open source: https://github.com/interpretml
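The glassbox/blackbox distinction can be shown with a tiny sketch. A linear model is a glassbox: each feature's contribution to the score is exactly weight × value, so the explanation is the model itself rather than an approximation. The weights and feature names below are made up for illustration and are not InterpretML's API:

```python
# Glassbox illustration: a linear model explains itself exactly.
# Hypothetical weights for a toy credit-scoring model.
weights = {"income": 0.8, "debt": -1.2, "age": 0.1}
intercept = 0.5

def predict_and_explain(x):
    """Return the score and an exact per-feature attribution."""
    contributions = {f: weights[f] * x[f] for f in weights}
    score = intercept + sum(contributions.values())
    return score, contributions

score, contribs = predict_and_explain({"income": 2.0, "debt": 1.0, "age": 3.0})
print(score)     # 0.5 + 1.6 - 1.2 + 0.3, i.e. about 1.2
print(contribs)  # exact attribution, not an approximation
```

For a blackbox model (say, a deep network) no such exact decomposition exists, which is why blackbox explainers must approximate the model's behavior around a prediction.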

Slide 10

InterpretML Demo

Slide 11

Differential Privacy
The guarantee of a differentially private algorithm is that its behavior hardly changes when a single individual joins or leaves the dataset.
• SmartNoise: https://smartnoise.org/
This toolkit uses state-of-the-art differential privacy (DP) techniques to inject noise into data, preventing disclosure of sensitive information and managing exposure risk.
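A minimal sketch of the core idea behind such toolkits (not SmartNoise's actual API): the Laplace mechanism adds noise scaled to a query's sensitivity, so that one individual's presence or absence barely shifts the output distribution. The dataset and function names below are fabricated for illustration:

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(values, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon gives the epsilon-DP guarantee.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical dataset: ages of individuals (fabricated).
ages = [23, 35, 41, 52, 29, 60, 47, 33]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
print(noisy)  # the true count is 4; the released value is 4 plus noise
```

Smaller epsilon means stronger privacy but noisier answers; production toolkits add careful bounds checking, composition accounting, and vetted samplers on top of this basic mechanism.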

Slide 12

Links worth sharing
• Microsoft Learn, "Explore differential privacy": https://drn.fyi/2QRL3V1
• Capgemini report, "AI and the ethical conundrum": https://drn.fyi/3gBsz5G
• IDC report, "Empowering your organization with Responsible AI": https://drn.fyi/3sKilCx

Slide 13

Thanks
http://daron.me | @daronyondem
Download the slides here: http://daron.me/decks