Slide 19
Mitigating the Risk of Bias
FairTest by Columbia University
• Contains metrics to test for “unwarranted associations between an algorithm's
outputs and certain user subpopulations identified by protected features”
• Identifies subpopulations with disproportionately high error rates, flags
offensive labeling, and detects uneven rates of algorithmic error across
groups (a sketch of such error profiling follows below)
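To illustrate the error-profiling bullet above: a minimal sketch under our own
assumptions, not FairTest's actual algorithm. The error_profile helper and the
ratio_threshold parameter are hypothetical names, and the flagging rule (a
subgroup's error rate exceeding 1.5x the overall rate) is an illustrative choice.

import numpy as np

def error_profile(y_true, y_pred, groups, ratio_threshold=1.5):
    # Compare each subgroup's error rate to the overall error rate and
    # flag groups whose rate is disproportionately high (illustrative rule,
    # not FairTest's statistical methodology).
    errors = (y_true != y_pred)
    overall = errors.mean()
    profile = {}
    for g in np.unique(groups):
        rate = errors[groups == g].mean()
        profile[g] = (rate, rate > ratio_threshold * overall)
    return overall, profile

# Toy data: group "B" is given a higher error rate by construction.
rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=1000)
y_true = rng.integers(0, 2, size=1000)
flip = rng.random(1000) < np.where(groups == "B", 0.3, 0.1)
y_pred = np.where(flip, 1 - y_true, y_true)

overall, profile = error_profile(y_true, y_pred, groups)
print(f"overall error rate: {overall:.3f}")
for g, (rate, flagged) in profile.items():
    marker = "  <-- disproportionate" if flagged else ""
    print(f"group {g}: error rate {rate:.3f}{marker}")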
Some included metrics: Normalized Mutual Information, Normalized
Conditional Mutual Information, Binary Ratio, Binary Difference
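To make these metrics concrete: a minimal sketch, assuming the common
definitions of Binary Ratio and Binary Difference as the ratio and difference
of positive-output rates between two protected subgroups, with Normalized
Mutual Information computed via scikit-learn. The helper names and toy data
are ours, not FairTest's API.

import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def binary_ratio(outputs, protected):
    # Ratio of positive-output rates between the two subgroups (assumed definition).
    return outputs[protected == 1].mean() / outputs[protected == 0].mean()

def binary_difference(outputs, protected):
    # Difference of positive-output rates between the two subgroups (assumed definition).
    return outputs[protected == 1].mean() - outputs[protected == 0].mean()

# Toy data: the protected subgroup receives positive outputs less often.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=1000)
outputs = (rng.random(1000) < np.where(protected == 1, 0.4, 0.6)).astype(int)

# Normalized Mutual Information between the protected feature and the output:
# 0 indicates no association, 1 indicates perfect dependence.
print("NMI:              ", normalized_mutual_info_score(protected, outputs))
print("Binary ratio:     ", binary_ratio(outputs, protected))
print("Binary difference:", binary_difference(outputs, protected))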