Fairness in Learning: Classic and Contextual Bandits

fatml

November 18, 2016

Transcript

  1. Fairness in Learning: Classic and Contextual Bandits. Jamie Morgenstern,
     joint work with Matthew Joseph, Michael Kearns, and Aaron Roth. University of Pennsylvania.
  2. Automated decisions of consequence: hiring [Miller, 2015], lending [Byrnes, 2016],
     policing/sentencing/parole [Rudin, 2013], [Barry-Jester et al., 2015].
  3. Each individual has an inherent 'quality' (e.g., the expected revenue from giving
     them a loan) entitling them to access to a resource (high-revenue individuals deserve
     loans). We observe features (e.g., a loan application), not qualities directly, and
     must learn the feature-to-quality relationship, which we assume may differ across groups!
  4. A source of bias in ML: data feedback loops. We only observe outcomes for, and update
     estimates of, the individuals the current model already believes are high-quality
     (a toy simulation follows below).
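    A toy simulation makes the feedback loop concrete. Everything here (group labels,
    rates, the approval threshold) is made up for illustration: two groups repay at the
    same true rate, but the group with a pessimistic initial estimate is never approved,
    so no data ever arrives to correct that estimate.

    import random

    def feedback_loop(rounds=10000, threshold=0.5):
        true_rate = {'A': 0.8, 'B': 0.8}   # both groups are equally good
        est = {'A': 0.8, 'B': 0.4}         # but B starts out underestimated
        n = {'A': 1, 'B': 1}               # pseudo-counts for the prior estimates
        for _ in range(rounds):
            g = random.choice(['A', 'B'])           # an applicant arrives
            if est[g] >= threshold:                 # approve only if estimate is high...
                repaid = random.random() < true_rate[g]
                n[g] += 1                           # ...so only approved groups
                est[g] += (repaid - est[g]) / n[g]  # ever update their estimate
        return est

    # A's estimate converges to ~0.8; B's stays frozen at 0.4 forever,
    # because B never clears the threshold and is never observed.
    print(feedback_loop())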
  5. We study a new notion of fairness: high-quality individuals must be treated at least
     as well as lower-quality individuals, and the "cost" of this constraint with respect
     to the learning rate R(T) (regret minimization). The focus is fair learning rather
     than finding a fair model.
  6. Assumptions: k groups; each group has a function mapping features to 'qualities'
     (initially unknown, belonging to a class C), and these functions can differ across groups.
  7. Information/decision model. Each day t ∈ [T]: observe a feature vector from each
     group; choose one individual based on features; observe a noisy estimate of the
     chosen individual's quality. Goal: maximize expected average quality (a code sketch
     follows below).
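    A minimal sketch of this round-by-round protocol, assuming (purely for illustration)
    linear group-specific quality functions and Gaussian observation noise; the uniformly
    random choice stands in for whatever a real learner would do with its current estimates.

    import random

    K, D, T = 3, 5, 100
    # One unknown linear quality function per group (hidden from the learner).
    betas = [[random.gauss(0, 1) for _ in range(D)] for _ in range(K)]

    total_quality = 0.0
    for t in range(T):
        # Each day: observe one feature vector per group...
        xs = [[random.random() for _ in range(D)] for _ in range(K)]
        qualities = [sum(b * x for b, x in zip(betas[i], xs[i])) for i in range(K)]
        # ...choose one individual based on features only (uniform here)...
        i = random.randrange(K)
        # ...and observe a noisy estimate of the chosen individual's quality.
        y = qualities[i] + random.gauss(0, 0.1)
        total_quality += qualities[i]
    # Goal: maximize the expected average quality, total_quality / T.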
  8. Fairness definition: an algorithm A(δ) is fair if, for all δ ∈ (0, 1] and any
     sequence x1, …, xT, with probability at least 1 − δ, for all rounds t and all pairs
     of groups i, j: if E[quality of i at t] ≥ E[quality of j at t], then
     P[A chooses i at t | x1, …, xt] ≥ P[A chooses j at t | x1, …, xt],
     i.e., A weakly favors i over j in round t (a toy check follows below).
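    The round-t condition can be checked mechanically. The helper below is ours and
    hypothetical, not from the talk: given each group's expected quality and the
    algorithm's conditional choice probabilities at round t, every weakly-higher-quality
    group must be chosen with at least the probability of every weakly-lower-quality
    group (so equal qualities force equal probabilities).

    def fair_at_round(expected_quality, choice_prob):
        groups = range(len(expected_quality))
        return all(choice_prob[i] >= choice_prob[j]
                   for i in groups for j in groups
                   if expected_quality[i] >= expected_quality[j])

    assert fair_at_round([0.9, 0.9, 0.2], [0.45, 0.45, 0.10])   # fair
    assert not fair_at_round([0.9, 0.2], [0.30, 0.70])          # unfair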
  9. Separation between fair and unfair learning without features. Theorem 1: without
     the fairness constraint one can achieve regret R(T) = Õ(√(kT)), but any fair
     algorithm must have R(T) = Ω̃(√(k³T)) (an illustrative sketch follows below).
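    The slides don't spell out the fair algorithm, but the closing point (be *confident*
    about relative qualities before treating anyone preferentially) suggests the shape of
    one: keep confidence intervals for each group's quality and randomize uniformly among
    all groups whose intervals still overlap, transitively, with the best-looking one.
    Below is a minimal illustrative sketch in that spirit, assuming rewards in [0, 1] and
    Hoeffding-style intervals; it is not claimed to be the paper's exact algorithm or to
    achieve its bounds.

    import math
    import random

    def fair_bandit(pulls, T, delta=0.05):
        """pulls[i]() returns a noisy reward in [0, 1] for arm i
        (a hypothetical interface for this sketch)."""
        k = len(pulls)
        counts, means = [0] * k, [0.0] * k

        def radius(n):
            # Hoeffding radius with a crude union bound over arms and rounds.
            return math.inf if n == 0 else math.sqrt(
                math.log(2 * k * T / delta) / (2 * n))

        for t in range(T):
            lo = [means[i] - radius(counts[i]) for i in range(k)]
            hi = [means[i] + radius(counts[i]) for i in range(k)]
            # Chain outward from the arm with the highest upper bound,
            # absorbing any arm whose interval overlaps the chained set.
            chained = {max(range(k), key=lambda i: hi[i])}
            grew = True
            while grew:
                grew = False
                for i in range(k):
                    if i not in chained and any(
                            lo[i] <= hi[j] and lo[j] <= hi[i] for j in chained):
                        chained.add(i)
                        grew = True
            # Fairness in action: arms we cannot yet tell apart are
            # played uniformly at random, never preferentially.
            i = random.choice(sorted(chained))
            r = pulls[i]()
            counts[i] += 1
            means[i] += (r - means[i]) / counts[i]
        return means

    # Example: three Bernoulli arms; early rounds are uniform, and
    # preference emerges only once the intervals separate.
    arms = [lambda p=p: float(random.random() < p) for p in (0.3, 0.5, 0.7)]
    print(fair_bandit(arms, 20000))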
  10. Feature-based fair learning [Strehl and Littman, 2008], [Li et al., 2011].
      Theorem 2 implications: there is a fair algorithm for d-dimensional linear mappings
      from features to qualities with regret R(T) = O(T^(1−c) · poly(k, d, ln(1/δ))) for
      some constant c > 0. Any fair algorithm for d-dimensional conjunction mappings must
      have regret R(T) = Ω(2^d).
  11. Related work. Fairness through data mining: Pedreschi et al. '08; HDF '13;
      HDFMB '11; ZKC '11; …. "Group" fairness: CV '10; FKL '16; JL '15; KC '11; KKZ '12;
      KAAS '12; …. "Individual" fairness: Dwork et al. '12; Johnson, Foster, Stine '16;
      Hardt, Price, Srebro '16; Kleinberg, Mullainathan, Raghavan '16. Our work focuses
      on fair learning rather than finding a fair model.
  31. Conclusions 12 There’s a cost to fairness in some cases,

    this cost is mild, in others, great Must be *confident* about relative qualities before preferential treatment ensues New notion of fairness: higher-quality ⇒ better treatment Thanks!