Why is my classifier discriminatory?

Bibliographic Details
Main Authors: Sontag, David (Author), Johansson, Fredrik D. (Author)
Format: Article
Language: English
Published: 2021-11-04T11:57:10Z.
Subjects:
Online Access: Get fulltext
LEADER 01417 am a22001573u 4500
001 137319
042 |a dc 
100 1 0 |a Sontag, David  |e author 
700 1 0 |a Johansson, Fredrik D.  |e author 
245 0 0 |a Why is my classifier discriminatory? 
260 |c 2021-11-04T11:57:10Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/137319 
520 |a © 2018 Curran Associates Inc. All rights reserved. Recent attempts to achieve fairness in predictive models focus on the balance between fairness and accuracy. In sensitive applications such as healthcare or criminal justice, this trade-off is often undesirable, as any increase in prediction error could have devastating consequences. In this work, we argue that the fairness of predictions should be evaluated in the context of the data, and that unfairness induced by inadequate sample sizes or unmeasured predictive variables should be addressed through data collection, rather than by constraining the model. We decompose cost-based metrics of discrimination into bias, variance, and noise, and propose actions aimed at estimating and reducing each term. Finally, we perform case studies on prediction of income, mortality, and review ratings, confirming the value of this analysis. We find that data collection is often a means to reduce discrimination without sacrificing accuracy. 
546 |a en 
655 7 |a Article 
773 |t Advances in Neural Information Processing Systems
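
Note: the abstract above (field 520) refers to a decomposition of cost-based discrimination metrics into bias, variance, and noise. As a rough illustrative sketch only (the notation here is assumed for exposition and is not taken from this record), the expected cost gamma of a learned predictor \hat{Y} on a protected group a can be written as

    \bar{\gamma}_a(\hat{Y}) = \bar{B}_a + \bar{V}_a + \bar{N}_a ,

where \bar{N}_a is the noise term (error irreducible without additional predictive variables), \bar{B}_a the bias of the chosen model class, and \bar{V}_a the variance attributable to finite sample size. Comparing the two groups' costs term by term, e.g. \bar{\gamma}_a - \bar{\gamma}_{a'}, indicates whether collecting more samples, measuring new variables, or changing the model class is the more promising way to reduce the disparity.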