Recently the concept of algorithmic fairness has garnered attention in the ML community, especially with respect to criminal justice and healthcare. Has this topic already been studied in the predictive modeling community? Eg in residual analysis? Are there papers specifically on this topic?
1 Answer
I'm outside my area of expertise here, but I'll give it a shot. Comments welcome. I can't find much early interest in algorithmic fairness from criminology or credit risk prediction, but there is some in health risk prediction, and I know of some nice modern reviews.
Berk et al. give an overview of fairness criteria that they say "draw[s] on the existing literature in criminology, computer science, and statistics."
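To make two of the commonly surveyed criteria concrete, here is a toy sketch (all numbers synthetic, not from any paper): demographic parity compares each group's rate of positive predictions, while error-rate balance compares error rates such as the false-positive rate across groups.

```python
# Toy illustration of two fairness criteria often discussed in this
# literature: demographic parity and false-positive-rate balance.
# The data below is entirely made up for illustration.

def rates(y_true, y_pred):
    """Return (positive-prediction rate, false-positive rate)."""
    ppr = sum(y_pred) / len(y_pred)
    negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    fpr = sum(negatives) / len(negatives)
    return ppr, fpr

# Two groups: (true outcomes, model predictions), synthetic.
group_a = ([0, 0, 1, 1, 0, 1], [0, 1, 1, 1, 0, 1])
group_b = ([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 0, 1])

ppr_a, fpr_a = rates(*group_a)
ppr_b, fpr_b = rates(*group_b)

# Demographic parity asks |ppr_a - ppr_b| to be small;
# false-positive-rate balance asks |fpr_a - fpr_b| to be small.
# A model can satisfy one criterion while violating the other,
# which is the kind of tension these surveys catalogue.
print(abs(ppr_a - ppr_b), abs(fpr_a - fpr_b))
```

On this toy data the false-positive rates happen to match while the positive-prediction rates do not, showing that the criteria are genuinely different constraints.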
Mitchell et al. give a more recent recap of the ML literature, with a more thorough framing of the problem.
In health risk prediction, Van de Ven and Ellis give a very sophisticated discussion of health insurance pricing that centers on an efficiency-fairness tradeoff. They argue that better risk adjustment allows more efficient operation at a given level of fairness, or vice versa, with perfect risk adjustment eliminating the need for a tradeoff. They discuss some of the tradeoffs for different policies regarding race, age, and sex.
https://www.researchgate.net/publication/4952935_Risk_Adjustment_In_Competitive_Health_Plan_Markets
In criminology, traditional policy-oriented work often explicitly uses age, sex, and race as predictors, sometimes without even mentioning the potential for discrimination. Examples:
https://www.justice.gc.ca/eng/rp-pr/csj-sjc/jsp-sjp/rr02_7/rr02_7.pdf https://www.ncjrs.gov/pdffiles1/nij/grants/237988.pdf
In retail credit risk prediction, Allen et al.'s wide-ranging work from 2004 claims "the body of research on retail credit risk measurement is quite sparse". This suggests that in the early 2000s, people were only beginning to predict individual credit risk, and I doubt anyone had gone past that to begin de-biasing predictions. Allen et al. discuss racial and gender discrimination only briefly in the context of "relationship lending", where new clients pay more interest than long-term clients. Discrimination only comes up in one other spot, and the authors seem to subscribe to an outmoded "fairness through unawareness" criterion, writing "[Fair Isaac's credit score] evaluation does not include characteristics that could bias a lender such as race, religion, national origin, gender, or marital status."
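Why "fairness through unawareness" is considered outmoded is easy to show with a sketch: omitting the protected attribute doesn't help when a correlated proxy feature remains. Everything below (the scoring rule, zip codes, incomes) is a hypothetical illustration, not anything from Allen et al.

```python
# A minimal sketch of why "fairness through unawareness" (simply
# omitting race, gender, etc. from the feature set) can fail:
# a correlated proxy feature reconstructs the protected attribute.
# All names and numbers here are synthetic.

HIGH_RISK_ZIPS = {"90001", "90002"}  # hypothetical zip codes

def score(zip_code, income):
    """A 'blind' scoring rule: the protected attribute is never an input."""
    base = 600 + income // 100
    # Penalising certain zip codes acts as a proxy for group membership
    # when residence correlates with the protected attribute.
    if zip_code in HIGH_RISK_ZIPS:
        base -= 80
    return base

# Synthetic applicants: group membership correlates with zip code.
applicants = [
    ("A", "10001", 50000), ("A", "10002", 48000),
    ("B", "90001", 50000), ("B", "90002", 48000),
]

for group, zc, inc in applicants:
    print(group, score(zc, inc))
```

Groups A and B have identical incomes, yet B scores 80 points lower across the board: the protected attribute was excluded, but its proxy was not, which is exactly the failure mode the modern literature criticizes.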
Edit
I just ran across https://fairmlbook.org, which gives a lovely introduction to the field. It seems the earliest reference they give is "Bias in Computer Systems" by Friedman and Nissenbaum.