There is new research to support a racial bias challenge to COMPAS. You may recall that last spring ProPublica studied COMPAS scores for some 10,000 people arrested for crimes in Broward County, Florida, and published its results. It found that black defendants were twice as likely as white defendants to be incorrectly labeled as higher risk of reoffending. And white defendants labeled low risk were far more likely to end up being charged with new offenses than black defendants with comparably low COMPAS risk scores.
Northpointe denied the charge, noting that it had designed COMPAS so that its scores were about 60% accurate for both black and white defendants. This caught the attention of several groups of researchers at Google, Stanford, the University of Chicago, Harvard and Cornell. They independently studied the question:
Since blacks are re-arrested more often than whites, is it possible to create a formula that is equally predictive for all races without disparities in who suffers the harm of incorrect predictions?
All of the researchers concluded “no.” The new research, summarized and linked in ProPublica’s latest post, found that an algorithm crafted to achieve predictive parity “inevitably leads to disparities in what sorts of people are classified as high risk when two groups have different arrest rates.”
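The arithmetic behind that conclusion can be sketched directly. If a score has the same positive predictive value (PPV) and the same sensitivity (true positive rate) for two groups, then the false positive rate each group suffers is fixed by its base rate of re-arrest, via the identity FPR = (p / (1 − p)) × ((1 − PPV) / PPV) × TPR. The snippet below is an illustrative sketch, not Northpointe’s actual model; the 60% figure echoes Northpointe’s stated accuracy, while the base rates and sensitivity are hypothetical numbers chosen only to show the effect.

```python
# Sketch: with equal PPV ("predictive parity") and equal sensitivity,
# groups with different base rates of re-arrest necessarily get
# different false positive rates -- i.e., different shares of people
# wrongly flagged as high risk.

def false_positive_rate(base_rate, ppv, tpr):
    """False positive rate implied by a group's base rate, the score's
    positive predictive value, and its true positive rate.

    Derivation: FP = TP * (1 - PPV) / PPV and TP = TPR * N_pos, so
    FPR = FP / N_neg = (p / (1 - p)) * ((1 - PPV) / PPV) * TPR.
    """
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * tpr

PPV = 0.60   # "about 60% accurate" for both groups, per Northpointe
TPR = 0.70   # hypothetical shared sensitivity

# Hypothetical base rates: group A is re-arrested more often than group B.
fpr_a = false_positive_rate(0.50, PPV, TPR)
fpr_b = false_positive_rate(0.30, PPV, TPR)

print(f"FPR, higher-arrest group: {fpr_a:.2f}")  # 0.47
print(f"FPR, lower-arrest group:  {fpr_b:.2f}")  # 0.20
```

Under these assumptions, the higher-arrest group sees nearly half of its non-reoffenders wrongly labeled high risk, versus a fifth for the other group, even though the score is equally "accurate" (60% PPV) for both. That is the disparity ProPublica documented, and the identity shows it cannot be tuned away while base rates differ.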