While we’re waiting for a decision in State v. Loomis to tell us whether we can have access to the inner workings of the de facto Deus ex machina of Wisconsin sentencing proceedings, we thought you’d be interested in this investigative report on COMPAS by the reporters at ProPublica.
ProPublica obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the COMPAS algorithm. Their conclusions:
The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.
When a full range of crimes were taken into account—including misdemeanors such as driving with an expired license—the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years.
We also turned up significant racial disparities, just as [former U.S. Attorney General Eric] Holder feared. In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.
- The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
- White defendants were mislabeled as low risk more often than black defendants.
Could this disparity be explained by defendants’ prior crimes or the type of crimes they were arrested for? No. We ran a statistical test that isolated the effect of race from criminal history and recidivism, as well as from defendants’ age and gender. Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.
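The disparity ProPublica describes is a difference in error *types*: similar overall error rates for black and white defendants, but different false-positive and false-negative rates. As a rough illustration of that distinction (using made-up toy numbers, not ProPublica's data or methodology), one can compute group-wise error rates like this:

```python
# Illustrative sketch only: hypothetical numbers, not ProPublica's data.
# Among defendants who did NOT re-offend, what fraction were flagged high
# risk (false positive rate)? Among those who DID re-offend, what fraction
# were labeled low risk (false negative rate)?

def error_rates(records):
    """records: list of (flagged_high_risk: bool, reoffended: bool) pairs."""
    fp = sum(1 for flagged, reoffended in records if flagged and not reoffended)
    negatives = sum(1 for _, reoffended in records if not reoffended)
    fn = sum(1 for flagged, reoffended in records if not flagged and reoffended)
    positives = sum(1 for _, reoffended in records if reoffended)
    return fp / negatives, fn / positives

# Hypothetical toy cohorts, each entry (flagged, reoffended):
group_a = ([(True, False)] * 40 + [(False, False)] * 60 +
           [(True, True)] * 70 + [(False, True)] * 30)
group_b = ([(True, False)] * 20 + [(False, False)] * 80 +
           [(True, True)] * 50 + [(False, True)] * 50)

fpr_a, fnr_a = error_rates(group_a)  # FPR 40%, FNR 30%
fpr_b, fnr_b = error_rates(group_b)  # FPR 20%, FNR 50%
print(f"group A: FPR={fpr_a:.0%}, FNR={fnr_a:.0%}")
print(f"group B: FPR={fpr_b:.0%}, FNR={fnr_b:.0%}")
```

Note that in this toy example both groups have the same overall accuracy (65%), yet group A is flagged falsely at twice group B's rate while group B is more often mislabeled low risk, which is the shape of the pattern the report describes.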
The piece references a Wisconsin case in which a judge’s use of COMPAS was challenged, and links to various documents about Northpointe, an explanation of how ProPublica conducted its analysis, and the full data set it used. Not surprisingly, Northpointe, the for-profit company that created COMPAS, disputes ProPublica’s analysis.
For more on “science-based” sentencing using risk assessments, see this 2015 article from The Marshall Project.
UPDATE (5/30/16): Milwaukee Journal Sentinel coverage here.