This week Wired ran an op-ed arguing that courts should stop using algorithms to set bail and sentence defendants until some ground rules are set. Yes, it discusses COMPAS and State v. Loomis. But beyond that, it describes what could happen if courts move from simple algorithms to deep-learning systems, known as neural networks, for sentencing. Here is an excerpt from the article:
Consider a scenario in which the defense attorney calls a developer of a neural-network-based risk assessment tool to the witness stand to challenge the “high risk” score that could affect her client’s sentence. On the stand, the engineer could tell the court how the neural network was designed, what inputs were entered, and what outputs were created in a specific case. However, the engineer could not explain the software’s decision-making process. With these facts, or lack thereof, how does a judge weigh the validity of a risk-assessment tool if she cannot understand its decision-making process? How could an appeals court know if the tool decided that socioeconomic factors, a constitutionally dubious input, determined a defendant’s risk to society? Following the reasoning in Loomis, the court would have no choice but to abdicate a part of its responsibility to a hidden decision-making process.