Computer Program That Calculates Prison Sentences Is Even More Racist Than Humans, Study Finds

BLIND JUSTICE

A computer program used to calculate criminal sentences is less accurate and more racist than random humans assigned to the same task. It is also extremely expensive.

A computer program used to calculate people’s risk of committing crimes is less accurate and more racist than random humans assigned to the same task, a new Dartmouth study finds.

Before they’re sentenced, people convicted of crimes in some U.S. states are required to take a 137-question quiz. The questions, which range from a person’s criminal history to their parents’ substance use to “Do you feel discouraged at times?”, are part of a software program called Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS. Using a proprietary algorithm, COMPAS is meant to crunch the numbers on a person’s life, estimate their risk of reoffending, and help a judge set a sentence based on that risk assessment.

Rather than making objective decisions, COMPAS actually amplifies racial biases in the criminal justice system, activists allege. And a study released last week by Dartmouth researchers found that random, untrained people on the internet could make more accurate predictions about a person’s criminal future than the expensive software could.

COMPAS is privately held software, and its algorithms are a trade secret. Its conclusions baffle some of the people it evaluates. Take Eric Loomis, a Wisconsin man arrested in 2013, who pleaded guilty to attempting to flee a police officer and no contest to driving a vehicle without its owner’s permission.

While neither offense was violent, COMPAS assessed Loomis’s history and reported him as having “a high risk of violence, high risk of recidivism, high pretrial risk.” Loomis was sentenced to six years in prison based on the finding.

COMPAS reaches its conclusions through that 137-question quiz, which covers a person’s criminal history, family history, social life, and opinions. The questionnaire never asks about race. But several of its questions, including those about parents’ arrest history, neighborhood crime, and economic stability, skew against black defendants, who are disproportionately impoverished or incarcerated in the U.S.
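
COMPAS’s actual model is hidden, so any reconstruction is guesswork. Purely as a hypothetical illustration of how a questionnaire score can encode race by proxy, here is a minimal weighted-sum sketch in Python; every feature name and weight is invented, not taken from COMPAS.

```python
# Hypothetical sketch: how a questionnaire-based risk score can encode
# race by proxy. Feature names and weights are invented for illustration;
# COMPAS's real model is proprietary and unknown.

# One defendant's answers to questionnaire-style items.
defendant = {
    "prior_arrests": 2,            # criminal history
    "parent_arrested": 1,          # parents' arrest history (1 = yes)
    "high_crime_neighborhood": 1,  # neighborhood crime (1 = yes)
    "stable_employment": 0,        # economic stability (1 = yes)
}

# Invented weights. None of these mention race, but items like neighborhood
# crime and parental arrest correlate with race in the U.S., so scores can
# still differ systematically across racial groups.
weights = {
    "prior_arrests": 1.5,
    "parent_arrested": 1.0,
    "high_crime_neighborhood": 1.0,
    "stable_employment": -2.0,
}

score = sum(weights[item] * defendant[item] for item in weights)
print(f"raw score: {score:.1f} -> {'high' if score >= 3 else 'low'} risk")
```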

A 2016 ProPublica investigation analyzed the software’s results across 7,000 cases in Broward County, Florida, and found that COMPAS often overestimated a person’s risk of committing future crimes. The rate of these false alarms was nearly twice as high for black defendants, who frequently received higher risk ratings than white defendants who had committed more serious crimes.
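
The heart of ProPublica’s finding is a gap in false positive rates: the share of people who did not go on to reoffend but were labeled high risk anyway. A sketch of that arithmetic, using made-up counts rather than ProPublica’s data:

```python
# Sketch of a per-group false-positive-rate check, in the spirit of the
# ProPublica analysis. All counts below are made up for illustration.

def false_positive_rate(high_risk_labels, reoffended):
    """Share of non-reoffenders who were wrongly labeled high risk."""
    false_positives = sum(
        1 for label, outcome in zip(high_risk_labels, reoffended)
        if label == 1 and outcome == 0
    )
    non_reoffenders = reoffended.count(0)
    return false_positives / non_reoffenders

# 1 = labeled high risk / reoffended, 0 = labeled low risk / did not.
black_labels, black_outcomes = [1, 1, 1, 0, 1, 0], [1, 0, 0, 0, 1, 0]
white_labels, white_outcomes = [1, 0, 0, 0, 1, 0], [1, 0, 0, 0, 0, 0]

print("black FPR:", false_positive_rate(black_labels, black_outcomes))  # 0.5
print("white FPR:", false_positive_rate(white_labels, white_outcomes))  # 0.2
```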

But COMPAS isn’t just frequently wrong, the new Dartmouth study found: random humans can do a better job, with less information.

The Dartmouth research group hired 462 participants through Mechanical Turk, a crowdsourcing platform. The participants, who had no background or training in criminal justice, were given a brief description of a real criminal’s age and sex, as well as the crime they committed and their previous criminal history. The person’s race was not given.

“Do you think this person will commit another crime within 2 years?” the researchers asked participants.

The untrained group correctly predicted whether a person would commit another crime with 68.2 percent accuracy for black defendants and 67.6 percent accuracy for white defendants. That’s slightly better than COMPAS, which reports 64.9 percent accuracy for black defendants and 65.7 percent accuracy for white defendants.
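
“Accuracy” here is simply the share of yes-or-no predictions that matched whether a person actually committed another crime within two years. A minimal sketch of that calculation, on toy data rather than the study’s:

```python
# Sketch: the accuracy figure is just the fraction of correct predictions.
# The predictions and outcomes below are toy data, not the study's.

def accuracy(predictions, outcomes):
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

# 1 = predicted to reoffend / actually reoffended within two years.
predictions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
outcomes    = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

print(f"accuracy: {accuracy(predictions, outcomes):.1%}")  # 70.0%
```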

In a statement, Equivant, the company behind COMPAS, argued that the Dartmouth findings actually worked in its favor.

“Instead of being a criticism of the COMPAS assessment, [the study] actually adds to a growing number of independent studies that have confirmed that COMPAS achieves good predictability and matches the increasingly accepted AUC standard of 0.70 for well-designed risk assessment tools used in criminal justice,” Equivant said in the statement.
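
The AUC figure Equivant cites measures something different from raw accuracy: roughly, the probability that the tool gives a randomly chosen reoffender a higher risk score than a randomly chosen non-reoffender, where 0.5 is a coin flip and 1.0 is perfect. A sketch of the calculation on toy scores, not COMPAS output:

```python
# Sketch: AUC as a pairwise ranking probability (ties count half).
# Risk scores and outcomes below are toy values, not COMPAS data.

def auc(scores, outcomes):
    positives = [s for s, o in zip(scores, outcomes) if o == 1]
    negatives = [s for s, o in zip(scores, outcomes) if o == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

risk_scores = [8, 3, 7, 5, 6, 2, 4, 5, 1, 6]  # 1-10 scale
outcomes    = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]  # 1 = reoffended

print(f"AUC: {auc(risk_scores, outcomes):.2f}")  # 0.92 on this toy data
```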

What it didn’t add was that the humans who had slightly outperformed COMPAS were untrained — whereas COMPAS is a massively expensive and secretive program.

In 2015, Wisconsin signed a $1,765,334 contract for COMPAS, documents obtained by the Electronic Privacy Information Center reveal. The largest chunk of the cash, $776,475, went to the software company’s licensing and maintenance fees. By contrast, the Dartmouth researchers paid each study participant $1 for completing the task, plus a $5 bonus for answering correctly more than 65 percent of the time.
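
The two price tags can be compared with back-of-the-envelope arithmetic: even if every one of the 462 participants had earned the $5 bonus on top of the $1 base pay, the entire crowd would have cost less than $3,000.

```python
# Back-of-the-envelope cost comparison using the figures in this article.
contract_total = 1_765_334       # Wisconsin's 2015 COMPAS contract, in dollars
participants = 462
max_pay_per_person = 1 + 5       # $1 base pay plus the $5 accuracy bonus

crowd_cost = participants * max_pay_per_person
print(f"max crowd cost: ${crowd_cost:,}")                        # $2,772
print(f"contract / crowd: {contract_total / crowd_cost:,.0f}x")  # ~637x
```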

And for all that money, defendants still aren’t sure COMPAS is doing its job.

After COMPAS helped sentence him to six years in prison, Loomis attempted to overturn the ruling, arguing that sentencing by algorithm violated his right to due process. The secretive nature of the software, he claimed, meant its conclusions could not be trusted.

His bid failed last summer when the U.S. Supreme Court declined to take up his case, allowing the COMPAS-based sentence to stand.

Instead of throwing himself at the mercy of the court, Loomis was at the mercy of the machine.

He might have had better luck at the hands of random internet users.
