
Court software may be no more accurate than web survey takers in predicting criminal risk

January 17, 2018

HANOVER, N.H. - January 17, 2018 - A widely used computer software tool may be no more accurate or fair at predicting repeat criminal behavior than people with no criminal justice experience, according to a Dartmouth College study.

The Dartmouth analysis showed that non-experts who responded to an online survey performed as well as the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software system used by courts to help determine the risk of recidivism.

The paper also demonstrates that although COMPAS uses over one hundred pieces of information to make a prediction, the same level of accuracy may be achieved with only two variables - a defendant's age and number of prior convictions.
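A two-variable predictor of the kind the paper describes can be illustrated with a small sketch. The rule, thresholds, and sample records below are invented for illustration; they are not the classifier fitted in the Dartmouth study, which only its authors' data and methods define.

```python
# Hypothetical sketch of a recidivism predictor that uses only two features:
# a defendant's age and number of prior convictions. Thresholds and sample
# records are invented for illustration, not taken from the study.

def predict_recidivism(age: int, priors: int) -> bool:
    """Flag higher risk for defendants with several prior convictions,
    or for young defendants with any priors at all (a toy rule)."""
    return priors > 2 or (age < 25 and priors > 0)

# Illustrative records: (age, prior convictions, reoffended within two years)
sample = [(22, 3, True), (45, 0, False), (31, 5, True), (19, 1, False)]

predictions = [predict_recidivism(age, priors) for age, priors, _ in sample]
accuracy = sum(p == actual
               for p, (_, _, actual) in zip(predictions, sample)) / len(sample)
print(predictions)  # which defendants the toy rule flags
print(accuracy)     # fraction of correct predictions on this toy sample
```

The point of the sketch is only that a rule over two inputs produces a yes/no risk call, just as a 137-variable system does; which rule performs best is an empirical question the study answers with real defendant data.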

According to the research paper, COMPAS has been used to assess over one million offenders since it was developed in 1998, with its recidivism prediction component in use since 2000.

The analysis, published in the journal Science Advances, was carried out by the student-faculty research team of Julia Dressel and Hany Farid.

"It is troubling that untrained internet workers can perform as well as a computer program used to make life-altering decisions about criminal defendants," said Farid, the Albert Bradley 1915 Third Century Professor of Computer Science at Dartmouth College. "The use of such software may be doing nothing to help people who could be denied a second chance by black-box algorithms."

According to the paper, software tools are used in pretrial, parole, and sentencing decisions to predict criminal behavior, including who is likely to fail to appear at a court hearing and who is likely to reoffend at some point in the future. Supporters of such systems argue that big data and advanced machine learning make these analyses more accurate and less biased than predictions made by humans.

"Claims that secretive and seemingly sophisticated data tools are more accurate and fair than humans are simply not supported by our research findings," said Dressel, who performed the research as part of her undergraduate thesis in computer science at Dartmouth.

The research paper compares the commercial COMPAS software against workers contracted through Amazon's online Mechanical Turk crowd-sourcing marketplace to see which approach is more accurate and fair when judging the likelihood of recidivism. For the purposes of the study, recidivism was defined as committing a misdemeanor or felony within two years of a defendant's last arrest.

Groups of internet workers saw short descriptions that included a defendant's sex, age, and previous criminal history. The human results were then compared to results from the COMPAS system that utilizes 137 variables for each individual.

Overall accuracy was based on the rate at which a defendant was correctly predicted to recidivate or not. The research also reported on false positives--when a defendant is predicted to recidivate but doesn't--and false negatives--when a defendant is predicted not to recidivate but does.
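The metrics described above can be sketched in a few lines. The prediction/outcome pairs below are hypothetical, not data from the study; only the definitions (overall accuracy, false positives, false negatives) come from the text.

```python
# Sketch of the evaluation metrics described above, applied to invented
# prediction/outcome pairs (not data from the study).

def confusion_counts(pairs):
    """Each pair is (predicted_to_recidivate, actually_recidivated)."""
    tp = sum(1 for p, a in pairs if p and a)          # correctly predicted to recidivate
    tn = sum(1 for p, a in pairs if not p and not a)  # correctly predicted not to
    fp = sum(1 for p, a in pairs if p and not a)      # predicted to recidivate but didn't
    fn = sum(1 for p, a in pairs if not p and a)      # predicted not to, but did
    return tp, tn, fp, fn

# Hypothetical pairs for illustration
pairs = [(True, True), (True, False), (False, False), (False, True), (True, True)]
tp, tn, fp, fn = confusion_counts(pairs)

accuracy = (tp + tn) / len(pairs)        # overall rate of correct predictions
false_positive_rate = fp / (fp + tn)     # among non-recidivists, share wrongly flagged
false_negative_rate = fn / (fn + tp)     # among recidivists, share missed
```

Reporting the false-positive and false-negative rates separately matters because, as the study notes, two predictors with the same overall accuracy can still distribute their errors very differently across groups of defendants.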

When results were pooled to determine the "wisdom of the crowd," the humans with no presumed criminal justice experience--working with considerably less information than COMPAS, seven features compared to 137--were accurate in 67 percent of the cases presented, statistically the same as COMPAS's 65.2 percent accuracy. Study participants and COMPAS agreed on 69.2 percent of the 1,000 defendants when predicting who would repeat their crimes.

According to the study, the question of accurate prediction of recidivism is not limited to COMPAS. A separate review cited in the study found that eight of nine software programs failed to make accurate predictions.

"The entire use of recidivism prediction instruments in courtrooms should be called into question," Dressel said. "Along with previous work on the fairness of criminal justice algorithms, these combined results cast significant doubt on the entire effort of predicting recidivism."

In contrast to other analyses that focus on whether algorithms are racially biased, the Dartmouth study considers the more fundamental issue of whether the COMPAS algorithm is any better than untrained humans at predicting recidivism in an accurate and fair way.

However, when race was considered, the research found that results from both the human respondents and the software showed significant disparities between how black and white defendants are judged.

According to the paper, it is valuable to ask if we would put these decisions in the hands of untrained people who respond to an online survey, because, in the end, "the results from these two approaches appear to be indistinguishable."
A copy of the complete research paper is available upon request.

About Dartmouth

Founded in 1769, Dartmouth is a member of the Ivy League and offers the world's premier liberal arts education, combining its deep commitment to outstanding undergraduate and graduate teaching with distinguished research and scholarship in the arts and sciences and its three leading professional schools: the Geisel School of Medicine, Thayer School of Engineering and Tuck School of Business.

Dartmouth College

