UW software aims to find and fix biased computer programs

Researchers at UW-Madison have won a $1 million grant to develop a tool to find and fix algorithmic bias. Above, left to right, computer science professors Aws Albarghouthi and Loris D’Antoni, and graduate student David Merrell.

UW-Madison researchers are trying to root out racial bias and other unfairness that has surfaced in computer programs used increasingly by private companies and government offices to decide if you are hired, approved for a bank loan or sent to prison.

Computers are designed to be logical, but mounting evidence shows they can be programmed by people in ways that deliver decisions that are prejudiced and incorrect, said computer science professor Aws Albarghouthi.

“As we move deeper into the 21st century, the question of correctness becomes fuzzier and fuzzier because computer programs are doing sensitive tasks whose correctness is not well-defined, but is a debatable ethical, philosophical or moral question,” Albarghouthi said.

He is part of a UW-Madison team that has been awarded a three-year, $1 million grant from the National Science Foundation to lead development of a tool called FairSquare that can detect biases in software and algorithms and fix them automatically.

The team has already created a prototype. The grant is expected to accelerate their work toward a finished product.

Other researchers around the country have also developed rudimentary tools to detect algorithmic bias, but the UW-Madison project may be the first to incorporate an automated fix, said computer science professor Loris D’Antoni, another team member.

The use of decision-making software has spread rapidly. But because many users don’t have specific knowledge of the complex algorithms at work, they can’t evaluate whether those algorithms are working properly, Albarghouthi said.

Disclaimer offered for judges

One such program has been the subject of litigation in Wisconsin.

The state Department of Corrections uses an algorithm called COMPAS — Correctional Offender Management Profiling for Alternative Sanctions — to evaluate the risk that convicts will become repeat offenders by examining records of convicts with similar characteristics and histories.

Prisons use the assessments in classifying inmates, planning their releases and in pre-sentencing reports sent to judges.

Last year, the state Supreme Court ruled against an inmate, Eric Loomis, who said he was sentenced unfairly. Loomis said he was denied due process rights because the company that sold COMPAS to the state keeps the algorithm workings secret, making it impossible to challenge the way it weighs various factors and determines risk scores.

The judge who originally sentenced Loomis said the COMPAS report wasn’t the only factor he considered, and the Supreme Court ruled that COMPAS assessments could be used by judges in sentencing, as long as they weren’t the sole determining factor.

However, the Supreme Court also ordered the corrections department to add a disclaimer to its pre-sentencing reports.

The disclaimer notes that COMPAS wasn’t designed for use in sentencing and the company that sells it, Northpointe Inc., wouldn’t disclose how the program weighs data and calculates risk.

Replicating past bias

Albarghouthi said bias may show up in an algorithm because of the way a programmer wrote it, or because the algorithm automates decisions.

A computer program designed to choose a few finalists from thousands of job applicants could be written to favor applicants who matched the characteristics of existing employees who have been judged successful by managers.

And that could be fair.

But if there is bias in the managers’ previous hiring decisions or in their judgment of current employees, then the algorithm would replicate and perpetuate that bias, Albarghouthi said.
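A toy sketch can make this dynamic concrete. In the hypothetical example below, a naive "model" simply learns each group's historical approval rate from managers' past decisions; if those decisions were skewed, the learned model reproduces the skew. All group names and rates here are invented for illustration, not drawn from any real hiring system.

```python
import random

random.seed(0)

# Hypothetical history: managers approved about 60% of group A applicants
# but only about 20% of group B applicants with similar qualifications.
history = [("A", 1) if random.random() < 0.6 else ("A", 0) for _ in range(500)]
history += [("B", 1) if random.random() < 0.2 else ("B", 0) for _ in range(500)]

def train(data):
    """Learn each group's historical approval rate from (group, label) pairs."""
    counts = {}
    for group, label in data:
        approved, total = counts.get(group, (0, 0))
        counts[group] = (approved + label, total + 1)
    return {g: approved / total for g, (approved, total) in counts.items()}

model = train(history)
# The learned approval rates mirror the historical disparity,
# so the "automated" decisions perpetuate the original bias.
print(model)
```

The point is not the arithmetic but the mechanism: a model optimized to match past outcomes inherits whatever unfairness those outcomes contain.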

Some of the earliest academic references to algorithmic bias appeared in research published about eight years ago, and scientists have been trying to solve the problem ever since, he said.

“The first problem is how do you define fairness or unfairness,” Albarghouthi said. “It has to take into account what we value as a society and what is within the rule of law and so on, and what an employer might value.”

It’s not simple. Questions about what constitutes fairness and justice have been debated for thousands of years.

“So one direction of research that people have pursued across the computer science field is how do we formalize notions of fairness in decision making. How do we mathematically capture them,” Albarghouthi said.
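One widely studied formalization, offered here as a hedged illustration rather than a description of the FairSquare tool itself, is "demographic parity": a decision procedure is considered fair under this definition if its rate of positive decisions is roughly equal across groups. The function and sample data below are hypothetical.

```python
def demographic_parity_gap(decisions):
    """Given (group, decision) pairs with decision in {0, 1}, return the
    difference between the highest and lowest positive-decision rates."""
    totals, positives = {}, {}
    for group, d in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + d
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical outcomes: 80% positive decisions for one group, 50% for another.
sample = ([("men", 1)] * 80 + [("men", 0)] * 20 +
          [("women", 1)] * 50 + [("women", 0)] * 50)
gap = demographic_parity_gap(sample)
print(gap)  # gap of roughly 0.3; exceeding some tolerance would flag possible bias
```

Demographic parity is only one of several competing mathematical definitions of fairness, which is part of why the formalization question Albarghouthi describes remains open.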

There is increasing recognition that algorithmic bias is one of the dangers posed by society’s growing reliance on artificial intelligence.

A July 2015 New York Times column cited research that found, for example, that Google ads for high-income jobs were shown more often to men than to women, and that ads related to arrest records were more likely to be targeted at people with recognizably black names or members of a historically black fraternity.

The European Union has established some regulations aimed at protecting people, but the problems are far from solved, Albarghouthi said.

In addition to Albarghouthi and D’Antoni, two other UW-Madison computer science professors are on the team — Shuchi Chawla, who specializes in machine learning, and Jerry Zhu, a computer science theorist.


Steven Verburg is a reporter for the Wisconsin State Journal covering state politics with a focus on science and the environment as well as military and veterans issues.
