

Algorithms will fight AI bias

A University of Iowa researcher has developed a new way to reduce AI bias in criminal justice and finance.

A University of Iowa researcher has developed two algorithms that can be used to reduce racial and gender bias in artificial intelligence.

Qihang Lin, associate professor of business analytics in the Tippie College of Business, developed the algorithms with the help of an $800,000 grant received in 2022 from the National Science Foundation (NSF) and Amazon.

“Machine learning is used to make many high-stakes decisions,” said Lin, the co-lead principal investigator on the grant. “We want to help make sure those decisions won’t be discriminatory against people who have protected characteristics.”

Machine learning is the process of programming an algorithm to analyse enormous amounts of data to complete a task. As more data is added, the algorithm discovers more about its task and adjusts how it works as a result, “learning” as it goes, much as a human adapts to new information.

However, Lin said algorithms can learn discriminatory patterns from the data they are trained on. For instance, an algorithm might conclude that Black criminal defendants are more likely to re-offend while on bail. What the algorithm wasn’t told is that judges are more apt to set higher bail for Black defendants than for white defendants who commit the same class of crimes, so the skewed data leads it to make higher prediction errors for Black defendants.

To counter this, Lin and Tianbao Yang of Texas A&M University developed a machine learning model that predicts whether a criminal defendant will reoffend while reducing the likelihood of racial bias.

“Our method effectively reduces the gap of the error rates between all racial groups without compromising the accuracy of the model,” said Lin. 
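To give a sense of what an error-rate gap between groups looks like in practice, the short Python sketch below trains an ordinary classifier on synthetic data and compares its error rate across a hypothetical protected group; fairness-aware methods of the kind described here aim to shrink exactly this gap. It is an illustration only, not the researchers’ algorithm, and every variable in it is invented.

```python
# Illustrative sketch only: measuring an error-rate gap between two groups.
# This is NOT the researchers' actual method; the data and the protected
# attribute are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)            # hypothetical protected attribute (0 or 1)
features = rng.normal(size=(n, 4))       # hypothetical case features
labels = (features[:, 0] + 0.5 * group + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    features, labels, group, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Error rate within each group; a fairness-aware model would shrink this gap
# while keeping overall accuracy roughly unchanged.
errs = {}
for g in (0, 1):
    mask = g_te == g
    errs[g] = np.mean(pred[mask] != y_te[mask])
    print(f"group {g}: error rate = {errs[g]:.3f}")
print(f"error-rate gap = {abs(errs[0] - errs[1]):.3f}")
```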

Similarly, AI algorithms that market financial services often discriminate against women because they incorrectly predict that more men than women earn incomes above $50,000. As a result, they recommend products more frequently to men than to women.

Lin developed an algorithm that successfully reduces that gender bias so female users can receive more financial product recommendations.
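The recommendation-rate gap described above can be illustrated in a similarly hedged way: the snippet below counts how often a hypothetical income classifier flags men versus women as earning over $50,000, which is the disparity such an algorithm is designed to reduce. The column names and data are made up for illustration and are not from the actual research.

```python
# Illustrative sketch only: checking a recommendation-rate gap by gender for a
# hypothetical income classifier. Data and column names are invented.
import pandas as pd

predictions = pd.DataFrame({
    "gender": ["M", "F", "M", "F", "M", "F", "M", "F"],
    "predicted_over_50k": [1, 0, 1, 1, 1, 0, 0, 0],  # hypothetical model output
})

# Share of each group the model would recommend financial products to.
rates = predictions.groupby("gender")["predicted_over_50k"].mean()
print(rates)
print("recommendation-rate gap:", abs(rates["M"] - rates["F"]))
```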

Yang is Lin’s co-lead investigator on the grant and co-developer of the algorithms.
