Is Artificial Intelligence Biased?

The world is increasingly turning to AI and machine learning to make decisions.

Many business sectors, such as finance, benefit greatly from AI in areas like assessing credit risk and underwriting insurance.

New Jersey has been using an algorithm in its bail system for the last two years, with great success: https://www.economist.com/united-states/2017/11/23/replacing-bail-with-an-algorithm

And I'm sure more courts and jurisdictions will follow suit, because we know that humans are inherently biased in their decision-making. For example:

In a well-known study of judges ruling on parole requests, researchers found that decisions were affected by factors like hunger and fatigue: https://www.scientificamerican.com/article/lunchtime-leniency/
"Judges granted 65 percent of requests they heard at the beginning of the day’s session and almost none at the end. Right after a snack break, approvals jumped back to 65 percent again...the judges could just be grumpy from hunger. But they probably also suffer from mental fatigue."

Another investigation, which looked at immigration courts and asylum seekers, found that outcomes varied dramatically depending on which judge heard the case: https://www.reuters.com/investigates/special-report/usa-immigration-asylum/
"'It is clearly troubling when you have these kinds of gross disparities,' said Karen Musalo, director of the Center for Gender & Refugee Studies at the University of California Hastings School of the Law in San Francisco. 'These are life or death matters. ... Whether you win or whether you lose shouldn’t depend on the roll of the dice of which judge gets your case.'"

But figuring out how to implement AI is still a work in progress. Last year, Amazon shut down an AI tool it had been using for recruiting: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
The tool turned out to be biased against women when selecting candidates for interviews and hiring.

Because of biases like this, some people may argue that AI is not ready for prime time. But we shouldn't give the machines too much credit. Looking closer at the Amazon example, we read that "Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry."

So in reality, the Amazon recruiting tool was not inventing bias all by itself; it was simply mirroring a pattern that had been set by humans in the first place.
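That "mirroring" effect is easy to demonstrate. Here is a minimal sketch (using made-up, synthetic numbers, not Amazon's actual data or method) of what happens when a naive model learns from biased historical hiring decisions:

```python
import random

random.seed(0)

# Hypothetical historical data: most past applicants came from group A,
# and past human reviewers hired group A at a higher rate.
applicants = ["A"] * 800 + ["B"] * 200
human_hire_rate = {"A": 0.30, "B": 0.15}  # the human bias baked into history
history = [(g, random.random() < human_hire_rate[g]) for g in applicants]

def learned_hire_rate(group):
    """A naive 'model' that just learns the historical hire rate per group."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# The model reproduces the disparity it was trained on -- it mirrors
# the human bias rather than generating a bias of its own.
print(f"Group A: {learned_hire_rate('A'):.2f}")
print(f"Group B: {learned_hire_rate('B'):.2f}")
```

The model here never "decides" to prefer group A; the preference comes entirely from the training data, which is exactly the dynamic Reuters described in the Amazon case.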

I'm optimistic about the future of AI and this type of technology. It's exciting to watch how it will continue to be rolled out.
