AI and Bias: It Reproduces Our Prejudices (But Worse)
An AI decides if you get a loan. Another AI picks candidates for a job. A third AI determines if you receive healthcare. And then you discover something chilling: the AI is biased. It doesn’t like people like you.
“But it’s just numbers,” you say. “Numbers are objective, right?”
Wrong. Numbers are just prejudice in numeric form.
How bias gets into AI
AI learns from historical data. If banks historically gave more loans to white men than to Black women, the AI will learn that. It’ll reproduce the bias. Not because it’s evil. Because it learns from what happened before.
It’s like learning to cook by watching your mom your whole life. If your mom didn’t salt enough, you won’t salt enough. Not your fault. You’re just copying what you saw.
Except with AI, it’s worse. Because AI doesn’t just reproduce biases; it amplifies them. If historical data shows women are hired less often in tech, the AI will “learn” that’s normal. And when it picks candidates, it’ll favor men. Even more than the humans did.
And nobody notices, because it’s an algorithm. It’s “objective.” It’s numbers. It’s scientific.
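To make that concrete, here’s a minimal sketch of bias reproduction. Everything in it is synthetic and hypothetical: two equally skilled groups, historical hiring decisions tilted against one of them, and a model trained on those decisions.

```python
# A minimal sketch of bias reproduction on synthetic "historical hiring"
# data. All numbers, groups, and features are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)            # skill is distributed identically

# Historical decisions: same skill bar, but group B was hired less often.
hired = (skill + rng.normal(0, 0.5, n) - 0.8 * group) > 0

# Train on those historical decisions, with group visible as a feature.
model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Score one identical candidate from each group.
print("P(hire | group A):", model.predict_proba([[0, 1.0]])[0, 1])
print("P(hire | group B):", model.predict_proba([[1, 1.0]])[0, 1])
# Same skill, different score: the model learned the historical bias.
```

And dropping the group column rarely fixes this in practice: zip codes, schools, and names act as proxies that leak the group right back in.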
The real consequences
It seems abstract until it affects you.
There have been cases where a hiring AI systematically rejected women. Another AI rejected people whose names “sounded” immigrant. Another offered Black applicants more expensive loans on applications identical to white applicants’.
What’s really wild? The people who built the AI thought it was fair. They looked at the numbers. Not the context. Not the history. Just the numbers.
And imagine how many cases we haven’t discovered yet. An insurance company using a biased AI means thousands of people paying more. A biased hiring tool means thousands of people never even getting an interview. And nobody knows why. The AI decided.
It’s more than just discrimination
There’s another bias: who’s “worth” serving. If an AI sees that teens historically used an app the most, it’ll show more ads to teens. It’ll invest in teens. But if it sees seniors used it less, it’ll “learn” that seniors aren’t important. And the more it learns that, the more it ignores them.
It’s a feedback loop: today’s decisions become tomorrow’s training data. Today’s prejudice becomes tomorrow’s “truth.”
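Here’s a toy simulation of that loop, with every number invented for illustration: an app that splits next week’s ad exposure in proportion to this week’s observed engagement. The two groups start with nearly equal interest, but the small gap compounds until one group all but disappears.

```python
# A toy feedback loop. The app, the groups, and all numbers are
# hypothetical; the point is the compounding dynamic, not the values.
teens, seniors = 0.6, 0.4            # true interest: close to even

exposure = {"teens": 0.5, "seniors": 0.5}    # week 0: equal exposure
for week in range(10):
    # Observed engagement = true interest x exposure the group received.
    engagement = {"teens": teens * exposure["teens"],
                  "seniors": seniors * exposure["seniors"]}
    total = sum(engagement.values())
    # Next week's exposure follows this week's engagement: loop closed.
    exposure = {g: e / total for g, e in engagement.items()}

print(exposure)   # after 10 weeks, seniors' share has nearly vanished
```

Run it and the seniors’ share drops from 50% to under 2%, even though their real interest was only slightly lower. That’s today’s output becoming tomorrow’s “truth.”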
What we can do
First, be aware. If an AI makes a decision that affects you (a refused loan, a rejected job application, denied insurance), in many places you have the right to ask why. And not just “because the AI said no.” The actual reason.
Second, support regulation. Governments are starting to require companies to explain their AIs. That’s good. It doesn’t mean banning AI. It just means: “Show us what you’re doing.” That’s basic justice.
Third, ask the company using AI whether they’ve tested it for bias. A good company can tell you: “Yes, we looked. Here’s what we found. Here’s how we fixed it.” If they can’t tell you that, it’s a red flag. (The simplest version of such a test is sketched at the end of this section.)
And finally, stay human. AI can help you make a decision. But it shouldn’t make the decision for you. Not on something that really affects you.
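For the curious, here’s roughly what that simplest bias test looks like. This is a sketch with made-up decisions, not any regulator’s or company’s actual audit: compare selection rates across groups and flag a large gap, using the conventional “four-fifths” rule of thumb.

```python
# A minimal bias-audit sketch: compare per-group selection rates.
# The decisions and groups below are made up for illustration.
def selection_rates(decisions, groups):
    """Selection rate per group, from parallel lists of 0/1 decisions."""
    rates = {}
    for g in set(groups):
        picked = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(picked) / len(picked)
    return rates

decisions = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates, "disparate impact ratio:", round(ratio, 2))
if ratio < 0.8:   # the conventional four-fifths threshold
    print("Red flag: one group is selected far less often than another.")
```

A real audit goes much further (proxies, error rates, subgroup intersections), but if a company can’t show you even this much, believe the red flag.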
Want to understand how bias gets into tech? Sherpa (free) explains it simply. Or dig into Laeka Research for the details that really matter.