{"id":644,"date":"2026-03-21T13:55:25","date_gmt":"2026-03-21T17:55:25","guid":{"rendered":"https:\/\/laeka.org\/blog\/archives\/644"},"modified":"2026-03-23T11:50:57","modified_gmt":"2026-03-23T15:50:57","slug":"ai-and-bias","status":"publish","type":"post","link":"https:\/\/laeka.org\/blog\/ai-and-bias\/","title":{"rendered":"AI and Bias: It Reproduces Our Prejudices (But Worse)"},"content":{"rendered":"<p>An AI decides if you get a loan. Another AI picks candidates for a job. A third AI determines if you receive healthcare. And then you discover something chilling: the AI is biased. It doesn&#8217;t like people like you.<\/p>\n<p>&#8220;But it&#8217;s just numbers,&#8221; you say. &#8220;Numbers are objective, right?&#8221;<\/p>\n<p>Wrong. Numbers are just prejudice in numeric form.<\/p>\n<h2>How bias gets into AI<\/h2>\n<p>AI learns from historical data. If, historically, banks gave more loans to white men than to Black women, the AI will learn that. It&#8217;ll reproduce the bias. Not because it&#8217;s evil. Because it learns from what happened before.<\/p>\n<p>It&#8217;s like learning to cook by watching your mom your whole life. If your mom didn&#8217;t salt enough, you won&#8217;t salt enough. Not your fault. You&#8217;re just copying what you saw.<\/p>\n<p>Except with AI, it&#8217;s worse. Because AI reproduces biases, and then it amplifies them. If historical data shows women are less often hired in tech, the AI will &#8220;learn&#8221; that&#8217;s normal. And when it picks candidates, it&#8217;ll favor men. More than humans did.<\/p>\n<p>And nobody notices, because it&#8217;s an algorithm. It&#8217;s &#8220;objective.&#8221; It&#8217;s numbers. It&#8217;s scientific.<\/p>\n<h2>The real consequences<\/h2>\n<p>It seems abstract until it affects you.<\/p>\n<p>This has already happened. Amazon famously scrapped an experimental hiring AI after it taught itself to penalize resumes containing the word &#8220;women&#8217;s.&#8221; Another AI rejected people with names that &#8220;sounded&#8221; immigrant. 
Another offered Black applicants more expensive loans than white applicants with identical profiles.<\/p>\n<p>What&#8217;s really wild? The people who built the AI thought it was fair. They looked at the numbers. Not the context. Not the history. Just the numbers.<\/p>\n<p>And imagine how many cases we haven&#8217;t discovered yet. An insurance company using a biased AI means thousands of people paying more. A biased hiring app means thousands missing out on a job. And nobody knows why. The AI decided.<\/p>\n<h2>It&#8217;s more than just discrimination<\/h2>\n<p>There&#8217;s another bias: who&#8217;s &#8220;worth&#8221; serving. If an AI sees that, historically, teens used an app, it&#8217;ll show more ads to teens. It&#8217;ll invest in teens. But if it sees seniors used it less, it&#8217;ll &#8220;learn&#8221; seniors aren&#8217;t important. And the more it learns that, the more it ignores them.<\/p>\n<p>It&#8217;s a feedback loop. The present becomes the past. Today&#8217;s prejudice becomes tomorrow&#8217;s &#8220;truth.&#8221;<\/p>\n<h2>What we can do<\/h2>\n<p>First, be aware. If an AI makes a decision that affects you \u2014 a refused loan, a rejected job, denied insurance \u2014 in many places you have the right to ask why. And not just &#8220;because the AI said no.&#8221; Really why.<\/p>\n<p>Second, support regulation. Governments are starting to require companies to explain their AIs. That&#8217;s good. It doesn&#8217;t mean banning AI. It just means: &#8220;Show us what you&#8217;re doing.&#8221; That&#8217;s basic justice.<\/p>\n<p>Third, ask the company using AI if they&#8217;ve tested for bias. A good company can tell you. &#8220;Yes, we looked. Here&#8217;s what we found. Here&#8217;s how we fixed it.&#8221; If they can&#8217;t tell you that, it&#8217;s a red flag.<\/p>\n<p>And finally, stay human. AI can help you make a decision. But it shouldn&#8217;t make the decision itself. 
Not on something that really affects you.<\/p>\n<p><strong>Want to understand how bias gets into tech?<\/strong> <a href=\"https:\/\/sherpa.live\">Sherpa<\/a> (free) explains it simply. Or dig into <a href=\"https:\/\/laeka.org\/lab\/\">Laeka Research<\/a> for the details that really matter.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>An AI decides if you get a loan. Another AI picks candidates for a job. A third AI determines if&#8230;<\/p>\n","protected":false},"author":1,"featured_media":74,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_kad_post_transparent":"","_kad_post_title":"","_kad_post_layout":"","_kad_post_sidebar_id":"","_kad_post_content_style":"","_kad_post_vertical_padding":"","_kad_post_feature":"","_kad_post_feature_position":"","_kad_post_header":false,"_kad_post_footer":false,"_kad_post_classname":"","footnotes":""},"categories":[192],"tags":[],"class_list":["post-644","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-and-you"],"_links":{"self":[{"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/posts\/644","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/comments?post=644"}],"version-history":[{"count":1,"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/posts\/644\/revisions"}],"predecessor-version":[{"id":725,"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/posts\/644\/revisions\/725"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/media\/74"}],"wp:attachment":[{"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/media?parent=644"}],"wp:term":[{"taxonomy":"category","embedda
ble":true,"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/categories?post=644"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/tags?post=644"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}