{"id":843,"date":"2026-03-24T21:00:00","date_gmt":"2026-03-24T21:00:00","guid":{"rendered":"https:\/\/laeka.org\/blog\/?p=843"},"modified":"2026-03-24T21:00:00","modified_gmt":"2026-03-24T21:00:00","slug":"if-ai-so-smart-why-fails-simple-things","status":"publish","type":"post","link":"https:\/\/laeka.org\/blog\/if-ai-so-smart-why-fails-simple-things\/","title":{"rendered":"If AI Is So Smart, Why Does It Fail at Simple Things?"},"content":{"rendered":"<p>AI can write a 2000-word essay in 30 seconds, but it can&#8217;t count the number of letters in &#8220;strawberry&#8221;. It can solve complex equations, but it fails at &#8220;what weighs more, a kilogram of feathers or a kilogram of bricks?&#8221; How is that possible?<\/p>\n<h2>Moravec&#8217;s Paradox<\/h2>\n<p>In 1988, researcher Hans Moravec observed something fascinating: what&#8217;s easy for a human is difficult for a machine, and vice versa. Walking, seeing, and understanding language are trivial for humans \u2014 we do them effortlessly from childhood. But for machines, these are incredibly hard problems.<\/p>\n<p>Meanwhile, calculating a million-digit number? Trivial for a computer. Recognizing a pattern in a massive dataset? Easy. But a child can do both of these too \u2014 they just use intelligence, not raw computation.<\/p>\n<p>AI inverts this. It&#8217;s brilliant at tasks that seem complex to humans but are actually just pattern-matching at scale. It&#8217;s terrible at tasks that seem simple but actually require genuine understanding.<\/p>\n<h2>Why Counting Letters is Hard<\/h2>\n<p>When you ask an AI to count the letters in &#8220;strawberry&#8221;, it doesn&#8217;t actually count. It generates text token by token based on patterns it learned. The pattern it learned is mostly about language meaning, not about letter-by-letter counting.<\/p>\n<p>It&#8217;s like asking a language translator to give you a precise technical measurement. 
The translator is brilliant at their job, but that job isn&#8217;t measurement.<\/p>\n<h2>Why the Trick Question Is Tricky<\/h2>\n<p>&#8220;What&#8217;s heavier, a kilogram of feathers or a kilogram of bricks?&#8221; is a trick question \u2014 the answer is that they weigh the same. But to get it right, the AI has to catch a very specific gotcha: people expect the bricks to win because a single brick is far heavier than a single feather, and the model must notice that the stated quantity (one kilogram of each) overrides that intuition.<\/p>\n<p>Catching a subtle logical trap like that is a form of reasoning. It&#8217;s not impossible for AI \u2014 but it&#8217;s not what AI was optimized to do.<\/p>\n<h2>The Distinction Matters<\/h2>\n<p>For legal work, this distinction is important to understand. AI is brilliant at:<\/p>\n<ul>\n<li>Finding patterns in case law<\/li>\n<li>Extracting key terms from contracts<\/li>\n<li>Summarizing long documents<\/li>\n<li>Identifying precedents that might apply<\/li>\n<\/ul>\n<p>AI is weak at:<\/p>\n<ul>\n<li>Understanding genuine ambiguity (does this clause mean X or Y?)<\/li>\n<li>Catching logical contradictions<\/li>\n<li>Reasoning about edge cases it hasn&#8217;t seen before<\/li>\n<li>Understanding what a contract truly means to the parties involved<\/li>\n<\/ul>\n<p>This is why AI is such a good research assistant but a terrible decision-maker. Use it to gather the information, not to decide what it means.<\/p>\n<h2>The Future<\/h2>\n<p>Newer AI models (GPT-4, Claude 3) are getting better at reasoning tasks that require actual understanding rather than pattern-matching. We might eventually have AI that&#8217;s equally brilliant at both \u2014 but we&#8217;re not there yet.<\/p>\n<p>Understanding these limitations isn&#8217;t a reason to dismiss AI. 
It&#8217;s a reason to use it correctly: as a tool for specific tasks, not as a replacement for judgment.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI can write a 2000-word essay in 30 seconds, but it can&#8217;t count the number of letters in &#8220;strawberry&#8221;. It&#8230;<\/p>\n","protected":false},"author":1,"featured_media":291,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_kad_post_transparent":"","_kad_post_title":"","_kad_post_layout":"","_kad_post_sidebar_id":"","_kad_post_content_style":"","_kad_post_vertical_padding":"","_kad_post_feature":"","_kad_post_feature_position":"","_kad_post_header":false,"_kad_post_footer":false,"_kad_post_classname":"","footnotes":""},"categories":[190],"tags":[],"class_list":["post-843","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-understanding-ai"],"_links":{"self":[{"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/posts\/843","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/comments?post=843"}],"version-history":[{"count":1,"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/posts\/843\/revisions"}],"predecessor-version":[{"id":920,"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/posts\/843\/revisions\/920"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/media\/291"}],"wp:attachment":[{"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/media?parent=843"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/laeka.org\/blog\/wp-json\/wp\/v2\/categories?post=843"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/laeka.org\/blog\/w
p-json\/wp\/v2\/tags?post=843"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}