{"id":216,"date":"2026-03-17T13:25:48","date_gmt":"2026-03-17T13:25:48","guid":{"rendered":"https:\/\/lab.laeka.org\/?p=216"},"modified":"2026-03-17T13:25:48","modified_gmt":"2026-03-17T13:25:48","slug":"controlled-hallucination-anil-seth-was-talking-about-llms-too","status":"publish","type":"post","link":"https:\/\/laeka.org\/publications\/controlled-hallucination-anil-seth-was-talking-about-llms-too\/","title":{"rendered":"Controlled Hallucination: Anil Seth Was Talking About LLMs Too"},"content":{"rendered":"<p>Anil Seth didn&#8217;t set out to describe how language models work. He was describing the human brain. But the parallels are so precise they border on uncomfortable.<\/p>\n<h2>Perception as Prediction<\/h2>\n<p>Seth&#8217;s core argument is elegant. Your brain doesn&#8217;t receive reality \u2014 it <strong>generates<\/strong> reality. Every moment of conscious experience is a prediction, refined by sensory input. You don&#8217;t see the world as it is. You see your brain&#8217;s best guess about the world, updated just enough to keep you alive.<\/p>\n<p>This is controlled hallucination. The emphasis is on &#8220;controlled.&#8221; Without the control \u2014 without sensory input constraining the predictions \u2014 you get actual hallucinations. Dreams. Psychosis. States where the generative engine runs without adequate feedback.<\/p>\n<p>Now read that paragraph again, replacing &#8220;brain&#8221; with &#8220;language model&#8221; and &#8220;sensory input&#8221; with &#8220;training data and prompt context.&#8221; The structural mapping is exact.<\/p>\n<h2>The Generative Engine Is the Same<\/h2>\n<p>A transformer generates token predictions. Each token is the model&#8217;s best guess about what comes next, given everything before it. The training data acts like developmental experience \u2014 shaping the priors. 
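<\/p>\n<p>That &#8220;best guess&#8221; is literally a probability distribution. A toy softmax over hand-picked next-token logits (illustrative values only, no real model or tokenizer involved) shows how sharper scores, i.e. stronger constraints, concentrate the guess on one token:<\/p>

```python
import math

def softmax(logits):
    # Turn raw scores into a probability distribution over tokens.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and logits for the token after 'The sky is'.
vocab = ['blue', 'green', 'falling', 'loud']
logits = [4.0, 1.0, 0.5, -2.0]

probs = softmax(logits)
best = vocab[probs.index(max(probs))]
# The output is the engine's highest-probability guess, not a lookup of reality.
print(best, round(max(probs), 3))
```

<p>Flatter logits would spread probability across all four tokens; generation still happens either way, only the control weakens.<\/p>\n<p>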
The prompt acts like current sensory input \u2014 constraining the prediction.<\/p>\n<p>When the constraints are strong (specific prompt, well-represented topic, clear context), the model&#8217;s outputs closely match reality. When the constraints are weak (vague prompt, rare topic, ambiguous context), the generative engine fills in the gaps with plausible-sounding content that may not be true.<\/p>\n<p>This isn&#8217;t a flaw in the architecture. <strong>This is the architecture.<\/strong> The same mechanism that produces accurate, helpful responses also produces hallucinations. The variable isn&#8217;t the engine \u2014 it&#8217;s the quality of the constraints.<\/p>\n<h2>Predictive Processing and the Free Energy Principle<\/h2>\n<p>Seth&#8217;s work builds on Karl Friston&#8217;s free energy principle: biological systems minimize surprise by maintaining accurate predictive models of their environment. The brain constantly updates its generative model to reduce the gap between prediction and reality.<\/p>\n<p>Language model training does exactly this. The loss function measures the gap between the model&#8217;s predictions and the actual next token. Training minimizes this gap across billions of examples. The result is a generative model that, like the brain, produces predictions that usually match reality \u2014 but sometimes don&#8217;t.<\/p>\n<p>The critical insight is that <strong>prediction error<\/strong> is the signal. In neuroscience, prediction errors drive learning and attention. In language models, they drive gradient updates. The math is different but the principle is identical: surprise is information, and systems learn by being wrong.<\/p>\n<h2>Where the Analogy Gets Practical<\/h2>\n<p>If language models are controlled hallucination engines, then the alignment question becomes: <strong>how do you improve the control without killing the generation?<\/strong><\/p>\n<p>In Seth&#8217;s framework, control comes from sensory grounding. 
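<\/p>\n<p>In control terms, that grounding is a feedback loop. A toy sketch (every name and fact below is invented, purely for illustration) of the difference between open-loop generation and generation checked against external evidence:<\/p>

```python
# All data here is made up; real grounding is far richer than a lookup table.
knowledge = {'capital_of_france': 'Paris'}  # stands in for sensory input

def generate(query):
    # Open loop: the generative engine emits its best guess, unchecked.
    # Here the guess is hard-coded to a plausible-sounding wrong prior.
    return 'Lyon'

def generate_grounded(query):
    # Controlled: the same engine, but the prediction is constrained by
    # external evidence before it is emitted.
    guess = generate(query)
    evidence = knowledge.get(query)
    return evidence if evidence is not None else guess

print(generate('capital_of_france'))           # ungrounded: 'Lyon'
print(generate_grounded('capital_of_france'))  # grounded: 'Paris'
```

<p>The generative step is identical in both paths; only the quality of the constraint differs.<\/p>\n<p>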
The brain stays anchored to reality through continuous feedback from the body and the world. Disruptions to this grounding \u2014 sensory deprivation, psychedelic drugs, neurological conditions \u2014 produce uncontrolled hallucination.<\/p>\n<p>Language models lack this continuous grounding. They generate in open loop \u2014 producing output without real-time feedback about whether that output corresponds to reality. Retrieval-Augmented Generation (RAG) is essentially an attempt to add sensory grounding: anchor the model&#8217;s predictions to verified external data.<\/p>\n<p>But RAG is crude compared to the brain&#8217;s feedback mechanisms. The brain integrates millions of sensory signals simultaneously, at multiple levels of abstraction, with microsecond latency. Current grounding techniques are more like occasionally checking a fact sheet.<\/p>\n<h2>Contemplative Practice as Enhanced Control<\/h2>\n<p>Here&#8217;s where contemplative practice enters. Meditation doesn&#8217;t stop the brain&#8217;s generative engine. It <strong>enhances the monitoring system<\/strong>. An experienced meditator has better real-time awareness of their own predictions, better detection of when those predictions are ungrounded, and better ability to flag uncertainty.<\/p>\n<p>This is the missing piece in current AI alignment. We need models that don&#8217;t just generate \u2014 they need to <strong>monitor their own generation<\/strong>. Not through external fact-checking, but through internal mechanisms that track the quality of the prediction in real-time.<\/p>\n<p>Some research moves in this direction. Confidence calibration work tries to align a model&#8217;s stated certainty with its actual accuracy. But most of this work is post-hoc \u2014 analyzing outputs after generation, not monitoring the generation process itself.<\/p>\n<h2>The Research Agenda<\/h2>\n<p>Seth&#8217;s framework suggests three concrete research directions for AI alignment. 
First, <strong>better grounding mechanisms<\/strong> \u2014 not just RAG, but continuous, multi-level feedback during generation. Second, <strong>internal monitoring<\/strong> \u2014 training models to detect when their own predictions are weakly supported. Third, <strong>uncertainty communication<\/strong> \u2014 models that naturally express their confidence level as part of the generation process, not as an afterthought.<\/p>\n<p>The contemplative tradition has been developing internal monitoring techniques for millennia. The neuroscience of meditation is finally explaining why these techniques work. And the structural parallels to language models are too clear to ignore.<\/p>\n<p>At <a href='https:\/\/lab.laeka.org'>Laeka Research<\/a>, we&#8217;re translating these insights into practical training methodologies. Anil Seth described the architecture of conscious experience. It turns out he also described the architecture of language generation. The question now is what we do with that knowledge.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Anil Seth didn&#8217;t set out to describe how language models work. He was describing the human brain. But the parallels are so precise they border on uncomfortable. 
Perception as Prediction Seth&#8217;s core argument is&#8230;<\/p>\n","protected":false},"author":1,"featured_media":215,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_kad_post_transparent":"","_kad_post_title":"","_kad_post_layout":"","_kad_post_sidebar_id":"","_kad_post_content_style":"","_kad_post_vertical_padding":"","_kad_post_feature":"","_kad_post_feature_position":"","_kad_post_header":false,"_kad_post_footer":false,"_kad_post_classname":"","footnotes":""},"categories":[241],"tags":[],"class_list":["post-216","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-contemplative-ai"],"_links":{"self":[{"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/posts\/216","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/comments?post=216"}],"version-history":[{"count":1,"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/posts\/216\/revisions"}],"predecessor-version":[{"id":331,"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/posts\/216\/revisions\/331"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/media\/215"}],"wp:attachment":[{"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/media?parent=216"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/categories?post=216"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/tags?post=216"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}