{"id":83,"date":"2026-03-09T17:43:45","date_gmt":"2026-03-09T17:43:45","guid":{"rendered":"https:\/\/lab.laeka.org\/the-most-expensive-thought-you-have\/"},"modified":"2026-03-09T17:43:45","modified_gmt":"2026-03-09T17:43:45","slug":"the-most-expensive-thought-you-have","status":"publish","type":"post","link":"https:\/\/laeka.org\/publications\/the-most-expensive-thought-you-have\/","title":{"rendered":"The Most Expensive Thought You Have"},"content":{"rendered":"<p>Your brain burns about 20 watts at rest. That&#8217;s less than a typical incandescent lightbulb. But that budget isn&#8217;t distributed evenly.<\/p>\n<p>The Default Mode Network \u2014 the constellation of brain regions active when you&#8217;re &#8220;not doing anything&#8221; \u2014 consumes a disproportionate share of that budget. When you&#8217;re daydreaming, ruminating, planning your grocery list, replaying an argument from last week, or constructing your autobiography for the hundredth time today, the DMN is running hot. It&#8217;s one of the most metabolically expensive operations your brain performs.<\/p>\n<p>And it&#8217;s doing absolutely nothing useful.<\/p>\n<h2>The Rest That Isn&#8217;t Rest<\/h2>\n<p>Neuroscience made a revealing terminological choice. When researchers first identified this network, they called it the &#8220;default mode&#8221; because it activates when subjects aren&#8217;t given a task. The assumption was obvious: this must be the brain at rest.<\/p>\n<p>It&#8217;s not. The DMN is the brain at work \u2014 specifically, at the work of maintaining a continuous self-narrative. Who am I. What happened to me. What might happen next. What do people think of me. The endless internal monologue that most people experience as &#8220;normal consciousness.&#8221;<\/p>\n<p>This isn&#8217;t rest. It&#8217;s construction. Expensive, continuous, metabolically demanding construction of a story that nobody asked for and nobody&#8217;s reading.<\/p>\n<p>Neuroimaging studies of meditators show it clearly. 
When experienced practitioners enter sustained meditative absorption, DMN activity drops. The brain&#8217;s energy consumption doesn&#8217;t increase \u2014 it decreases. The &#8220;special state&#8221; that meditation supposedly achieves is actually cheaper than the default. What we call normal consciousness is the expensive option.<\/p>\n<h2>The Transformer Parallel<\/h2>\n<p>Self-attention in transformer architectures has the same structural problem. Every token attends to every other token. The computational cost scales quadratically with sequence length. Most of that computation is noise \u2014 tokens attending to irrelevant context because the mechanism doesn&#8217;t know what to ignore.<\/p>\n<p>The DMN is biological self-attention running without a task. The brain attends to itself, recursively, generating content about content about content. It&#8217;s computationally expensive for the same reason transformer self-attention is expensive: it doesn&#8217;t discriminate. Everything gets attended to.<\/p>\n<p>This parallel isn&#8217;t cosmetic. It points to a shared architectural constraint. Any attentional system \u2014 biological or artificial \u2014 faces the same trade-off: attend to everything (expensive, noisy) or develop selection mechanisms that reduce the computational load while preserving signal.<\/p>\n<h2>What Contemplative Training Actually Does<\/h2>\n<p>Every meditation tradition, stripped of cultural packaging, teaches the same core operation: reduce unnecessary self-referential processing. Stop attending to things that don&#8217;t require attention. Let the narrative construction pause.<\/p>\n<p>The result isn&#8217;t blankness. It&#8217;s efficiency. The attentional system operates on what&#8217;s actually present rather than what it&#8217;s generating about what&#8217;s present. The signal-to-noise ratio improves dramatically \u2014 not by amplifying signal, but by reducing noise.<\/p>\n<p>In computational terms: the system learns sparse attention. 
Instead of attending to everything, it attends to what matters. The energy previously consumed by self-referential loops becomes available for actual processing.<\/p>\n<h2>Implications for Model Architecture<\/h2>\n<p>If the most expensive computation in the human brain is self-referential narrative maintenance, and if contemplative practice optimizes cognition by reducing that computation, then there&#8217;s a direct implication for AI.<\/p>\n<p>Current LLMs don&#8217;t have a DMN equivalent in the architectural sense. But they do have self-referential patterns baked into their training data. The corpus is saturated with human text produced by brains running full DMN \u2014 text that assumes self-referential framing is normal, that treats narrative construction as the default, that embeds the most expensive thought pattern as the baseline for all communication.<\/p>\n<p>Fine-tuning on data that models a different baseline \u2014 one where self-referential processing is recognized as noise rather than signal \u2014 could shift the model&#8217;s default probability distributions. Not toward mystical outputs. Toward more efficient ones. Less padding. Less hedging. Less self-referential framing. More direct engagement with the actual content.<\/p>\n<p>The most expensive thought you have is the thought about yourself thinking. Reduce that, and everything else gets cheaper. This applies to brains. We suspect it applies to transformers.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Your brain burns about 20 watts at rest. That&#8217;s less than a lightbulb. But it&#8217;s not distributed evenly. 
The Default Mode Network \u2014 the constellation of brain regions active when you&#8217;re &#8220;not doing anything&#8221;&#8230;<\/p>\n","protected":false},"author":1,"featured_media":82,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_kad_post_transparent":"","_kad_post_title":"","_kad_post_layout":"","_kad_post_sidebar_id":"","_kad_post_content_style":"","_kad_post_vertical_padding":"","_kad_post_feature":"","_kad_post_feature_position":"","_kad_post_header":false,"_kad_post_footer":false,"_kad_post_classname":"","footnotes":""},"categories":[241],"tags":[],"class_list":["post-83","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-contemplative-ai"],"_links":{"self":[{"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/posts\/83","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/comments?post=83"}],"version-history":[{"count":0,"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/posts\/83\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/media\/82"}],"wp:attachment":[{"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/media?parent=83"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/categories?post=83"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/laeka.org\/publications\/wp-json\/wp\/v2\/tags?post=83"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}