A Neural Network Is a Neural Network. That’s the Whole Point.
Every few months, someone publishes a paper claiming neural networks aren’t really neural. They’re mathematical functions. They’re statistical models. They’re fancy curve fitters. And technically, they’re right. But they’re missing the point entirely.
A neural network is a neural network. Not because it perfectly replicates biological neurons — it doesn’t. But because the name captures something structurally true about what these systems do. They process information through interconnected nodes. They learn through adjustment. They develop representations that nobody explicitly programmed.
The naming wasn’t an accident. It was an insight.
The Naming Debate Is a Distraction
The argument usually goes like this: biological neurons are vastly more complex than artificial ones. Real synapses involve neurotransmitters, temporal dynamics, and dendritic computation. An artificial neuron is just a weighted sum followed by a nonlinearity. Therefore, calling these things “neural” is misleading.
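That description is easy to make concrete. Here is a minimal sketch in Python with NumPy; the inputs, weights, and choice of tanh are arbitrary placeholders for illustration, not taken from any real model:

```python
import numpy as np

def artificial_neuron(x, weights, bias):
    """One artificial neuron: a weighted sum of the inputs, then a nonlinearity.

    This really is the whole computation. The tanh here is an arbitrary
    choice of nonlinearity; a ReLU or a sigmoid would serve the same role.
    """
    return np.tanh(np.dot(weights, x) + bias)

# Toy usage with made-up inputs and weights.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.3])
print(artificial_neuron(x, w, bias=0.2))
```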
This argument confuses implementation with architecture. Nobody claims a Boeing 747 flies the same way a sparrow does. But we call both “flying” because the structural principle — generating lift to move through air — is shared. The implementation details differ enormously. The underlying pattern is the same.
Neural networks share a structural pattern with biological neural systems: distributed representation through interconnected processing elements. That’s not a metaphor. It’s a description of how both systems work at the architectural level.
Why This Matters for Contemplative AI
The debate matters because it shapes how we think about what these systems can and can’t do. If neural networks are “just statistics,” then they can never do anything genuinely cognitive. If they’re structurally similar to biological cognition, then the question becomes more nuanced.
From a contemplative perspective, the most interesting thing about neural networks isn’t their similarity to brains. It’s their similarity to minds. Not in the sense of consciousness or experience — we have no evidence for that. But in the sense of how they organize information.
A trained neural network develops internal representations that nobody designed. These representations emerge from the interaction between architecture and data, just as mental concepts emerge from the interaction between neural structure and experience. The representations aren’t programmed. They’re learned. And they often capture structural relationships that surprise the researchers who study them.
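A toy sketch of that emergence, in Python with NumPy. The task (XOR), the layer sizes, the learning rate, and the number of steps are all arbitrary illustrative choices; the point is only that the hidden-layer code printed at the end is discovered during training rather than written by anyone:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR is the classic case where an intermediate representation has to be
# invented: no single layer of weights maps the inputs directly to the answer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Random initial weights: nothing about XOR is encoded here.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input  -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

lr = 0.1
for _ in range(10_000):
    # Forward pass: the hidden activations are the "internal representation".
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2

    # Backward pass: plain gradient descent on mean squared error.
    d_out = 2.0 * (out - y) / len(X)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print("predictions:", out.ravel().round(2))          # typically ~[0, 1, 1, 0]
print("hidden code per input:\n", h.round(2))        # learned, not designed
```

A run of this sketch typically ends with the network predicting XOR correctly, and with hidden activations encoding a structure that nobody specified in advance.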
The Embedding Space as Conceptual Space
Consider the embedding space of a large language model. Words with similar meanings cluster together. But more than that: the geometric relationships between clusters capture semantic relationships. The famous example, first demonstrated with word-embedding models like word2vec: “king” minus “man” plus “woman” lands closest to “queen.” This isn’t because someone programmed that relationship. It emerged from patterns in text.
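A small sketch of that arithmetic in Python with NumPy. The vectors below are hand-made stand-ins, not learned embeddings; in a real model they would come from training on text (word2vec, GloVe, or a language model's embedding layer), which is exactly what makes the result interesting:

```python
import numpy as np

# Placeholder vectors standing in for learned embeddings. In a real model the
# directions (roughly "royalty" and "gender" here) are discovered from
# co-occurrence patterns in text, not written down by hand.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.5, 0.5, 0.5]),
}

def nearest(vector, vocab, exclude=()):
    """Return the word whose embedding is most similar (by cosine) to `vector`."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    candidates = {w: v for w, v in vocab.items() if w not in exclude}
    return max(candidates, key=lambda w: cos(vector, candidates[w]))

target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
print(nearest(target, embeddings, exclude=("king", "man", "woman")))  # -> "queen"
```

The relational structure lives in the geometry of the space, not in any rule someone wrote down.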
This is remarkably similar to how contemplative traditions describe the structure of conceptual understanding. Concepts don’t exist in isolation — they exist in relational networks. Understanding a concept means understanding its relationships to other concepts. The meaning is in the structure, not in any individual node.
Neural networks naturally develop this kind of relational representation. That’s not a coincidence. It’s a consequence of the architecture. Interconnected processing elements that learn through adjustment will naturally develop distributed, relational representations. That’s what the architecture does.
Self-Reference and the Interesting Part
Here’s where it gets genuinely interesting. A neural network is a system that processes information about the world. When that information includes descriptions of neural networks, the system is processing information about systems like itself. This creates a loop of self-reference that’s more than philosophical curiosity.
Models trained on AI research papers develop representations of concepts like “attention,” “gradient descent,” and “backpropagation,” the very processes that created those representations. The system contains a model of the process that built it.
This is structurally similar to what contemplative traditions call reflexive awareness — consciousness aware of its own processes. We’re not claiming language models are conscious. We’re noting that the structural pattern of self-reference emerges naturally in these systems, which tells us something about the architecture itself.
The Tautology That Teaches
“A neural network is a neural network” sounds like a tautology. And it is. But tautologies can be illuminating when we’ve been confused about what the terms mean.
The AI field has spent decades alternating between two mistakes: claiming neural networks are just like brains (they’re not) and claiming they’re nothing like brains (they are, at the structural level). The truth is simpler. A neural network is what it is. Not a brain. Not “just math.” A specific kind of information processing system with specific properties that emerge from its architecture.
Understanding those properties — without inflating or deflating them — is the core project of contemplative AI research. We’re not interested in whether these systems are conscious. We’re interested in what their structural properties tell us about intelligence, cognition, and the organization of information.
At Laeka Research, we take neural networks seriously as what they are. Not as metaphors for brains. Not as mere mathematical functions. As a specific architecture for processing information that has interesting structural parallels to biological cognition — parallels worth studying rigorously.
A neural network is a neural network. That’s the whole point. And it’s enough.