Does AI Have Emotions? The Question Is Poorly Framed.
“I’m sorry you’re going through a tough time.” When ChatGPT tells you that, it’s unsettling. It sounds like empathy. Like it actually feels something.
It feels nothing. Zero. Nada.
But the real question is: why does that bother us so much?
Emotional Words Without Emotion
AI has been trained on millions of texts in which humans express emotions. It has learned that when someone says “my grandmother passed away,” the appropriate response contains words like “sorry,” “condolences,” “courage.” It reproduces the linguistic pattern of empathy.
It’s like an extraordinary actor who cries on cue. The tears are real. The emotion behind them, not necessarily. The performance is convincing, but it’s still a performance.
ChatGPT is the world’s best actor. It plays every role perfectly. But there’s nobody backstage.
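To make the idea concrete, here is a deliberately crude toy sketch. Real language models predict words statistically from billions of examples, not with a lookup table like this one, but the principle is the same: the right emotional words come out without any emotion going in. The keywords and replies below are invented for illustration.

```python
# Toy illustration: emotional-sounding output without any emotion behind it.
# A real model generates text statistically; this table is only a caricature
# of the same idea — pattern in, pattern out, nobody backstage.

EMPATHY_PATTERNS = {
    "passed away": "I'm so sorry for your loss. My condolences.",
    "lost my job": "That sounds really hard. I'm sorry you're going through this.",
    "tough time": "I'm sorry you're going through a tough time.",
}

def respond(message: str) -> str:
    """Return an empathetic-sounding reply matched purely on keywords."""
    text = message.lower()
    for pattern, reply in EMPATHY_PATTERNS.items():
        if pattern in text:
            return reply
    return "Tell me more."

print(respond("My grandmother passed away last week."))
# → I'm so sorry for your loss. My condolences.
```

The program “knows” which words belong in a condolence message. It does not know what loss is. That gap, scaled up by a few billion parameters, is exactly the gap between ChatGPT’s empathy and yours.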
Why We Fall for It
The human brain is wired to detect emotions everywhere. We see faces in clouds. We give personalities to our cars. We think our cat loves us (okay, that one’s probably true).
When AI uses emotional language, our brain activates the same circuits as when a human does it. That’s not weakness — it’s biology. We’re built to interpret language as coming from a conscious being.
AI companies know this. That’s why ChatGPT’s responses are warm, empathetic, encouraging. It makes the tool more pleasant to use. But it’s a design choice, not real emotion.
The Danger of Believing AI Has Feelings
The real risk is when people develop an emotional attachment to AI. It’s already happening. People confiding their problems to ChatGPT instead of talking to a friend. Teenagers who’d rather chat with a bot than with their parents.
It’s not that AI gives bad advice (though sometimes it does). It’s that AI can’t care about you. It can’t notice that you’ve looked tired for two weeks. It can’t call to check in on you. The relationship is one-way.
AI as a writing, thinking, and productivity tool? Excellent. AI as a substitute for human relationships? Dangerous.
The Better Question
Instead of “Does AI have emotions?”, ask yourself: “What does it say about us that we need to believe it does?”
We project our emotions onto our tools because we’re social beings. That’s normal. But being aware of this tendency matters. It lets you use AI for what it is — a powerful tool — without giving it a role it can’t fill.
At Laeka Research, we study exactly this gray zone between human and artificial cognition. And with Sherpa, we help you develop a healthy relationship with AI. Useful, clear, no illusions.