Why AI Sometimes Says Nonsense (And That’s Normal)
You’ve probably seen it happen. You ask ChatGPT a question and it answers with total confidence… except the answer is completely wrong. It invents a book that doesn’t exist. It cites a fictional scientific paper. It pulls statistics out of thin air.
And the worst part? It looks absolutely sure of itself.
This is called a hallucination. And it’s not a bug. It’s a direct consequence of how AI works.
AI is a sentence-completion machine
ChatGPT generates text one word at a time, choosing at each step a likely next word given everything that came before. It’s a giant autocomplete. It has no “fact database” to consult. There’s no little librarian inside checking sources.
When you ask “What’s the tallest bridge in Quebec?”, it’s not looking up the answer in an encyclopedia. It generates the sequence of words that most resembles a good answer, based on everything it read during training.
Most of the time, that gives the right answer. But when the training data is thin, contradictory, or simply missing… it makes things up. With confidence. Because it has no concept of “I don’t know.”
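To make that concrete, here’s a deliberately tiny toy sketch of the idea. The word table and the probabilities are invented for illustration; a real model uses a neural network trained on billions of examples, not a hand-written dictionary. But notice what’s missing: there’s no entry for “I don’t know,” and nothing in the code ever checks a source.

```python
import random

# Toy "language model": all it knows is which words tend to follow
# a given prompt, and how often. No fact database, no librarian.
# (This table is made up for illustration.)
next_word_probs = {
    "The tallest bridge in Quebec is": {
        "the": 0.60,       # sounds like the start of a good answer
        "located": 0.25,
        "famous": 0.15,    # note: "I don't know" isn't an option
    },
}

def complete(prompt: str) -> str:
    """Pick a continuation by probability, never by checking a source."""
    options = next_word_probs[prompt]
    return random.choices(list(options), weights=list(options.values()))[0]

print(complete("The tallest bridge in Quebec is"))
```

Whether the finished sentence turns out true or invented depends entirely on what those probabilities happen to encode. The selection step is the same either way.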
It’s like a professional storyteller
Imagine a storyteller who’s read every book in the world. You ask them to tell you the story of the Battle of Châteauguay. If they’ve read about it often, they’ll tell it well. But if you ask for the story of the Battle of Saint-Rémi-de-Napierville (which doesn’t exist), they’ll still tell you a story. With dates, names, details. Because that’s what they do: they tell stories.
AI can’t tell the difference between saying something true and saying something plausible. To the model, it’s the same process.
The most common hallucinations
Fake citations are a classic. Ask ChatGPT to find 5 studies on a niche topic and it’ll give you 5, with titles, authors, and dates. Except 2 or 3 will be completely made up. The authors exist, but the paper doesn’t.
Fake historical facts are common too. AI will mix up events, invent dates, or combine two different stories into one. It’s like a student who crammed too hard the night before the exam and is mixing everything up.
And math. AI is surprisingly bad at basic arithmetic. It’ll sometimes tell you that 37 x 14 = 498 (the right answer is 518) because it “predicts” the result instead of calculating it.
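That’s also the easiest hallucination to catch, because real computation is deterministic. One line of Python (or any calculator) settles it:

```python
# A calculator computes; a language model predicts.
# Python actually performs the multiplication, so the
# answer is 518 every single time, never a "plausible" 498.
print(37 * 14)  # 518
```

This is also why many AI tools now hand arithmetic off to an actual calculator behind the scenes instead of letting the model guess.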
How to protect yourself
The golden rule: verify everything factual. If ChatGPT gives you a fact, a date, a statistic, a name — check it with a reliable source. Treat AI as a first draft, never as a final source.
For creative tasks — brainstorming, writing, rephrasing — hallucinations are rarely a problem because you’re not looking for facts. You’re generating ideas.
The trick is knowing when to trust and when to verify. That’s exactly what we teach in Sherpa, our free AI guide. Because AI talking nonsense is normal. But you swallowing it without checking — that’s a problem we can fix.