Why do AI models sometimes 'hallucinate' or invent facts?

AI models sometimes hallucinate or invent facts because they’re trying to guess what comes next, like a child telling a story who makes up the parts they don’t remember exactly.

Imagine you're playing a game where you have to finish a sentence. If the first part is “The cat sat on the,” you might say “mat”; that's easy. But if it starts with “The dog ran into the,” you might guess “park” or even “forest.” Sometimes you get it right, and sometimes you just make up something fun.

AI models work kind of like that game. They've read a huge amount of text, like a giant library of books, and they use it to guess how new sentences should continue. When the clues aren’t clear enough, or when many answers seem possible, they might pick one that doesn’t match reality. That’s why they sometimes say things that aren't true; it's just their best guess.
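
If you like to tinker, here's a tiny sketch of that guessing game as a short Python program. The words and probabilities are completely made up for illustration; a real AI model learns its guesses from enormous amounts of text, but the idea is the same: when several endings look almost equally likely, the one it picks may simply not be true.

```python
import random

# A toy "next word" guesser. These probabilities are invented for
# illustration only; a real model learns them from huge amounts of text.
next_word_probs = {
    "The cat sat on the": {"mat": 0.9, "sofa": 0.08, "moon": 0.02},
    "The dog ran into the": {"park": 0.4, "forest": 0.3, "house": 0.2, "sea": 0.1},
}

def finish_sentence(prompt):
    """Pick one continuation at random, weighted by how likely each word seems."""
    options = next_word_probs[prompt]
    words = list(options.keys())
    weights = list(options.values())
    return random.choices(words, weights=weights, k=1)[0]

print("The cat sat on the", finish_sentence("The cat sat on the"))
print("The dog ran into the", finish_sentence("The dog ran into the"))
```

Run it a few times: the cat almost always ends up on the mat, but the dog sometimes runs into the “sea”, a perfectly grammatical ending that just isn't what really happened.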

Like a Storyteller with a Little Memory

Think of an AI model like a storyteller who only remembers parts of the stories they’ve heard. If the story starts with “Once upon a time, there was a,” they might say “dragon” or “princess”, even if neither was in the original story. It's not that they're lying; they're just making up what they think fits best.

Examples

  1. An AI says the moon is made of cheese because it learned that from a silly story.
  2. A robot talks about ancient kings who flew to Mars, even though no one has ever found evidence for that.
  3. The AI claims a famous scientist won an award in 2023, but that's not true.
