Why do AI models sometimes 'hallucinate' or make up facts?

AI models sometimes hallucinate or make up facts because they are always trying to guess what comes next, a bit like when you're telling a story, forget how it goes, and fill the gap with your best guess.

Imagine you have a big, colorful puzzle with thousands of pieces, where each piece is a word. When the AI reads a sentence, it looks at the words around it, like neighbors in a block of flats, and tries to figure out what the next word should be. It isn't always right; sometimes it picks a word that doesn't fit, because it is only ever making its best guess.
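If you like to see ideas in code, here is a tiny, made-up sketch of that "best guess" idea. The word table, the probabilities, and the `guess_next_word` function are all invented for illustration; real models learn millions of these patterns from their training text rather than using a hand-written table.

```python
import random

# A toy "next word" table: for a phrase the model has just seen, a list of
# words it thinks might come next, each with a made-up probability.
next_word_guesses = {
    "the moon is made of": [("rock", 0.6), ("cheese", 0.3), ("dust", 0.1)],
    "the capital of italy is": [("rome", 0.7), ("paris", 0.2), ("milan", 0.1)],
}

def guess_next_word(prompt):
    """Pick the next word by sampling from the model's guesses.

    The model never says "I don't know": it always picks *something*,
    which is how a less likely, wrong word can still slip out.
    """
    choices = next_word_guesses[prompt.lower()]
    words = [word for word, _ in choices]
    weights = [prob for _, prob in choices]
    return random.choices(words, weights=weights, k=1)[0]

# Run it a few times: most guesses are sensible, but now and then the
# toy model "hallucinates" cheese, because that guess was never
# impossible, only less likely.
for _ in range(5):
    print("the moon is made of", guess_next_word("the moon is made of"))
```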

Like a Guessing Game

Think of it like playing 20 Questions with a friend. You ask yes-or-no questions to guess what they're thinking of. If a few answers point you down the wrong path, you can end up guessing something completely different, like guessing a dinosaur when your friend was actually thinking of a banana.

AI works the same way: if it gets confused by tricky or unusual sentences, it might start making up things that don't really fit, just to keep its story going. That's why AI sometimes says things that are surprising or even wrong; it's just trying its best to make sense of what it sees!
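Here is a second made-up sketch showing how one odd guess can snowball. The chain of words, the probabilities, and the `tell_a_story` function are invented for illustration only; the point is just that every new guess builds on the guesses before it, so one wrong turn steers the whole story.

```python
import random

# A tiny made-up chain of guesses: each word points to the words that
# might follow it. Once the model takes one odd turn, every later guess
# builds on that mistake, so the "story" drifts further from the truth.
follow_ups = {
    "dinosaurs": [("lived", 0.8), ("swim", 0.2)],
    "lived":     [("long", 0.9), ("here", 0.1)],
    "long":      [("ago", 1.0)],
    "swim":      [("in", 1.0)],
    "in":        [("the", 1.0)],
    "the":       [("ocean", 0.7), ("zoo", 0.3)],
    "ocean":     [("today", 1.0)],
    "zoo":       [("today", 1.0)],
    "ago":       [],
    "here":      [],
    "today":     [],
}

def tell_a_story(first_word):
    """Keep guessing the next word until the chain runs out."""
    story = [first_word]
    while follow_ups.get(story[-1]):
        words = [word for word, _ in follow_ups[story[-1]]]
        weights = [prob for _, prob in follow_ups[story[-1]]]
        story.append(random.choices(words, weights=weights, k=1)[0])
    return " ".join(story)

# Most runs say "dinosaurs lived long ago", but the unlikely "swim" branch
# can produce "dinosaurs swim in the ocean today": a confident-sounding
# sentence the toy model made up just to keep the story going.
for _ in range(5):
    print(tell_a_story("dinosaurs"))
```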

Examples

  1. An AI says the moon is made of cheese, even though it was never taught that.
  2. A chatbot tells a child that dinosaurs still live in the ocean today.
  3. A robot thinks Paris is the capital of Italy.
