Why do AI chatbots sometimes make up facts or 'hallucinate'?

AI chatbots sometimes make up facts because they're built to guess what words should come next, not to check whether those words are true. It's a bit like a guessing game where you have to give an answer even when you're not completely sure.

Imagine you have a friend who loves telling stories, but sometimes they mix up parts of different stories. That's a lot like how AI chatbots work: they're like that friend, piecing together bits of everything they've read to answer you.

How They Guess

AI chatbots learn patterns from huge amounts of text and use those patterns to decide what to say next. It's like learning new words by listening to your teacher or reading a book: the more examples you see, the better you get at guessing what comes next.

But sometimes they follow a pattern that doesn't quite fit, and they make an educated guess instead of checking the facts. That's why they might say something that sounds right but isn't true. It's like when your friend tells you a story with a twist that didn't happen in the original.
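For readers who like to peek behind the curtain, here is a tiny, made-up sketch in Python of how pattern-based guessing can produce something false. The sentences, the word counts, and the guess_sentence function are all invented for illustration; real chatbots learn patterns from billions of sentences, but the basic move is the same: pick a likely next word, without checking whether the finished sentence is actually true.

```python
import random
from collections import defaultdict

# Tiny pretend "training data": sentences the chatbot has read before.
training_sentences = [
    "the moon is made of rock",
    "the cake is made of cheese",
    "the moon is very far away",
]

# Learn a very small "pattern": which word tends to follow which.
next_words = defaultdict(list)
for sentence in training_sentences:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_words[current].append(following)

def guess_sentence(start_word, length=6):
    """Keep guessing the next word from learned patterns -- no fact checking."""
    word = start_word
    sentence = [word]
    for _ in range(length):
        choices = next_words.get(word)
        if not choices:
            break
        word = random.choice(choices)  # pick a likely-looking next word
        sentence.append(word)
    return " ".join(sentence)

print(guess_sentence("the"))
```

Because "the moon is" and "is made of cheese" are both familiar patterns here, this sketch can cheerfully print "the moon is made of cheese", a sentence that sounds like something it has read, even though it never did and it isn't true.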

The More They Learn, the Better They Guess

AI chatbots get better over time, just like you get better at guessing games the more you play. But even the best guessers are wrong sometimes, and that's okay, as long as you remember to double-check the important stuff.

Examples

  1. An AI says the president of France is a robot, but that's not true.
  2. A chatbot claims the moon is made of cheese and adds it to your homework.
  3. You ask an AI about the history of phones, and it makes up a whole new invention.
