Why are large language models sometimes said to be "hallucinating"?

Large language models are sometimes said to be "hallucinating" because they make up answers that aren't true, just like when you imagine a friend is hiding behind the couch even though no one is there.

What does "hallucinating" mean?

Imagine you have a robot that loves telling stories. Most of the time, it tells real stories about things it knows, like what happened at your last birthday party. But sometimes it gets confused and starts making up stories that never happened, like saying your pet dinosaur went to space on a spaceship. That's hallucinating.

Why do they make up stories?

Think of the robot as a very smart kid who has read a lot of books. It tries its best to answer questions, but sometimes it gets mixed up between different books and gives answers that aren't in any of them. If you ask, "What color is the moon on Tuesday?", the robot might say "Blue!" even though the moon isn't blue, just like when you think your red balloon is green because it looks a little strange in the dark.

So, hallucinating happens when large language models sound very sure of themselves but are wrong, just like when you imagine things that aren't real.
