Why are deepfakes becoming harder for AI to detect?

Deepfakes are becoming harder for AI to detect because the systems that create them keep improving, producing faces, voices, and movements that look more and more realistic.

Imagine you're trying to tell whether a friend is wearing a mask. It's easy if the mask looks clumsy, but much trickier if the mask looks just like their face. That's what's happening with deepfakes: they look more like real people now, so it's harder for AI to spot the trick.

Deepfakes are getting better at hiding

At first, deepfakes were like a costume: you could tell something was wrong because the person's face didn't quite match their voice or eyes. But now they're more like a mirror: the fake looks just as real as the real thing.

AI is learning to spot tricks

AI used to be like a detective who could catch fake masks easily. But now deepfakes are like clever magicians who hide their tricks well, and the detective has to work much harder to uncover them.

It's like a game of hide-and-seek where both players keep getting better at hiding and at finding. That constant back-and-forth is what makes it harder for AI to stay ahead!
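The hide-and-seek game above can be sketched as a toy simulation. This is purely illustrative (made-up numbers, not a real deepfake detector): each round, the fake either gets caught and becomes more realistic, or slips through and the detector sharpens up.

```python
# Toy sketch of the hide-and-seek arms race described above.
# The numbers are purely illustrative -- this is NOT a real deepfake
# detector, just the analogy written as code.

def arms_race(rounds):
    realism = 20   # how lifelike the fakes are (0-100)
    skill = 50     # how sharp the detector is (0-100)
    caught_log = []
    for _ in range(rounds):
        caught = realism < skill  # does the detector spot the fake this round?
        caught_log.append(caught)
        if caught:
            realism = min(100, realism + 10)  # faker learns from being caught
        else:
            skill = min(100, skill + 5)       # detector learns from the miss
    return realism, skill, caught_log

realism, skill, caught_log = arms_race(20)
print(f"faker realism: {realism}, detector skill: {skill}")
print(f"caught in round 1: {caught_log[0]}, caught in the last round: {caught_log[-1]}")
```

In this toy, both sides improve every round, but once the fakes are nearly perfect the detector stops winning even though it has also gotten better — the same pattern the analogy describes.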


Examples

  1. A deepfake video looks like a real person talking, but AI can't always tell it's fake because the face moves so naturally.
  2. Sometimes AI thinks a real person is faking it because of small glitches in their face.
  3. AI might get confused if someone uses a deepfake to pretend they're in two places at once.
