Oxford’s AI Chair says large language models, like the ones that write essays or chat, are a hack because they cheat by using patterns instead of real understanding.
Imagine you're learning to read, and someone gives you a book with only one sentence repeated over and over: "The cat sat on the mat." You might start saying, "The cat sat on the mat," too, just by copying, even if you don’t know what it means. That’s like how LLMs work.
How It's Like Copying
Think of an LLM as a super-smart kid who has memorized billions of sentences from books, websites, and chats. When someone asks it to write something new, say, a story about a dog, the LLM doesn’t really understand dogs or stories. It just looks for patterns in its memory and stitches parts of old sentences together to make something that looks like a real story.
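To make the "copying patterns" idea concrete, here is a toy sketch in Python. It is nothing like a real LLM (real models learn probabilities over huge vocabularies with neural networks), but it shows the same basic trick: memorize which word tends to follow which, then generate "new" text by replaying those memorized pairs. The training text and function names here are made up for illustration.

```python
import random
from collections import defaultdict

# A tiny "pattern copier" (a word-pair, or bigram, model).
# It never understands the text; it only remembers which word
# it saw right after each word in its training data.
training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the dog ran in the park"
)

words = training_text.split()

# For each word, collect every word that was seen right after it.
follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start, length=6, seed=0):
    """Stitch a sentence together by copying memorized word pairs."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # no memorized pattern continues from here
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Every sentence this toy produces is built entirely from pieces it has already seen, which is the point of the analogy: fluent-looking output, zero understanding.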
Why It's a Hack
This is called a hack because it’s clever but not quite fair. The LLM doesn’t need to know what it’s saying; it just needs to find the right pieces from its memory and stick them together, like when you copy a friend’s homework without really understanding it.
It’s like having a giant puzzle with billions of pieces: the LLM picks the best ones to make something that looks complete, even if it doesn’t all fit perfectly.
Examples
- A child learns that LLMs are like shortcuts that let AI read and write text quickly, even if the results aren’t perfect.
See also
- AI Literacy: How do AI Image Generators Work?
- AI Is Creating the Most Real Games Ever - But Should It?
- Can AI chatbots secretly insert ads into their responses?
- How AI Could Empower Any Business | Andrew Ng | TED?
- Does this AI FINALLY replace Game Devs? Should YOU worry?