Oxford's AI Chair: Why LLMs Are a Hack

Oxford’s AI Chair says large language models (LLMs), the kind that write essays or chat, are a hack: they cheat by relying on patterns instead of real understanding.

Imagine you're learning to read, and someone gives you a book with only one sentence repeated over and over: "The cat sat on the mat." You might start saying, "The cat sat on the mat," too, just by copying, even if you don’t know what it means. That’s like how LLMs work.
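The copying idea above can be sketched as a toy program. This is only an illustration, not how a real LLM works: it memorizes which word follows which in the example sentence, then "speaks" by always repeating the most familiar continuation.

```python
from collections import Counter, defaultdict

# Toy "training data": the single sentence from the example above.
words = "the cat sat on the mat".split()

# Memorize which word follows which (simple bigram counts).
memory = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    memory[current][following] += 1

# "Speak" by always copying the most familiar next word.
word, output = "the", ["the"]
for _ in range(5):
    word = memory[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # looks like the memorized sentence, glued from its pieces
```

The program can only ever produce words it has already seen, just like the child who repeats "The cat sat on the mat" without knowing what it means.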

How It's Like Copying

Think of an LLM as a super-smart kid who has memorized billions of sentences from books, websites, and chats. When someone asks it to write something new, say, a story about a dog, the LLM doesn’t really understand dogs or stories. It just looks for patterns in its memory and stitches pieces of old sentences together to make something that looks like a real story.
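The stitching-from-memory idea can be sketched with a tiny word-chain generator (a first-order Markov chain). This is a deliberately crude stand-in for an LLM: the corpus, the sentences, and the chain itself are all made up for illustration, and real LLMs use neural networks rather than literal lookup tables.

```python
import random
from collections import defaultdict

# A tiny stand-in for an LLM's training data (a real model sees billions of sentences).
corpus = [
    "the dog ran across the park",
    "the dog chased the ball",
    "a happy dog ran to its owner",
    "the ball rolled across the grass",
]

# Memorize which words can follow which word.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)

def generate(start, length, seed=0):
    """Glue memorized fragments together, starting from `start`."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(word)
        if not choices:  # dead end: no memorized pattern continues this word
            break
        word = rng.choice(choices)
        output.append(word)
    return " ".join(output)

story = generate("the", 8)
print(story)
```

Every word in the output comes straight from the memorized sentences; the "story" is just fragments recombined, which is exactly the point the analogy is making.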

Why It's a Hack

This is called a hack because it’s clever but not quite fair. The LLM doesn’t need to know what it’s saying; it just needs to find the right pieces in its memory and stick them together, like copying a friend’s homework without really understanding it.

It’s like having a giant puzzle with billions of pieces: the LLM picks the best ones to make something that looks complete, even if it doesn’t all fit perfectly.
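Picking "the best pieces" can be sketched as choosing the most probable continuation. The counts below are invented for illustration: they pretend we tallied what followed a phrase in training text, then turn the tallies into probabilities and pick the likeliest piece.

```python
from collections import Counter

# Hypothetical counts of what followed "the cat sat on the" in training text.
continuations = Counter({"mat": 50, "roof": 12, "moon": 1})

# Turn counts into probabilities and pick the likeliest "puzzle piece".
total = sum(continuations.values())
probs = {word: count / total for word, count in continuations.items()}
best = max(probs, key=probs.get)

print(best, round(probs[best], 2))  # "mat" wins because it was seen most often
```

The model never decides that a mat is a sensible thing to sit on; "mat" simply has the highest count, which is what "picking the best piece" amounts to in this sketch.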


Examples

  1. A child learns that LLMs take shortcuts: they reuse patterns to read and write text quickly, even if the result isn't perfect.


Categories: Science · AI · LLM · Oxford