What are contextual embeddings?

Contextual embeddings are like giving words superpowers based on who they're hanging out with.

Imagine you have a toy box full of blocks, where each block is a word. When you put certain blocks together, they do different things. For example, the word “play” can mean having fun with friends or playing a video game. So if you put “play” next to “with”, it might be about playing with toys, but if you put it next to “game”, it is probably about a video game.

That’s what contextual embeddings do: they look at the words around a word and change how that word is understood, just like your toy blocks can do different things depending on which other blocks are nearby. Instead of having one fixed meaning for each word, contextual embeddings let each word change its "look" based on where it is in a sentence.
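To make that concrete, here is a toy sketch with made-up numbers (the vectors and the rule are invented for illustration, not from any real model): a static embedding gives "play" the same vector no matter the sentence, while a contextual embedding returns a different vector when the neighboring words change.

```python
# A toy illustration with made-up numbers (no real model): a static embedding
# gives "play" one fixed vector, while a contextual embedding lets the vector
# shift with the surrounding words.
static_embedding = {"play": [0.2, 0.9]}  # one vector, used everywhere

def contextual_embedding(word: str, sentence: str) -> list[float]:
    # Pretend rule standing in for a real model: neighbors change the vector.
    if word == "play" and "game" in sentence:
        return [0.8, 0.1]  # "play" as in a video game
    return static_embedding[word]  # "play" as in playing with toys

print(static_embedding["play"])                                # [0.2, 0.9]
print(contextual_embedding("play", "let's play with toys"))    # [0.2, 0.9]
print(contextual_embedding("play", "let's play a video game")) # [0.8, 0.1]
```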

How It Works

Think of it like wearing different outfits to school: you might wear a superhero costume on Monday and a regular shirt on Tuesday. The embedding (like your outfit) changes depending on the context (the day of the week or what you're doing). So "bank" can mean a place where you keep money or the side of a river, and contextual embeddings help us tell the two apart!
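In practice, models like BERT produce these context-dependent vectors. Here is a minimal sketch, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint (neither is named above; they are just a convenient choice for illustration): it pulls out the vector for "bank" in a money sentence and a river sentence, then measures how different the two vectors are.

```python
# A minimal sketch, assuming the Hugging Face "transformers" library and the
# bert-base-uncased model (pip install transformers torch).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embedding_for(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual vector for `word` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (num_tokens, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index(word)]  # vector at the word's position

money_bank = embedding_for("i deposited my money at the bank", "bank")
river_bank = embedding_for("we sat on the bank of the river", "bank")

# Same word, different neighbors -> different vectors.
sim = torch.cosine_similarity(money_bank, river_bank, dim=0).item()
print(f"cosine similarity between the two 'bank' vectors: {sim:.2f}")
```

A static embedding such as word2vec would return the identical vector for "bank" in both sentences, so the cosine similarity would be exactly 1; with a contextual model the two "bank" vectors diverge, which is how the model tells the money meaning from the river meaning.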

Examples

  1. A child learns that the word 'bank' can mean a place to keep money or the side of a river, depending on what they're reading.
  2. Understanding how a word's meaning shifts from sentence to sentence helps computers better understand what people mean.
  3. Contextual embeddings help computers learn from sentences just like kids do.

Categories: Technology · NLP · Machine Learning · AI