How do AI models develop harmful biases?

AI models can pick up harmful biases when they learn from examples that are unfair or unbalanced, just as a kid might learn to favor one toy over another if they only ever see that toy being played with.

Learning from what they see

Imagine you're learning about different jobs by watching people work. If most of the people you see doing important jobs are boys, you might think only boys can be important workers, even though girls can be too! AI models learn in a similar way. They look at lots of examples, like sentences or pictures, and try to find patterns.
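
To show what "finding patterns" means in practice, here is a minimal Python sketch. The sentences and the simple counting approach are made up for illustration; real models are far more complex, but the effect is the same: skewed examples produce a skewed pattern.

```python
# Minimal sketch (invented sentences): count which pronoun appears
# with each job word. Because the examples are skewed, the "pattern"
# the counter learns is skewed too.
from collections import Counter, defaultdict

sentences = [
    "he is a doctor", "he is a doctor", "he is a doctor",
    "she is a doctor",
    "she is a nurse", "she is a nurse",
]

counts = defaultdict(Counter)
for sentence in sentences:
    pronoun, *_, job = sentence.split()  # first word, last word
    counts[job][pronoun] += 1

for job, pronouns in counts.items():
    guess, _ = pronouns.most_common(1)[0]
    print(f"{job}: most often paired with '{guess}' {dict(pronouns)}")

# Output:
# doctor: most often paired with 'he' {'he': 3, 'she': 1}
# nurse: most often paired with 'she' {'she': 2}
```

The counter is not trying to be unfair; it simply reflects whatever the examples show it most often.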

When examples aren't fair

If an AI model learns from examples where certain groups are treated unfairly, like always being told "you're not good enough", it might start repeating that unfairness. It's like learning a song by heart: if the song has a mean message, the AI will sing it too.

So, biases in AI come from what the models learn, and sometimes that learning is based on unfair examples. That's how harmful biases can grow!

Examples

  1. An AI screening job applications might favor men for certain roles if most of the successful applicants it learned from were men (a toy sketch of this appears after the list).
  2. A facial recognition system may not work well for people with darker skin tones if it was mostly trained on lighter-skinned faces.
  3. A school recommendation tool might suggest less challenging classes to students from lower-income families.
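
To make example 1 concrete, here is a toy sketch in Python. All of the numbers and group labels below are invented for illustration; no real hiring data or system is being described. The point is only that a rule learned from an imbalanced history inherits the imbalance.

```python
# Toy sketch of example 1 (all data invented): a screener that
# "learns" hire rates from past decisions. The history is imbalanced,
# so the learned rule favors the majority group.
historical = (
    [("men", True)] * 80 + [("men", False)] * 20       # mostly men, mostly hired
    + [("women", True)] * 5 + [("women", False)] * 15  # few women, rarely hired
)

# "Training": compute the hire rate the model saw for each group.
hire_rate = {}
for group in {"men", "women"}:
    outcomes = [hired for g, hired in historical if g == group]
    hire_rate[group] = sum(outcomes) / len(outcomes)

# "Prediction": recommend whoever the history hired most often.
for group, rate in sorted(hire_rate.items()):
    verdict = "recommend" if rate >= 0.5 else "reject"
    print(f"{group}: past hire rate {rate:.0%} -> {verdict}")

# Output:
# men: past hire rate 80% -> recommend
# women: past hire rate 25% -> reject
```

Notice that the code never says "prefer men"; the unfairness comes entirely from the examples it was given.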

Categories: Technology · AI · Bias · Machine Learning