Module 1: Bias in AI

LESSON 1.3: WHEN AI TAKES A WRONG TURN: ALGORITHMIC BIAS

Algorithmic bias shows up when an AI system makes decisions that are unfair or discriminatory. For instance, facial recognition software might identify faces from some demographic groups accurately while misidentifying faces from others far more often. This happens because the data or the system's design failed to account for diversity, or sometimes because biased outcomes were baked in intentionally.
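To make the facial recognition example concrete, the sketch below compares a model's accuracy across demographic groups, which is one common way such disparities are surfaced in audits. The data and group labels here are hypothetical, for illustration only; a real audit would use a carefully annotated benchmark dataset.

    from collections import defaultdict

    def accuracy_by_group(y_true, y_pred, groups):
        """Return the classifier's accuracy computed separately per group."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for truth, pred, group in zip(y_true, y_pred, groups):
            total[group] += 1
            correct[group] += int(truth == pred)
        return {g: correct[g] / total[g] for g in total}

    # Hypothetical evaluation results for a face-matching model.
    y_true = [1, 1, 0, 1, 0, 1, 1, 0]   # ground-truth match / no-match
    y_pred = [1, 1, 0, 0, 1, 1, 0, 0]   # model predictions
    groups = ["A", "A", "A", "B", "B", "B", "B", "A"]  # demographic group

    print(accuracy_by_group(y_true, y_pred, groups))
    # A large gap between the per-group accuracies is exactly the kind
    # of disparity described above.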

Bias in AI is tricky because it can come from multiple sources. It might be accidental, such as training on incomplete or unrepresentative data; it might grow unnoticed as the system learns; or it can even be a deliberate design choice that prioritizes certain outcomes. No matter how it arises, the results can harm people and deepen existing inequalities.

To prevent this, we need strong tools and ethical guidelines to detect and fix bias. By prioritizing fairness and inclusivity, we can ensure AI systems work better for everyone, not just a select few. 

Watch the video lecture (tutorial) titled “Algorithmic Bias: From Discrimination Discovery to Fairness-Aware Data Mining,” recorded at the KDD 2016 conference in San Francisco (three video parts in total). The tutorial surveys algorithmic bias, presenting its most common variants, with an emphasis on the algorithmic techniques and key ideas developed to derive efficient solutions. It covers two main complementary approaches: algorithms for discrimination discovery and discrimination prevention by means of fairness-aware data mining. The conclusion summarizes promising directions for future research.
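As a preview of the discrimination discovery side of the tutorial, the sketch below computes the disparate impact ratio, a simple measure from the fairness literature that compares the rate of favorable decisions received by a protected group against everyone else. The decision data is hypothetical and the 0.8 threshold is the common “four-fifths” rule of thumb; neither is drawn from the tutorial itself.

    def disparate_impact(decisions, protected):
        """Ratio of favorable-decision rates: protected group vs. the rest.

        decisions: 0/1 outcomes (1 = favorable decision)
        protected: booleans (True = member of the protected group)
        """
        in_group = [d for d, p in zip(decisions, protected) if p]
        out_group = [d for d, p in zip(decisions, protected) if not p]
        rate_in = sum(in_group) / len(in_group)
        rate_out = sum(out_group) / len(out_group)
        return rate_in / rate_out

    # Hypothetical loan decisions.
    decisions = [1, 0, 0, 1, 1, 1, 0, 1, 1, 1]
    protected = [True, True, True, True,
                 False, False, False, False, False, False]

    ratio = disparate_impact(decisions, protected)
    print(f"disparate impact ratio: {ratio:.2f}")  # 0.60 here
    # Values well below 0.8 flag a disparity worth investigating.

A ratio near 1.0 means both groups receive favorable decisions at similar rates; the further it falls below the four-fifths threshold, the stronger the signal that the decision process deserves closer scrutiny.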