| Site: | OpenLearn Create |
| Course: | Trustworthy and Democratic AI - Creating Awareness and Change |
| Book: | Module 1: Bias in AI |
Artificial Intelligence (AI) has the potential to transform our lives, but it’s not without challenges. One critical issue is AI bias, where systems unintentionally or intentionally produce unfair outcomes that reflect human prejudices. This module explores how bias enters AI, the impact it can have on individuals and society, and why addressing it is essential for creating ethical, trustworthy systems. Let’s explore how bias starts, how it can affect AI systems, and what steps we can take to ensure AI serves everyone fairly.
In Module 1, we cover the following lessons:
Lesson 1.1: Why AI Bias Matters and How It Affects Us
Lesson 1.2: Where Bias Begins: Human Bias
Lesson 1.3: When AI Takes a Wrong Turn: Algorithmic Bias
Lesson 1.4: The Big Picture
LESSON 1.1: WHY AI BIAS MATTERS AND HOW IT AFFECTS US
Bias is something we all experience—it’s shaped by our personal lives, culture, and society. It’s the lens through which we see the world, make decisions, and interpret situations. But here’s the thing: when humans build AI systems, those same biases can sneak into the technology. This can lead to unfair outcomes, even if it’s unintentional.
AI bias happens when algorithms produce results that favor one group over another or reinforce stereotypes. This bias can creep in during many stages of the process, from collecting and preparing data to training, testing, and using the AI. What’s worse, even small biases can snowball, affecting the system in ways that are hard to spot and even harder to fix. That’s why understanding and addressing bias is so important—it helps make AI fairer and more trustworthy.
Watch the panel discussion titled "AI for Society" that was held at the 1st European Summer School on Artificial Intelligence (ESSAI) and 20th Advanced Course on Artificial Intelligence (ACAI) in Ljubljana in summer 2023.
The panelists are: Nataša Pirc Musar, PhD, President of the Republic of Slovenia; Prof. Michel Dumontier, PhD, Maastricht University; Prof. Tijl De Bie, PhD, Ghent University; Žiga Avsec, PhD, research scientist at DeepMind; Prof. Špela Vintar, PhD, University of Ljubljana; and Assistant Prof. Vida Groznik, PhD, University of Primorska and University of Ljubljana.
LESSON 1.2: WHERE BIAS BEGINS: HUMAN BIAS
Bias starts with us. It’s the assumptions, preferences, or prejudices we form over time, often without even realizing it. These biases influence how we collect and interpret data—the same data used to train AI systems.
Since AI learns from real-world data, any existing inequalities or stereotypes in that data become part of the AI. For example, if historical hiring data shows a preference for certain demographics, an AI trained on it might continue that pattern. This is how bias can pass from people to data and then to the AI, amplifying existing problems. To stop this cycle, we need to be thoughtful and inclusive when gathering data to ensure AI systems are built on a foundation of fairness.
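To see how a preference hidden in historical data becomes measurable, consider a toy sketch in Python. The records and numbers below are entirely made up for illustration; the point is that any model trained to imitate these labels would inherit the same gap in selection rates.

```python
# Hypothetical historical hiring records: (group, hired).
# The data itself encodes a preference for group "A".
records = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rate(records, group):
    """Fraction of applicants from `group` who were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(records, "A")
rate_b = selection_rate(records, "B")
print(f"Group A hired at {rate_a:.0%}, group B at {rate_b:.0%}")
```

An AI trained to reproduce these hiring decisions would learn that being in group A predicts success—not because it is true, but because the past data says so. Auditing such rates before training is one concrete way to be "thoughtful and inclusive when gathering data."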
Watch the panel discussion titled “Human+AI collaboration: Reskilling and upskilling for the future” that was recorded in summer 2024 in Ljubljana, Slovenia. This panel discusses the symbiotic relationship between human translators and AI, addressing how professionals can adapt and enhance their skills in anticipation of future developments in AI translation. The session encourages discourse on education and skills development, followed by an audience Q&A.
LESSON 1.3: WHEN AI TAKES A WRONG TURN: ALGORITHMIC BIAS
Algorithmic bias shows up when an AI system makes decisions that are unfair or discriminatory. For instance, facial recognition software might work well for some groups but struggle with others. This happens because the data or design didn’t account for diversity, or sometimes because biased outcomes were baked in intentionally.
Bias in AI is tricky because it can come from multiple sources. It might be accidental—like using incomplete data—or it might grow unnoticed as the system learns. In some cases, it can even be a deliberate choice, designed to prioritize certain outcomes. No matter how it happens, the results can harm people and deepen inequalities.
To prevent this, we need strong tools and ethical guidelines to detect and fix bias. By prioritizing fairness and inclusivity, we can ensure AI systems work better for everyone, not just a select few.
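One simple, widely cited detection tool is the "four-fifths rule" from US employment guidelines: a decision process is flagged if any group's positive-outcome rate falls below 80% of the most favoured group's rate. The sketch below is an illustrative implementation with made-up data, not a technique taken from the tutorial referenced next.

```python
from collections import defaultdict

def disparate_impact(outcomes, threshold=0.8):
    """Check the four-fifths rule.

    outcomes: list of (group, got_positive_decision) pairs.
    Returns (ratio, flagged): ratio is the lowest group's
    positive rate divided by the highest group's rate; flagged
    is True when that ratio falls below the threshold.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, positive in outcomes:
        totals[group] += 1
        positives[group] += int(positive)
    rates = {g: positives[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio < threshold

# Hypothetical decisions from some AI system.
decisions = [("A", True), ("A", True), ("B", True), ("B", False)]
ratio, flagged = disparate_impact(decisions)
print(f"Impact ratio = {ratio:.2f}, flagged as biased? {flagged}")
```

Real audits use richer metrics (equalized odds, calibration, and others), but even a check this simple can surface disparities before a system is deployed.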
Watch the video lecture (tutorial) titled “Algorithmic Bias: From Discrimination Discovery to Fairness-Aware Data Mining” that was recorded at the KDD conference in San Francisco in 2016 (three video parts in total). The tutorial surveys algorithmic bias, presenting its most common variants, with an emphasis on the algorithmic techniques and key ideas developed to derive efficient solutions. It covers two main complementary approaches: algorithms for discrimination discovery and discrimination prevention by means of fairness-aware data mining. The conclusion summarizes promising paths for future research.
LESSON 1.4: THE BIG PICTURE
Bias in AI isn’t just a technical issue—it’s about people. When we take steps to understand and address it, we create systems that respect everyone’s experiences and promote equality. By building fairer AI, we can make sure this powerful technology benefits all of us, now and in the future.
UNESCO’s graphic novel “Inside AI, An Algorithmic Adventure” invites you to explore both the opportunities and challenges that AI presents. The aim of the graphic novel is not only to make AI concepts accessible to everyone, but also to empower each individual to be aware of and protect the fundamental rights in the digital age.
UNESCO invites you to share your thoughts on #ArtificialIntelligence and to interact with them by commenting on, retweeting, and sharing this graphic novel on your social media networks, tagging @UNESCO.
