LESSON 1.2: WHERE BIAS BEGINS: HUMAN BIAS
Bias starts with us. It’s the assumptions, preferences, or prejudices we form over time, often without even realizing it. These biases influence how we collect and interpret data—the same data used to train AI systems.
Since AI learns from real-world data, any existing inequalities or stereotypes in that data become part of the AI. For example, if historical hiring data shows a preference for certain demographics, an AI trained on it might continue that pattern. This is how bias can pass from people to data and then to the AI, amplifying existing problems. To stop this cycle, we need to be thoughtful and inclusive when gathering data to ensure AI systems are built on a foundation of fairness.
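To make this propagation concrete, here is a minimal Python sketch using entirely made-up, synthetic hiring records (the group names, hire rates, and the naive "model" are all hypothetical, chosen only for illustration). It shows how a system that simply learns patterns from skewed historical data will reproduce that skew when judging new, equally qualified applicants.

import random

random.seed(0)

# Synthetic "historical" hiring data: group A was hired 70% of the time,
# group B only 30%, even though scores are drawn from the same distribution.
def make_record(group, hire_rate):
    return {"group": group,
            "score": random.uniform(0, 1),          # identical qualifications
            "hired": random.random() < hire_rate}   # biased historical outcome

history = [make_record("A", 0.7) for _ in range(500)] + \
          [make_record("B", 0.3) for _ in range(500)]

# "Training": a naive model that just learns the historical hire rate per group.
learned_rates = {}
for g in ("A", "B"):
    records = [r for r in history if r["group"] == g]
    learned_rates[g] = sum(r["hired"] for r in records) / len(records)

# "Prediction": new applicants from both groups, equally qualified.
for g in ("A", "B"):
    print(f"Predicted hire probability for group {g}: {learned_rates[g]:.2f}")
# The model mirrors the historical disparity rather than the applicants' merit.

Real AI systems are far more complex than this toy example, but the underlying risk is the same: if the training data encodes a biased pattern, the model has no built-in reason not to repeat it.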
Watch the panel discussion titled “Human+AI collaboration: Reskilling and upskilling for the future,” recorded in summer 2024 in Ljubljana, Slovenia. The panel explores the symbiotic relationship between human translators and AI, addressing how professionals can adapt and enhance their skills in anticipation of future developments in AI translation. The session encourages discussion of education and skills development and closes with an audience Q&A.