Module 2: Real-Life Examples of Bias

LESSON 2.4: BIAS IN HIRING ALGORITHMS - THE CASE OF AMAZON AND BEYOND

In 2014, Amazon developed an experimental hiring tool that used AI to review job applications and rate candidates from one to five stars. The tool was designed to streamline recruitment for software developer and other technical roles, aiming to provide efficient, objective assessments. However, by 2015, Amazon discovered that the system was not evaluating candidates in a gender-neutral way, revealing a significant bias against women.

The core of the problem lay in the AI model’s training data: ten years’ worth of applications submitted to Amazon, during which the majority of candidates were men. As a result, the algorithm “learned” that male applicants were more desirable for technical roles, penalizing resumes that suggested the applicant might be female. For instance, it downgraded applications that mentioned terms like “women’s chess club captain” or all-women’s colleges, reflecting inherent biases in the historical data rather than actual candidate qualifications. 
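To make this mechanism concrete, the toy sketch below shows how a text classifier trained on historically imbalanced hiring decisions can pick up a penalty for gender-correlated terms. It is a hypothetical illustration, not Amazon's actual system: the resumes, the hired/rejected labels, and the use of a simple scikit-learn model are all assumptions made for the example.

```python
# Hypothetical sketch (not Amazon's actual system): a toy text classifier
# trained on historically imbalanced hiring decisions. Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" data: past resume text and whether the candidate was hired.
# Because most past hires were men, tokens correlated with women appear
# mostly in the rejected class.
resumes = [
    "software engineer java distributed systems",         # hired
    "backend developer python aws chess club captain",    # hired
    "machine learning engineer c++ robotics",             # hired
    "software engineer java women's chess club captain",  # rejected
    "python developer women's college graduate",          # rejected
    "data engineer sql spark",                            # hired
]
hired = [1, 1, 1, 0, 0, 1]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the token "women" ends up with a negative
# coefficient purely because of the historical imbalance, not because it
# says anything about a candidate's qualifications.
for token, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{token:12s} {weight:+.3f}")
```

Nothing in the data says women are less qualified; the negative weight appears only because the historical labels were skewed, which is exactly how past hiring patterns get baked into a model.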

Once Amazon detected this bias, it attempted to adjust the system to be gender-neutral. There was no guarantee, however, that the model would not find other ways to sort candidates unfairly, and management ultimately lost confidence in the system's fairness and reliability, discontinuing the tool in 2017.

Despite Amazon’s decision to end its use of AI for resume review, similar automated screening tools remain widely used across industries. Most rely on basic pattern matching, filtering candidates by how well their resumes match keywords in the job requirements, and some integrate machine learning to assess skill relevance. This approach has its own vulnerabilities, however: some candidates have discovered that pasting the job description or relevant keywords into their resume in a white font (invisible to human reviewers, but still present in the extracted text) can trick the system into prioritizing them.
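As a minimal illustration of why this trick works, the hypothetical Python sketch below scores resumes by counting matched keywords. The keyword list, helper function, and resume strings are invented for the example; the point is only that hidden white-font text survives text extraction and inflates the score.

```python
# Hypothetical sketch of a naive keyword screener and of the "white font"
# trick: text extracted from a resume file includes hidden words that a
# human reviewer never sees when the document is rendered.
JOB_KEYWORDS = {"python", "kubernetes", "terraform", "aws", "docker"}

def keyword_score(extracted_text: str) -> int:
    """Count how many required keywords appear in the extracted resume text."""
    words = set(extracted_text.lower().split())
    return len(JOB_KEYWORDS & words)

visible_resume = "Experienced Python developer with an AWS background"

# The same resume after pasting the job keywords in white font: invisible
# when rendered, but still present once the text is extracted.
padded_resume = visible_resume + " python kubernetes terraform aws docker"

print(keyword_score(visible_resume))  # 2
print(keyword_score(padded_resume))   # 5 -- the padded resume is ranked higher
```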

These examples highlight several critical issues in AI-driven hiring: 

  1. Historical Bias in Training Data: When AI systems are trained on past hiring patterns that reflect gender or other demographic imbalances, they can inadvertently perpetuate these biases. 
  2. Data Manipulation: Simple tweaks like embedding invisible keywords reveal how candidates can exploit algorithmic weaknesses, underscoring the need for transparent and fair assessment methods. 
  3. Lack of Trust in AI: Amazon’s abandonment of the tool shows that even with adjustments, management may be wary of relying on AI if bias cannot be fully addressed. 

This case exemplifies the importance of using balanced, representative data and incorporating ongoing monitoring to prevent bias in hiring algorithms. As automated hiring tools become more common, companies must ensure that these systems are fair, transparent, and adaptable to address any emerging biases or unintended manipulation.


Dastin, J. (2018). Insight - Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.

Dastin, J. (2022). Amazon scraps secret AI recruiting tool that showed bias against women. In Ethics of Data and Analytics (pp. 296-299). Auerbach Publications.

Schneier, B. (2023). Hacking AI Resume Screening with Text in a White Font. Schneier on Security blog.