
Module 5: Virtualized Unbiasing Framework


Description

The Virtualized Unbiasing Framework (VUF) was designed within the AI4Gov project to function as a visual catalog that synthesizes diverse tools for detecting and mitigating biases in AI systems. Structured as a dynamic visual synthesis, the catalog offers a comprehensive overview of bias mitigation tools, categorized by functionality and application.

Serving as a visual catalog, it facilitates exploration, comparison, and informed tool selection, thereby fostering a more effective approach to bias mitigation. Additionally, the Bias Detector Toolkit is part of the VUF; within it, we are developing bias detection tools for the specific use-case contexts of the project, and it will be presented in a later edition of the MOOC.

The Virtualized Unbiasing Framework is a holistic application focused on explaining AI bias and equipping developers with an easy-to-navigate and visually organized catalog. It consists of the scrollytelling application, real-life examples, and the catalog of methods and tools for bias mitigation.

In Module 5, we cover:

Lesson 5.1: Scrollytelling Application

Lesson 5.2: Stages of AI Training

Lesson 5.3: Real-Life Examples

Lesson 5.4: Catalog of Bias Detection and Mitigation Strategies

LESSON 5.1: SCROLLYTELLING APPLICATION 


The first section of the application is the scrollytelling part, where the user is introduced to bias and AI in general. Scrollytelling, also known as scroll-driven or scroll-based storytelling, is a web design technique that uses the scrolling action of a webpage to reveal content in a narrative or visually engaging way. Instead of presenting information all at once on a single page, scrollytelling unfolds content gradually as the user scrolls down the page.

By strategically organizing and presenting data visually, this process enhances comprehension and facilitates rapid understanding. To introduce the concepts of bias, AI, and bias in AI, we developed an animation-based scrollytelling experience (see Figure 1).

The scrollytelling application consists of stages that explain what AI is, what bias is, what bias in AI is, and how it affects model outputs.

Figure 1: Scrollytelling example

LESSON 5.2: STAGES OF AI TRAINING

The second section is a step-by-step presentation of bias in training an ML model. This process unfolds through several key stages. It begins with data collection, where relevant datasets are acquired to feed into the model. Following this, data preprocessing is of vital importance, focusing on the cleaning, normalization, and transformation of the raw data to enable effective learning. Feature selection follows, where meaningful attributes are chosen to enhance the model's performance and reduce complexity. Subsequently, the model training phase involves feeding the processed data into the chosen algorithm or architecture so that it can learn patterns and relationships. The trained model is then evaluated using a separate validation set to gauge its accuracy and generalization capabilities. Once deemed satisfactory, the model proceeds to deployment, making it operational for real-world applications. Continuous monitoring and updates may follow to ensure its ongoing effectiveness, adapting to changes in data distribution or evolving problem domains. This cyclical process of data-driven learning, from collection to deployment, forms the foundation of ML model training.
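As a minimal illustration of these stages, the following Python sketch walks through collection, preprocessing, feature selection, training, and evaluation. It uses scikit-learn and a synthetic dataset as stand-ins, since the module does not prescribe any particular library or data.

    # A minimal sketch of the ML training stages described above,
    # using scikit-learn on a synthetic dataset (illustrative only).
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.metrics import accuracy_score

    # 1. Data collection: here, a synthetic stand-in for a real dataset.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # 2-4. Preprocessing, feature selection, and model training as one pipeline.
    model = Pipeline([
        ("scale", StandardScaler()),               # preprocessing: normalization
        ("select", SelectKBest(f_classif, k=10)),  # feature selection
        ("clf", LogisticRegression()),             # model training
    ])

    # 5. Evaluation on a held-out validation set.
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, random_state=0
    )
    model.fit(X_train, y_train)
    print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))

    # 6-7. Deployment and monitoring would follow, e.g. serializing the
    # pipeline and periodically re-evaluating it on fresh data.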

LESSON 5.3: REAL-LIFE EXAMPLES

The third lesson provides real-life examples of bias occurring in different business sectors. Bias in real-world applications of ML has manifested in various forms, raising ethical concerns and highlighting the importance of responsible AI development. For each example, there is:

  • A short description that summarizes the problem.
  • The solution that either solved or mitigated the problem.
  • Some reference material for further research on the topic. 

Real-life examples of bias in policies serve as digestible illustrations that underscore the critical importance of understanding and addressing systemic inequalities. For each step, a short description is provided along with ways that bias can occur. This section is intended to provide policy makers, stakeholders, and ML engineers with the information needed to prevent the occurrence of bias in their workflows.

Consider standardized testing in education, for instance, where biases can disproportionately disadvantage certain demographic groups. Such policies can perpetuate societal disparities and hinder equal opportunities. The importance of recognizing these biases lies in the potential to empower a general audience. When individuals comprehend the tangible impacts of biased policies, they are better equipped to advocate for change, engage in informed discussions, and challenge discriminatory practices. By offering relatable examples, we empower the general audience to navigate and contribute to conversations about fairness, justice, and equitable policy reform in their communities and beyond. Real-life examples are showcased in the D4.3 demonstrator.

Example from practice (AI4Gov project): OECD documents chatbot

To raise awareness of the importance of ethics in AI and of bias prevention approaches, an AI4Gov consortium partner used the OECD papers, consisting of various national AI policies and strategies, as one of the data sources.

The OECD maintains an online repository of national AI policies and strategies, with over 800 AI policy initiatives from 69 countries, territories, and the EU. We have developed tools for web scraping these documents and converting them to Markdown format; those documents have then been ingested into another platform and enriched with automatic classification.
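The project deliverables do not prescribe a particular implementation, but a scraping-and-conversion step of this kind could look roughly like the Python sketch below. The URL is a hypothetical placeholder, and the use of the requests and markdownify libraries is an assumption made for illustration.

    # Illustrative sketch of fetching a policy page and converting it to
    # Markdown; the URL and libraries are assumptions, not the project's code.
    import requests
    from markdownify import markdownify as md

    url = "https://example.org/ai-policy-initiative"  # hypothetical page
    html = requests.get(url, timeout=30).text

    # Convert the HTML to Markdown for downstream ingestion.
    markdown_doc = md(html, heading_style="ATX")

    with open("policy_doc.md", "w", encoding="utf-8") as f:
        f.write(markdown_doc)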

AI4Gov is also developing "Policy-Oriented Analytics and AI Algorithms" in the context of the task “Improve Citizen Engagement and Trust utilising NLP”. The aim is to develop several NLP algorithms to analyse large volumes of text data and to assist the respective AI experts. This component consists of the following sub-components:

  • Question Answering Service: this service will provide the necessary tooling for AI experts, developers, and policy makers to perform queries on the OECD papers, among other things to raise awareness of AI solutions (a generic retrieval sketch follows after this list).
  • Multilingual Bias Classification: this tool will support the multilingual identification and classification of bias in the OECD papers, providing all stakeholders with information and enhanced knowledge of the types of bias that governments and public authorities take into consideration in their AI policies. It should be noted that, at this point of the project, a standalone implementation of this tool is available; its integration with the Policy-Oriented Analytics and AI Algorithms component will follow as the project progresses.
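To make the Question Answering Service concrete, the sketch below retrieves the passage most relevant to a query using TF-IDF similarity. This is a generic retrieval baseline chosen purely for illustration; the example passages and the answer function are assumptions, not the AI4Gov implementation.

    # Generic retrieval baseline for question answering over documents;
    # illustrative only, not the AI4Gov implementation.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical stand-ins for ingested OECD policy passages.
    passages = [
        "The national strategy funds research into trustworthy AI.",
        "Public authorities must audit algorithmic systems for bias.",
        "The policy promotes open data for AI innovation.",
    ]

    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(passages)

    def answer(query: str) -> str:
        """Return the passage most similar to the query."""
        q_vec = vectorizer.transform([query])
        scores = cosine_similarity(q_vec, doc_matrix)[0]
        return passages[scores.argmax()]

    print(answer("How are AI systems checked for bias?"))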

The architecture of the Policy-Oriented Analytics and AI Algorithms component is depicted in the following figure. For the sake of completeness, a simplified version of the architecture, showcasing the internal workflow of this component, is also provided below.

LESSON 5.4: CATALOG OF BIAS DETECTION AND MITIGATION STRATEGIES

The Catalog builds upon the training steps, providing tools and mitigation techniques for each step of training AI models. The central idea is not to create another text-heavy framework, but to provide a visual summary of existing bias detection and mitigation strategies in an approachable and easy-to-grasp format.
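To make this organizing idea concrete, the sketch below models catalog entries as a simple mapping from training stage to example techniques. The stage names and the example techniques (reweighing, disparate impact testing, adversarial debiasing, equalized-odds post-processing, all standard in the fairness literature) are illustrative placeholders, not the catalog's actual contents.

    # Illustrative data structure for catalog entries: each technique is
    # tagged with the pipeline stage it applies to. Entries are placeholders,
    # not the catalog's actual contents.
    from dataclasses import dataclass

    @dataclass
    class CatalogEntry:
        technique: str
        stage: str   # pipeline stage the technique applies to
        kind: str    # "detection" or "mitigation"

    catalog = [
        CatalogEntry("Reweighing training samples", "data preprocessing", "mitigation"),
        CatalogEntry("Disparate impact testing", "evaluation", "detection"),
        CatalogEntry("Adversarial debiasing", "model training", "mitigation"),
        CatalogEntry("Equalized-odds post-processing", "deployment", "mitigation"),
    ]

    # Group entries by stage, mirroring how the visual catalog is organized.
    by_stage: dict[str, list[str]] = {}
    for e in catalog:
        by_stage.setdefault(e.stage, []).append(f"{e.technique} ({e.kind})")

    for stage, entries in by_stage.items():
        print(stage, "->", entries)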

For the Bias Detector Catalog, we conducted an extensive literature review of bias mitigation techniques, which are collected in our GitLab repo. They serve as the input for the interactive visual synthesis on the AI4Gov platform.

Watch the demo of the Bias Detector Catalog, which showcases examples of tools that can be used to detect or mitigate bias.


Congratulations on completing Module 5! Now you can advance to Module 6. Just one more to go!