Module 5: Virtualized Unbiasing Framework
The Virtualized Unbiasing Framework (VUF) is a holistic application focused on explaining AI bias and equipping developers with an easy-to-navigate, visually organized catalog. It consists of a scrollytelling application, real-life examples, and a catalog of methods and tools for bias mitigation. Serving as a visual catalog, it facilitates exploration, comparison, and informed tool selection, thereby fostering a more effective approach to bias mitigation. Additionally, the Bias Detector Toolkit is part of the VUF; within it we are developing bias detection tools for specific contexts of use cases on the project, to be presented in a later edition of the MOOC.
In Module 5, we cover:
Lesson 5.1: Scrollytelling Application
Lesson 5.2: Stages of AI Training
Lesson 5.3: Real Life Examples
Lesson 5.4: Catalog of Bias Detection and Mitigation Strategies
The third lesson provides real-life examples of bias occurrences in different business sectors. Bias in real-world applications of ML has manifested in various forms, raising ethical concerns and highlighting the importance of responsible AI development. For each example, there is:
- A short description that summarizes the problem.
- The solution that either solved or mitigated the problem.
- Some reference material for further research on the topic.
Real-life examples of bias in policies serve as digestible illustrations that underscore the critical importance of understanding and addressing systemic inequalities. For each step, a short description is provided along with ways that bias can occur. The intended use of this section is to give policy makers, stakeholders, and ML engineers the information they need to prevent bias from occurring in their workflows.
Consider standardized testing in education, for instance, where biases can disproportionately disadvantage certain demographic groups. Such policies can perpetuate societal disparities and hinder equal opportunities. The importance of recognizing these biases lies in the potential to empower a general audience. When individuals comprehend the tangible impacts of biased policies, they are better equipped to advocate for change, engage in informed discussions, and challenge discriminatory practices. By offering relatable examples, we empower the general audience to navigate and contribute to conversations about fairness, justice, and equitable policy reform in their communities and beyond. Real-life examples are showcased in the D4.3 demonstrator.
Example from practice (AI4GOV project): OECD documents chatbot
To raise awareness of the importance of ethics in AI and of bias prevention approaches, an AI4Gov consortium partner used the OECD papers, which consist of various national AI policies and strategies, as one of the data sources.
The OECD maintains a collection of national AI policies and strategies: an online repository with over 800 AI policy initiatives from 69 countries, territories, and the EU. We have developed tools for web scraping these documents and converting them to Markdown format; the resulting documents have been ingested into another platform and enriched with automatic classification.
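The project's actual scraping and conversion tooling is not detailed here. As a minimal sketch of the HTML-to-Markdown step, assuming only the Python standard library (the class and function names, and the small set of handled tags, are illustrative, not the project's real implementation):

```python
from html.parser import HTMLParser


class HTMLToMarkdown(HTMLParser):
    """Toy converter: maps headings and paragraphs to Markdown lines."""

    HEADINGS = {"h1": "#", "h2": "##", "h3": "###"}

    def __init__(self):
        super().__init__()
        self.lines = []
        self._prefix = ""  # Markdown prefix for the element being read

    def handle_starttag(self, tag, attrs):
        if tag in self.HEADINGS:
            self._prefix = self.HEADINGS[tag] + " "
        elif tag == "p":
            self._prefix = ""

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.lines.append(self._prefix + text)
            self._prefix = ""

    def markdown(self):
        # Blank line between blocks, as Markdown expects
        return "\n\n".join(self.lines)


def html_to_markdown(html):
    parser = HTMLToMarkdown()
    parser.feed(html)
    return parser.markdown()
```

A production pipeline would also handle links, lists, tables, and character encoding, but the core idea of mapping HTML elements onto Markdown constructs is the same.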
AI4Gov is also developing "Policy-Oriented Analytics and AI Algorithms" in the context of Task: “Improve Citizen Engagement and Trust utilising NLP”. The aim is to develop several NLP algorithms in order to analyse large volumes of text data and also assist the respective AI experts. This particular component consists of the following sub-components:
- Question Answering Service: this service will allow AI experts, developers, and policy makers to perform queries on the OECD papers, among other things to raise their awareness of existing AI solutions.
- Multilingual Bias Classification: this tool will support the multilingual identification and classification of bias in the OECD papers, providing all stakeholders with information and enhanced knowledge of the types of bias that governments and public authorities take into consideration in their AI policies. It should be noted that, at this point of the project, a standalone implementation of this tool is available; its integration with the Policy-Oriented Analytics and AI Algorithms component will follow as the project progresses.
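The bias classification tool itself is not described in detail above. As a toy illustration of the underlying idea only, the sketch below tags policy text with bias-type labels via keyword matching; the label set, keyword lists, and function name are hypothetical assumptions, and the real tool would rely on a trained multilingual NLP model rather than keywords:

```python
# Hypothetical label set and multilingual keyword lists, for illustration only.
BIAS_KEYWORDS = {
    "gender bias": ["gender", "género", "geschlecht"],
    "age bias": ["age", "edad", "alter"],
    "racial bias": ["race", "ethnicity", "raza"],
}


def tag_bias_types(text):
    """Return the sorted list of bias labels whose keywords occur in the text."""
    lowered = text.lower()
    return sorted(
        label
        for label, keywords in BIAS_KEYWORDS.items()
        if any(kw in lowered for kw in keywords)
    )
```

Even this naive scheme shows the tool's interface: unstructured policy text in, a set of bias-type labels out, which downstream components can aggregate across the document collection.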
The architecture of the Policy-Oriented Analytics and AI Algorithms component is depicted in the following Figure. For the sake of completeness, a simplified version of the architecture, showcasing the internal workflow of this component, is also provided below.
