| Site: | OpenLearn Create |
| Course: | Trustworthy and Democratic AI - Responsible AI Development |
| Book: | Module 1: Responsible AI Development |
| Date: | Friday, 21 November 2025, 6:28 AM |
Responsible AI development is an ongoing process that involves continuous assessment, refinement, and adaptation of AI systems to address emerging ethical challenges and societal concerns. It is a critical component of building trust in AI and ensuring that AI technologies are used in ways that are beneficial and aligned with human values and rights. Responsible AI development is particularly important in sectors where AI has a significant impact on individuals, such as healthcare, finance, and criminal justice. In this module we showcase some of the best practices and communities in the field of responsible AI development.
In Module 1, we cover the following Lessons:
LESSON 1.1: HORIZON EUROPE PROJECT AI4GOV
Within the Horizon Europe project AI4Gov different tools were developed. We are presenting some of them in short below.
1. Understanding Bias in AI Policy Documents Across Different Languages
As governments around the world create rules and policies for how artificial intelligence (AI) should be used, they often write these documents in many different languages. This can make it hard to compare them or to spot important issues such as bias: cases where a policy might unfairly favor or disadvantage certain people or groups.
The presented research uses advanced AI tools (large language models) to help read and understand these policy documents in multiple languages. The goal is to find and classify different types of bias in these documents, making it easier to see what kinds of concerns different governments are focusing on when it comes to fairness in AI. By analyzing a large collection of AI policies from the OECD, a system was built that can automatically detect and compare bias across countries and languages. This helps make AI policymaking more transparent, fair, and inclusive—and supports better cooperation between countries.
The original scientific paper is titled: Multilingual Classification of AI-Oriented Policy Documents based on Bias Types; authors: Dr. George Manias, Dr. Chrysa Agapitou, Mr. Nemania Borovits, Dr. Alenka Guček, Mr. Andreas Karabetian, Dr. Matej Kovacic, Mr. Konstantinos Mavrogiorgos, Dr. Tanja Zdolšek Draksler, Prof. Willem-Jan van den Heuvel and Prof. Dimosthenis Kyriazis. Keywords: multilingual classification; bias identification; transformers; transfer learning; pre-trained language models.
Watch the video presentation and read more in the conference book of abstracts, Data for Policy 2025 (DfP'25) - Europe Book of Abstracts. (The full paper has been accepted for publication in the Data & Policy journal; a link will be added once it is published.)
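To make the task shape concrete, the sketch below shows what "classifying a policy text by bias type" looks like as code: text goes in, a bias-type label comes out. The research described above uses fine-tuned multilingual transformer models; this keyword-matching toy is only a simplified stand-in, and the bias categories and keywords are illustrative assumptions, not the paper's actual taxonomy.

```python
# Toy stand-in for a bias-type classifier: keyword overlap instead of a
# transformer model. Labels and keyword sets are illustrative assumptions.
BIAS_KEYWORDS = {
    "gender bias": {"gender", "women", "men", "género", "geschlecht"},
    "racial bias": {"race", "racial", "ethnic", "ethnicity"},
    "age bias": {"age", "ageing", "elderly", "young"},
}

def classify_bias(text: str) -> str:
    """Return the bias type whose keywords appear most often, or 'none detected'."""
    words = set(text.lower().split())
    scores = {label: len(words & kws) for label, kws in BIAS_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "none detected"
```

In the real system, `classify_bias` would be replaced by a pre-trained multilingual language model fine-tuned on labeled policy excerpts, which is what lets it work across languages without hand-written keyword lists.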
2. AI4Gov Holistic Regulatory Framework (HRF) - Making AI Fair and Safe for Everyone
The AI Holistic Regulatory Framework (HRF) is a guide created to help governments use AI in public services in a way that is fair, ethical, and transparent. It’s designed to protect people’s rights and make sure AI works for everyone—not just a few.
2.1 Why It Matters
AI is being used more and more in different sectors, like healthcare, education, and government services. But if it’s not carefully managed, it can lead to problems like bias, discrimination, or lack of privacy. The HRF helps prevent these issues.
2.2 How It Was Created
2.5 Goal
To make sure AI in public services is safe, fair, and inclusive—so it benefits everyone, not just a few. It’s about building a future where technology supports a just and equal society.
Self-Check Tools: Based on the HRF, special self-check tools have been created. These tools act like a checklist that helps teams review their AI projects step by step.
What Do These Tools Do?
It’s a smart way to build trust in AI and make sure it serves the public in a responsible and ethical way.
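The checklist idea described above can be sketched as a simple evaluator: each item is either done or still open, and the summary shows what a team still needs to review. The checklist items below are illustrative assumptions, not the actual HRF self-check questions.

```python
# Minimal checklist evaluator. The items are hypothetical examples of the
# kind of questions an HRF-style self-check might contain.
HRF_CHECKLIST = [
    "Documented the purpose and intended users of the AI system",
    "Assessed training data for bias and representativeness",
    "Defined a human-oversight procedure",
    "Planned how decisions are explained to affected people",
]

def review_project(completed: set) -> list:
    """Return the checklist items that still need attention."""
    return [item for item in HRF_CHECKLIST if item not in completed]
```

A team would mark items as completed and rerun the review until nothing remains open, which mirrors the step-by-step review process the self-check tools support.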
LESSON 1.2: OTHER EXAMPLES AND PRACTICES WORTH EXPLOITING
1. EU AI Act Compliance Checker
The EU AI Act Compliance Checker is a free, interactive tool that helps you figure out whether your AI system needs to follow the rules set by the EU Artificial Intelligence Act.
How It Works
You answer a few simple questions about your AI system. The tool tells you if your system is likely to be regulated, and what kind of rules might apply. It only takes about 10 minutes to complete.
Important Note
This tool gives you a general idea, but it’s not legal advice. For detailed guidance, it’s best to consult a legal expert or check your country’s official resources.
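A questionnaire-style checker like this can be structured as a short decision tree: a few yes/no answers map to an indicative risk category. The questions and the mapping below are illustrative assumptions, not the official tool's actual decision logic, and like the tool itself this sketch is not legal advice.

```python
# Illustrative decision tree for a questionnaire-style checker.
# The answer keys and risk categories are hypothetical simplifications.
def assess_ai_system(answers: dict) -> str:
    """Map yes/no answers about an AI system to an indicative risk category."""
    if answers.get("banned_practice"):        # e.g. social scoring
        return "prohibited"
    if answers.get("high_risk_use_case"):     # e.g. recruitment, credit scoring
        return "high-risk: strict obligations likely apply"
    if answers.get("interacts_with_people"):  # e.g. chatbots
        return "limited risk: transparency obligations likely apply"
    return "minimal risk: likely few or no obligations"
```

The real compliance checker asks more nuanced questions, but the overall shape is the same: each answer narrows down which part of the regulation is likely to apply.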
2. AI Impact Assessment - A tool to set up responsible AI projects (version 2.0, December 2024)
3. More examples:
You have completed Module 1. You can now continue with Module 2.