LESSON 1.1: HORIZON EUROPE PROJECT AI4GOV
Within the Horizon Europe project AI4Gov, several tools were developed. We briefly present some of them below.
1. Understanding Bias in AI Policy Documents Across Different Languages
As governments around the world create rules and policies for how artificial intelligence (AI) should be used, they often write these documents in many different languages. This can make it hard to compare them or to spot important issues such as bias, where a policy might unfairly favor or disadvantage certain people or groups.
The presented research uses large language models to read and understand these policy documents in multiple languages. The goal is to detect and classify different types of bias in the documents, making it easier to see which fairness concerns different governments focus on when regulating AI. By analyzing a large collection of AI policies from the OECD, the authors built a system that can automatically detect and compare bias across countries and languages. This helps make AI policymaking more transparent, fair, and inclusive, and it supports better cooperation between countries. An illustrative code sketch of the general technique appears after the paper details below.
The original scientific paper is titled: Multilingual Classification of AI-Oriented Policy Documents based on Bias Types; authors: Dr. George Manias, Dr. Chrysa Agapitou, Mr. Nemania Borovits, Dr. Alenka Guček, Mr. Andreas Karabetian, Dr. Matej Kovacic, Mr. Konstantinos Mavrogiorgos, Dr. Tanja Zdolšek Draksler, Prof. Willem-Jan van den Heuvel and Prof. Dimosthenis Kyriazis. Keywords: multilingual classification; bias identification; transformers; transfer learning; pre-trained language models.
Watch the video presentation and read more in the conference book of abstracts Data for Policy 2025 (DfP'25) - Europe Book of Abstracts. (The full paper has been accepted for publication in the Data & Policy journal; a link will be added once it is published.)
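The paper itself is the authoritative source for the models and bias taxonomy used. As a rough illustration of the general technique named in its keywords (pre-trained multilingual transformers plus transfer learning), the sketch below loads a multilingual model and attaches a classification head for bias types. The model choice (xlm-roberta-base), the label set, and the example sentences are assumptions made for this lesson, not details taken from the paper.

```python
# A minimal sketch of multilingual bias-type classification with a
# pre-trained transformer. The model choice (xlm-roberta-base), the label
# set, and the example texts are assumptions made for this lesson; they
# are NOT taken from the AI4Gov paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical bias-type labels; the paper defines its own taxonomy.
LABELS = ["gender_bias", "racial_bias", "socioeconomic_bias", "no_bias"]

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=len(LABELS),  # the classification head starts untrained
)

def classify(texts):
    """Tokenize policy excerpts (in any language) and predict a bias type."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return [LABELS[i] for i in logits.argmax(dim=-1).tolist()]

# Made-up excerpts in English and French.
print(classify([
    "Automated hiring systems must be audited for disparate impact.",
    "Les systèmes d'IA doivent respecter la vie privée des citoyens.",
]))
```

Out of the box, the classification head is randomly initialized, so these predictions are meaningless; the transfer-learning step, fine-tuning on excerpts labeled by bias type (for example, from the OECD policy collection mentioned above), is what turns the generic multilingual model into a usable classifier.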
2. AI4Gov Holistic Regulatory Framework (HRF): Making AI Fair and Safe for Everyone
The AI4Gov Holistic Regulatory Framework (HRF) is a guide created to help governments use AI in public services in a way that is fair, ethical, and transparent. It is designed to protect people’s rights and to make sure AI works for everyone, not just a few.
2.1 Why It Matters
AI is being used more and more in different sectors, like healthcare, education, and government services. But if it’s not carefully managed, it can lead to problems like bias, discrimination, or lack of privacy. The HRF helps prevent these issues.
2.2 How It Was Created

2.5 Goal
To make sure AI in public services is safe, fair, and inclusive, so that it benefits everyone, not just a few. It’s about building a future where technology supports a just and equal society.
Self-Check Tools Based on the HRF
Based on the HRF, special self-check tools have been created. These tools work like a checklist that helps teams review their AI projects step by step. A hypothetical sketch of such a checklist appears at the end of this lesson.
What Do These Tools Do?
They guide teams through this review and offer a smart way to build trust in AI and to make sure it serves the public in a responsible and ethical way.
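To make the checklist idea more concrete, below is a minimal, hypothetical sketch of how such a step-by-step self-check tool could be structured in code. The categories, questions, and output format are invented for this lesson; they do not reproduce the actual AI4Gov self-check tools, which are based on the full HRF.

```python
# A hypothetical sketch of an HRF-style self-check checklist.
# The categories and questions below are invented for illustration;
# they do NOT reproduce the actual AI4Gov self-check tools.
from dataclasses import dataclass

@dataclass
class CheckItem:
    category: str   # e.g. fairness, transparency, privacy
    question: str   # yes/no question the project team answers

CHECKLIST = [
    CheckItem("fairness", "Has the training data been reviewed for bias?"),
    CheckItem("transparency", "Can the system's decisions be explained to citizens?"),
    CheckItem("privacy", "Is personal data minimized and protected?"),
]

def run_self_check(answers: dict[str, bool]) -> None:
    """Walk the checklist and flag items answered 'no' for follow-up."""
    for item in CHECKLIST:
        ok = answers.get(item.question, False)
        status = "OK" if ok else "NEEDS ATTENTION"
        print(f"[{status}] ({item.category}) {item.question}")

# Example: a team reviews its AI project step by step.
run_self_check({
    "Has the training data been reviewed for bias?": True,
    "Can the system's decisions be explained to citizens?": False,
    "Is personal data minimized and protected?": True,
})
```

Flagged items point the team to the parts of their project that need more work before the AI system can be considered responsible and ethical under the framework.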