Module 1: Responsible AI Development

LESSON 1.1: HORIZON EUROPE PROJECT AI4GOV

Within the Horizon Europe project AI4Gov, a range of tools was developed. Some of them are briefly presented below.

1. Understanding Bias in AI Policy Documents Across Different Languages 

As governments around the world create rules and policies for how artificial intelligence (AI) should be used, they often write these documents in many different languages. This makes it hard to compare them or to spot important issues like bias, where a policy might unfairly favor or disadvantage certain people or groups.

This research uses large language models to read and analyze these policy documents across languages. The goal is to identify and classify the types of bias they address, making it easier to see which fairness concerns different governments focus on when it comes to AI. By analyzing a large collection of AI policies from the OECD, the team built a system that automatically detects and compares bias across countries and languages. This makes AI policymaking more transparent, fair, and inclusive, and supports better cooperation between countries.
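To make the general approach concrete, here is a minimal sketch of multilingual bias-type classification with a pre-trained transformer. It is not the AI4Gov system: the model choice (joeddav/xlm-roberta-large-xnli), the policy excerpt, and the candidate bias labels are all illustrative assumptions, and the paper's keywords suggest transfer learning with fine-tuned models rather than the zero-shot inference shown here.

```python
# Minimal sketch: classifying a policy passage by bias type with a
# multilingual pre-trained model. The labels below are illustrative,
# NOT the taxonomy used in the AI4Gov paper, and the model choice is
# an assumption (any multilingual NLI model would behave similarly).
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",  # XLM-R fine-tuned on multilingual NLI
)

# Hypothetical excerpt from an AI policy document (any language works).
excerpt = (
    "Automated decision systems shall be audited to ensure they do not "
    "systematically disadvantage applicants based on gender or ethnicity."
)

# Illustrative bias types; the paper defines its own classes.
candidate_labels = ["gender bias", "ethnic bias", "age bias", "no bias concern"]

result = classifier(excerpt, candidate_labels, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

On a real corpus, one would fine-tune such a model on labelled policy passages rather than relying on zero-shot inference.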

The original scientific paper is titled “Multilingual Classification of AI-Oriented Policy Documents based on Bias Types”; authors: Dr. George Manias, Dr. Chrysa Agapitou, Mr. Nemania Borovits, Dr. Alenka Guček, Mr. Andreas Karabetian, Dr. Matej Kovacic, Mr. Konstantinos Mavrogiorgos, Dr. Tanja Zdolšek Draksler, Prof. Willem-Jan van den Heuvel, and Prof. Dimosthenis Kyriazis. Keywords: multilingual classification; bias identification; transformers; transfer learning; pre-trained language models.

Watch the video presentation and read more in the conference book of abstracts Data for Policy 2025 (DfP'25) - Europe Book of Abstracts. The full paper has been accepted for publication in the Data & Policy journal; a link will be added once it is published.

2. AI4Gov Holistic Regulatory Framework (HRF) - Making AI Fair and Safe for Everyone

The AI4Gov Holistic Regulatory Framework (HRF) is a guide created to help governments use AI in public services in a way that is fair, ethical, and transparent. It’s designed to protect people’s rights and make sure AI works for everyone, not just a few.

2.1 Why It Matters

AI is increasingly used in sectors such as healthcare, education, and government services. But if it’s not carefully managed, it can lead to problems like bias, discrimination, or loss of privacy. The HRF helps prevent these issues.

2.2 How It Was Created
  • Researchers studied laws, ethics, and real-life experiences from people in Greece, Slovenia, and Spain. 
  • They talked to underrepresented groups to understand how they feel about public services and AI. 
  • They used this input to build a framework with 15 key areas that guide how AI should be used responsibly. 

2.3 The HRF focuses on:

  • Fairness and non-discrimination 
  • Privacy and data protection 
  • Human oversight (keeping people in control) 
  • Transparency and explainability (making AI decisions clear) 
  • Safety and accountability 
  • Public engagement and awareness 
  • Sustainability and societal benefit 

2.4 How It Helps
  • Ensures AI follows laws like GDPR and the EU AI Act 
  • Helps governments self-check their AI systems using tools like ALTAI and Human Rights Impact Assessments 
  • Promotes trustworthy AI that respects people’s rights 
  • Encourages regular updates to keep up with new technology

2.5 Goal
To make sure AI in public services is safe, fair, and inclusive, so that it benefits everyone, not just a few. It’s about building a future where technology supports a just and equal society.

Self-Check Tools Based on the HRF: To put the framework into practice, special self-check tools have been created. These tools work like a checklist that helps teams review their AI projects step by step.

What Do These Tools Do? 

  • They help developers evaluate their AI systems during every stage—from design to deployment and beyond. 
  • The tools ask questions about 15 important areas like fairness, privacy, transparency, and safety. 
  • Developers answer using a rating scale (1 to 7) and simple Yes/No questions, as in the sketch below.
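
To illustrate how such a checklist could be represented in software, here is a minimal sketch. The two question formats (a 1-to-7 rating and Yes/No) come from the description above; the example areas, questions, and the review threshold are hypothetical, not the actual AI4Gov tools.

```python
# Minimal sketch of an HRF-style self-check item. The two question
# formats (1-7 rating and Yes/No) come from the lesson text; the
# concrete questions, area names, and the review threshold below are
# hypothetical assumptions, not the actual AI4Gov tools.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CheckItem:
    area: str                      # one of the 15 HRF areas, e.g. "Fairness"
    question: str
    rating: Optional[int] = None   # 1 (poor) .. 7 (excellent), for scale items
    answer: Optional[bool] = None  # for Yes/No items

def needs_review(item: CheckItem, min_rating: int = 5) -> bool:
    """Flag low ratings or 'No' answers (the threshold is an assumption)."""
    if item.rating is not None:
        return item.rating < min_rating
    return item.answer is False

checklist = [
    CheckItem("Fairness", "Was the system tested for disparate impact?", answer=True),
    CheckItem("Transparency", "How clearly can users understand a decision?", rating=3),
    CheckItem("Privacy", "Is personal data minimised at collection?", answer=False),
]

for item in checklist:
    status = "REVIEW" if needs_review(item) else "ok"
    print(f"[{status:6}] {item.area}: {item.question}")
```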

These tools are a practical way to build trust in AI and make sure it serves the public responsibly and ethically.