Module 1: Responsible AI Development
Responsible AI development is an ongoing process of continuous assessment, refinement, and adaptation of AI systems to address emerging ethical challenges and societal concerns. It is a critical component of building trust in AI and of ensuring that AI technologies are used in ways that are beneficial and aligned with human values and rights. Responsible AI development is particularly important in sectors where AI has a significant impact on individuals, such as healthcare, finance, and criminal justice. In this module we showcase some of the best practices and communities in the field of responsible AI development.
In Module 1, we cover the following Lessons:
LESSON 1.2: OTHER EXAMPLES AND PRACTICES WORTH EXPLORING
1. EU AI Act Compliance Checker
The EU AI Act Compliance Checker is a free, interactive tool that helps you figure out whether your AI system needs to follow the rules set by the EU Artificial Intelligence Act.
How It Works
You answer a few simple questions about your AI system, and the tool tells you whether your system is likely to be regulated and what kind of rules might apply. It takes about 10 minutes to complete. A simplified sketch of this question-and-answer logic follows the note below.
Important Note
This tool gives you a general idea, but it’s not legal advice. For detailed guidance, it’s best to consult a legal expert or check your country’s official resources.
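To make the questionnaire idea concrete, here is a minimal sketch in Python of how answers to a few yes/no questions might map to risk categories in the spirit of the EU AI Act. The questions, category names, and mapping are simplified illustrative assumptions, not the actual tool's logic, and certainly not legal advice.

```python
# Illustrative sketch only: the questions, categories, and mapping below are
# simplified assumptions for teaching purposes, not the real tool's logic.

QUESTIONS = [
    ("social_scoring", "Does the system perform social scoring of individuals?"),
    ("high_risk_domain", "Is the system used in a sensitive domain "
                         "(e.g. hiring, credit scoring, law enforcement)?"),
    ("interacts_with_people", "Does the system interact directly with people "
                              "(e.g. a chatbot) or generate synthetic content?"),
]

def classify(answers: dict[str, bool]) -> str:
    """Map yes/no answers to a (highly simplified) risk category."""
    if answers.get("social_scoring"):
        return "prohibited (unacceptable risk)"
    if answers.get("high_risk_domain"):
        return "high risk: strict obligations are likely to apply"
    if answers.get("interacts_with_people"):
        return "limited risk: transparency obligations are likely to apply"
    return "minimal risk: no specific obligations beyond general law"

if __name__ == "__main__":
    answers = {}
    for key, question in QUESTIONS:
        reply = input(f"{question} [y/n] ").strip().lower()
        answers[key] = reply.startswith("y")
    print("Indicative category:", classify(answers))
```

The real checker is of course far more detailed; the point of the sketch is simply that a short decision tree over a handful of answers can give a first, indicative classification.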
2. AI Impact Assessment: a tool to set up responsible AI projects (version 2.0, December 2024)
3. More examples:
- AI Now Institute
- Partnership on AI: checklist
- Algorithmic Justice League (AJL)
- NIST AI Risk Management Framework (AI RMF)
- UNESCO Global AI Ethics and Governance Observatory
You have completed Module 1. You can now continue with Module 2.
