4 Reverse engineering
Reverse engineering in AI allows experts to dissect how a system makes decisions, especially when dealing with opaque 'black box' models. This process can support accountability and fairness by revealing whether systems operate without discrimination and within legal boundaries.
What is Reverse Engineering?

Reverse engineering is the process of analysing and deconstructing an existing AI system in order to understand how it works and how particular AI models reach their decisions.
Reverse engineering is important when it comes to dealing with opaque systems – so-called ‘black box’ models – as it can help to examine potentially biased or discriminatory outputs (Information Commissioner's Office and The Alan Turing Institute, 2022).
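One simple way to examine a black-box system for discriminatory outputs is to query it with inputs that differ only in a protected attribute and compare outcome rates. The sketch below is illustrative only: the loan-approval model, its inputs, and the deliberate bias it encodes are all hypothetical stand-ins for an opaque system that, in practice, could only be queried through its interface.

```python
# Hypothetical black-box model: the analyst can query it but not inspect it.
# The toy model below encodes a bias on purpose so the probe has something to find.

def black_box_model(income, group):
    # Stand-in for an opaque system; in practice this would be an API call.
    threshold = 30000 if group == "A" else 40000
    return income >= threshold

def approval_rate(group, incomes):
    # Query the model for every applicant in the group and average the decisions.
    decisions = [black_box_model(i, group) for i in incomes]
    return sum(decisions) / len(decisions)

# Identical applicants except for their group membership.
incomes = list(range(20000, 60000, 1000))
rate_a = approval_rate("A", incomes)
rate_b = approval_rate("B", incomes)
print(f"Group A approval rate: {rate_a:.2f}")
print(f"Group B approval rate: {rate_b:.2f}")
print(f"Disparity: {rate_a - rate_b:+.2f}")
```

A non-zero disparity between otherwise identical groups is the kind of evidence that would prompt further scrutiny of the system.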
Such analysis is particularly valuable in high-risk fields such as policing, hiring, and healthcare. However, it can raise concerns over intellectual property rights and confidentiality, especially where AI systems are protected under trade secret laws. UK law generally permits reverse engineering of legally acquired products unless restricted by contract.
From an ethical perspective, reverse engineering is crucial for identifying flaws that could affect public safety or violate rights. It allows regulators and researchers to simulate vulnerabilities and verify that systems behave as intended. Current UK and EU policy discussions are exploring how to balance innovation with transparency through regulated access to AI models for scrutiny and conformity assessments.
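Verifying that a system behaves as intended can sometimes mean recovering its hidden decision rule from queries alone. The sketch below assumes, purely for illustration, a black-box model that applies a single numeric threshold; a binary search over queries then approximates that internal threshold without any access to the model's internals.

```python
# A minimal sketch of recovering a black-box decision boundary by querying it,
# assuming the (hypothetical) model applies a single income threshold.

def black_box(income):
    # Hidden rule, unknown to the analyst; in practice this would be an API call.
    return income >= 37500

def find_threshold(query, lo=0, hi=100000, tol=1):
    # Binary search: narrow down the smallest input the model approves,
    # which approximates its internal decision threshold.
    while hi - lo > tol:
        mid = (lo + hi) // 2
        if query(mid):
            hi = mid
        else:
            lo = mid
    return hi

recovered = find_threshold(black_box)
print(f"Recovered threshold: ~{recovered}")
```

Comparing the recovered rule against the system's documented behaviour is one way a regulator or researcher could check that the model operates as claimed.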
Within Europe, it is becoming increasingly important to facilitate reverse engineering so that compliance can be demonstrated with regulatory requirements such as those introduced under the EU AI Act (European Parliament, 2023).