10 Responsibility and accountability

As GenAI becomes more integrated into decision-making, questions of accountability become critical.

When AI systems make mistakes – misdiagnosing a patient, generating biased decisions, or producing defamatory content – determining who is responsible is complex. This complexity is precisely why clear lines of human accountability in the deployment of AI models are essential (Floridi et al., 2018).

The legal position in the UK is still evolving, and gaps in liability rules remain. Ethical frameworks therefore stress that developers, deployers, and end users must share responsibility.

[Image: AI-generated responsibility flow chart for AI use in small organisations.]

Professionals, particularly in regulated sectors, cannot defer blame to algorithms. They remain accountable for ensuring outputs are fair, lawful, and appropriate.

Maintaining human oversight – the 'human-in-the-loop' – is essential to uphold autonomy, particularly when decisions have significant personal or legal consequences. Internal governance structures, documentation, and regular audits are critical tools for supporting accountability.
