Session 3: Mitigating AI risks – 40 minutes

12 Human oversight

GenAI technologies are increasingly embedded in organisational operations, bringing both opportunities and complex challenges. To ensure responsible use, it is vital to manage associated risks – ethical, legal, and operational – through robust oversight, clear frameworks, and adaptive planning.

Based on what you have learnt about the risks and challenges that GenAI poses for organisations, it is important to take appropriate steps to mitigate those risks. Doing so supports regulatory compliance and helps prevent breaches, particularly in regulated professions.

Human oversight is a foundational safeguard in the responsible deployment of GenAI. While these systems excel at automating tasks, they lack contextual judgment, ethical reasoning, and an understanding of nuanced social or legal implications.

A flat-style digital illustration conveys the concept of human oversight in AI. It features a man in business attire standing beside an open laptop. On the laptop screen, a central AI chip icon is surrounded by circuit lines, representing artificial intelligence. The man is holding a large magnifying glass over the AI symbol, symbolising scrutiny and monitoring. The background includes soft cloud shapes, and the caption below reads "HUMAN OVERSIGHT" in bold, dark blue capital letters. The image uses a muted colour palette of blues, greys, and beige tones.
Image generated using the AI prompt: Generate an image of human oversight.

Organisations should:

  • Define specific thresholds where human intervention is mandatory (e.g., legal, financial, or medical decisions); a sketch of how such thresholds might be encoded follows this list.

  • Establish protocols for manual review, ethical audits, and expert escalation.

  • Maintain oversight as a continuous process – through regular review of outputs, decision documentation, and cross-functional review panels.
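To make the first bullet point concrete, the following is a minimal sketch in Python of how an organisation might encode mandatory human-review thresholds in a GenAI workflow. The category names, confidence threshold, and review queue are illustrative assumptions for this course, not a standard API or a prescribed implementation; any real deployment would need to reflect the organisation's own risk categories and escalation protocols.

# A minimal, hypothetical sketch of encoding "mandatory human intervention"
# thresholds for GenAI-assisted decisions. Category names, the confidence
# threshold, and the review queue are illustrative assumptions only.

from dataclasses import dataclass

# Decision categories where a human must always review the output,
# regardless of the model's confidence (e.g. regulated advice).
MANDATORY_REVIEW_CATEGORIES = {"legal", "financial", "medical"}

# Below this confidence score, outputs in any category are escalated.
CONFIDENCE_THRESHOLD = 0.85


@dataclass
class GenAIOutput:
    category: str        # e.g. "legal", "marketing", "internal"
    confidence: float    # model-reported confidence, 0.0 to 1.0
    content: str         # the generated text awaiting use


def requires_human_review(output: GenAIOutput) -> bool:
    """Return True if this output must be escalated to a human reviewer."""
    if output.category in MANDATORY_REVIEW_CATEGORIES:
        return True
    return output.confidence < CONFIDENCE_THRESHOLD


def route(output: GenAIOutput, review_queue: list) -> str:
    """Route an output either to a human review queue or straight through."""
    if requires_human_review(output):
        review_queue.append(output)   # retained for manual review and audit trail
        return "escalated"
    return "released"


# Example usage
queue: list = []
draft = GenAIOutput(category="legal", confidence=0.97, content="Draft clause ...")
print(route(draft, queue))   # "escalated" because legal outputs always need a human

The point of the sketch is that escalation rules are explicit and auditable: a reviewer can see exactly which categories always require human sign-off and where the confidence cut-off sits, which supports the continuous review and documentation practices described above.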

Importantly, regulatory bodies such as the Solicitors Regulation Authority (SRA) and the Information Commissioner’s Office (ICO) have reinforced that AI use does not negate human accountability. Professionals remain responsible for the outcomes of GenAI-supported decisions.

13 The GenAIUM framework