8 Human in the loop

Another important safeguard to ensure the ethical and responsible use of GenAI tools is the concept of the ‘human in the loop’. This refers to having an individual responsible for reviewing the outputs of GenAI, overseeing the system and checking that the outputs are correct.
The Nuffield Family Justice Observatory’s briefing paper on ‘AI in the family justice system’ (Saied-Tessier, 2024) recognised the importance of the human in the loop (HITL), defining it as an approach in which “humans are directly involved in the system’s decision-making process. HITL is often used in situations where human judgement and expertise are crucial, and AI is used as a tool to assist human decision-making”.
Human in the loop requires human oversight when GenAI systems are designed, developed and used. When using GenAI in legal contexts, for example, this would involve a person with legal expertise checking the outputs of an LLM to ensure they are accurate and do not include hallucinated, incorrect or biased content before they are used or relied on.
In particular, the human in the loop in a legal context would need to check that the law was accurately stated and up to date, and that there were no relevant omissions; that any cases or statutes referred to were real and relevant to the legal arguments being put forward; and that any legal arguments were evidenced and logical. You will find out more about AI literacy in the eighth course, Preparing for tomorrow: horizon scanning and AI literacy.
Which of the following ethical concerns can a human in the loop help to address?
a. Bias
b. Data protection issues
c. Deskilling of humans
d. Digital divide
e. Environmental concerns
f. Explainability
g. Hallucinations
h. Legal implications
i. Societal considerations such as deepfakes
The correct answers are a, b, g, h and i.
Discussion
A human in the loop can help address some, though not all, of these concerns. A reviewer could potentially identify problems such as bias, data protection issues or hallucinations in the output so that they could be rectified before the output is used or relied on.
The idea of a human in the loop is endorsed by the Law Society (2024), whose GenAI guidance states:
“Even if outputs are derived from Generative AI tools, this does not absolve you of legal responsibility or liability if the results are incorrect or unfavourable. You remain subject to the same professional conduct rules if the requisite standards are not met…You should…carefully fact check its products and authenticate the outputs.”
Having considered some of the ways in which organisations and individuals can mitigate the ethical concerns identified in this course, the next section discusses the legal consequences of using GenAI unethically or irresponsibly.
Further reading
If you are interested, you can learn more about how important human skills are in this supplementary course content – The importance of human skills.