9 Legal liability for Generative AI use

What rights do users have if a GenAI tool provides inaccurate legal advice?
This section considers whether the owners of GenAI tools can be held responsible for inaccurate legal advice, and the responsibilities of organisations that use these tools.
At the time of writing, it is unclear who would be liable for any harm caused by relying on inaccurate legal advice produced by a GenAI tool. The terms and conditions of freely available LLMs typically seek to pass both the liability and the risk of using the tool on to the user.
Since court cases were issued seeking compensation for copyright infringement relating to the training materials of LLMs, the companies behind them have offered indemnities to users. However, these indemnities typically apply only to paying commercial users, are restricted in scope and are often subject to a cap.
For example, Microsoft has announced its Copilot Copyright Commitment, through which it pledges to assume responsibility for potential legal risks involved in using Microsoft’s Copilot services and the output they generate. If a third party were to bring a claim against a commercial customer of Copilot for copyright infringement arising from the use and distribution of its output, Microsoft has committed to defending the claim and covering any corresponding liabilities. (De Freitas and Costello, 2024)
It is unclear whether a case could be brought against the owners of a GenAI tool for inaccurate advice or for harm caused by a biased decision. It is also important for individuals to be aware that technology companies offering GenAI legal advice are usually not regulated by the Law Society. This means that there is no access to compensation schemes for inaccurate advice obtained directly from the tools. Nor are these companies bound by legal professional rules, such as client confidentiality or the duty to act in the best interests of the client.
By contrast, legal organisations using GenAI tools may be liable for any harm caused by their unethical or irresponsible use. In providing services to clients, organisations owe them a duty of care and may be liable in negligence for inaccurate advice, if it was foreseeable that relying on it uncritically could lead to harm.
GenAI guidelines by the Law Society, Bar Council and Judicial Guidance all stress the need to check the accuracy of outputs from an LLM. The guidance also confirms that the responsibility for what is presented to court and to clients remains with the legal adviser, even if it originates from a GenAI tool.
Given the likelihood that organisations will be held liable for any harm caused by the irresponsible use of GenAI, it is important that they put in place processes to evaluate the outputs of GenAI tools, such as ‘human in the loop’ review and/or the adoption of an ethical use framework. The third course, Key considerations for successful Generative AI adoption, also looks at how organisations can successfully use GenAI.
Finally, in the next section, this course considers whether the public trusts GenAI tools, and the implications this has for the ethical and responsible use of GenAI.