Session 1: Introduction to prompting – 45 minutes

1 Using Generative AI – the reality

Figure: The moon rises over the horizon at sea; its reflection in the water is distorted.

There is nothing intelligent about GenAI – it does not think or reason in the way that humans do.

As we discussed in the first course, Understanding Generative AI, GenAI tools (specifically LLMs) predict the next word in a sentence based on their training on billions of pieces of data, and they can be tuned or trained to respond in particular ways.

With LLMs, the scale of the analysis and the vast quantities of written data used in their training mean they can mimic fact recall and human communication. This makes them compelling communicators, and they can produce persuasive and eloquent text. They’re really helpful with lightweight problems where the consequences of being wrong, biased or inappropriate are relatively small.

However, once you know how they work, they become hard to trust when you really need to be certain of accuracy and sound reasoning. When using GenAI for legal tasks, it is also important to be aware that the tools have limited access to up-to-date and accurate legal information, much of which sits behind paywalls.

Here are some headline stories you may have spotted over the past few years.

 

In 2021, the Dutch government, including the prime minister, resigned after an investigation found that, over the preceding eight years, more than 20,000 families had been wrongly accused of fraud due to a discriminatory algorithm.

 

 

In August 2023, tutoring company iTutorGroup agreed to pay $365,000 to settle a claim brought by the US Equal Employment Opportunity Commission (EEOC). The federal agency said the company, which provides remote tutoring services to students in China, used AI-powered recruiting software that automatically rejected female applicants aged 55 and older and male applicants aged 60 and older.

 

 

In February 2024, Air Canada was ordered to pay damages to a passenger after its virtual assistant gave him incorrect information about the airline’s bereavement fares at a particularly difficult time.

 

By June 2023 there had been a number of reports of lawyers presenting hallucinated content to courts, such as made-up or wrongly applied cases. Despite wide publicity, this was still happening at the time of writing this course in 2025.

In January 2025, the BBC reported on a study by the law firm Linklaters looking into how well GenAI models were able to engage with real legal questions. The test involved posing the type of questions that would require advice from a "competent mid-level lawyer" with two years' experience. The report stated that Linklaters said the study showed the tools were "getting to the stage where they could be useful" for real-world legal work, but only with expert human supervision.

This was clarified later in the report:

 

The newer models [of AI] showed a "significant improvement" on their predecessors, Linklaters said, but still performed below the level of a qualified lawyer.

Even the most advanced tools made mistakes, left out important information and invented citations – albeit less often than earlier models.

The tools are "starting to perform at a level where they could assist in legal research", Linklaters said, giving the examples of providing first drafts or checking answers.

However, it said there were "dangers" in using them if lawyers "don't already have a good idea of the answer".

 

Activity: Having confidence in its use

Timing: Allow 5 minutes

If you decide to use an LLM (for example, ChatGPT, Copilot or Claude) for your work in a legal or free advice organisation, what would you do to make sure you have confidence in its use?


Discussion

The range of potential problems means that GenAI output needs a lot of checking, so it would be important to have a process for verifying the accuracy and appropriateness of its outputs. In the first course, Understanding Generative AI, you briefly heard about some potential problems in the performance and use of GenAI tools: hallucinations, bias, and ethical and legal concerns.

It’s also possible to ask questions that an LLM cannot answer because it is ill-suited to that kind of question. This was identified shortly after the release of ChatGPT, when people noticed it couldn’t do basic arithmetic. We’ll come back later to consider how to encourage AI tools to tell you when they don’t know what they’re talking about.

If you’re using GenAI in a real-world scenario, particularly where there might be harm if it does something wrong, you need to treat it like a very poorly trained assistant who regularly makes all kinds of mistakes. You need to be specific about what you ask it to do, and you have to be ready to review everything it produces. This course looks at two key aspects of using GenAI: prompting, to ensure we get what we want from the AI, and reviewing the output, to check that what the AI gave us is safe to use.

So, when is it a good idea to use GenAI and when should it be avoided?

Let’s look briefly at some ways GenAI can be used in the legal context (often called ‘use cases’). Use cases are explored in more depth in the fourth course, Integrating Generative AI into workflows.

2 Using Generative AI – when is artificial intelligence a good choice?