8 Correcting and adapting

Suppose that, while reviewing an LLM's output, we find something unacceptable. What can we do about it?
An obvious option when we find an error in the output is to tell the LLM it was incorrect. As we saw in Understanding Generative AI, simply asking the tool whether the information it presented was accurate is not enough: it will often reply that the information was correct (even when it is not).
Explicitly telling the LLM that there is an error usually elicits an apology, followed by an alternative output. However, the LLM may not actually have identified the incorrect content: unless you spell out what the tool got wrong, it may not be able to locate the error, and the correction may be essentially random.
Anecdotal evidence among the authors of the course is that stating that the LLM was wrong more than twice in a conversation can push it into an unstable state in which it starts to hallucinate. Something similar can happen with some GenAI tools if you tell them that something is incorrect when it is, in fact, correct.
If you want to correct an AI, you need to be specific about what it got wrong. For example:
Prompt
You were asked to produce a short summary. This is a bit long. Can you reduce it? Other parts of the request were satisfactorily delivered.
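
If you are working through an API rather than a chat interface, the same principle applies: keep the model's flawed answer in the conversation history and follow it with a specific correction as a new user turn. The sketch below is a minimal illustration assuming the OpenAI Python client; the model name, prompt texts, and variable names are placeholders rather than part of the course material, and any chat-style API that accepts a message history works the same way.

```python
# Minimal sketch of a conversational correction (assumes the OpenAI
# Python client and an API key in the environment; the model name is
# a placeholder).
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user", "content": "Summarise the attached report briefly."},
]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
draft = first.choices[0].message.content

# Keep the model's answer in the history, then send a *specific*
# correction rather than a bare "that is wrong".
messages.append({"role": "assistant", "content": draft})
messages.append({
    "role": "user",
    "content": "You were asked to produce a short summary. This is a "
               "bit long. Can you reduce it? Other parts of the "
               "request were satisfactorily delivered.",
})

second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```

Sending the correction as a new turn, with the flawed answer still in the history, gives the model the context it needs to revise only the offending part.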
In most cases, when you find something unacceptable, you will want to refine the prompt. You can do this either conversationally or by starting a new conversation that reinforces the aspect of the prompt that was applied poorly.
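
The second strategy, starting over, looks like this in the same hypothetical setting: the earlier (possibly unstable) exchange is discarded entirely, and the requirement that was applied poorly is reinforced in a fresh prompt.

```python
# Minimal sketch of refining by starting a new conversation (same
# assumptions as above: OpenAI Python client, placeholder model name).
from openai import OpenAI

client = OpenAI()

# The revised prompt reinforces the constraint the model missed,
# instead of arguing about it mid-conversation.
revised_prompt = (
    "Summarise the attached report. Keep the summary short: "
    "no more than three sentences."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": revised_prompt}],  # fresh history
)
print(response.choices[0].message.content)
```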
You will now have an opportunity to put everything you have learnt into practice in the next section.