Explainability

As you learnt in the first course, GenAI tools are trained on vast amounts of data and powered by complex, self-learning algorithms, which makes it difficult for us to understand why a tool has produced a particular output.
The neural networks underpinning GenAI tools can have many billions of parameters. It is therefore challenging even for the tools themselves to explain how they arrived at a given response to a prompt. This can make it difficult for a user to judge whether they can have confidence in the reliability and accuracy of the output.
There is ongoing research into explainable AI – that is, how to give users clear reasoning as to why certain responses were produced – so the explainability of these tools may improve over time.
This section now considers two concerns relating to the legal consequences of using GenAI outputs. These are explored in more detail in the sixth and seventh courses.
Hallucinations
