6 Understanding Generative AI outputs
One criticism of GenAI is that it is impossible to understand why a tool produced a specific output in response to a given prompt. Individual users do not know what material the tool has been trained on, or the way in which it has been programmed. Even when this information is known, the output emerges from statistical decision-making across a complex neural network. Because the output depends on so many interacting computations, it is hard to determine why a specific input produces a specific output.
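To see why this statistical step matters, the sketch below shows, in highly simplified form, how a generative model picks its next output token: the model assigns a score to each candidate, and one is drawn at random in proportion to those scores. The candidate words and their scores here are invented for illustration; a real model weighs tens of thousands of candidates at every step.

```python
# A minimal sketch of the statistical decision-making described above.
# The candidate words and scores below are hypothetical; a real model
# scores tens of thousands of tokens at every step.
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Pick one token at random, weighted by the model's scores."""
    # Convert raw scores into a probability distribution (softmax).
    scaled = [score / temperature for score in logits.values()]
    max_s = max(scaled)
    exps = [math.exp(s - max_s) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # random.choices draws one token according to those probabilities,
    # which is why the same prompt can yield different outputs on
    # different runs.
    return random.choices(list(logits.keys()), weights=probs, k=1)[0]

# Hypothetical scores a model might assign to candidate next words.
candidate_logits = {"fish": 2.1, "angler": 1.4, "lake": 0.9, "net": 0.2}
print(sample_next_token(candidate_logits))
```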
For example, a team from Tübingen University examined a vision-based neural network recognition system to see which features in an image were attracting its attention. They asked what led the system to identify pictures of ‘Tench’ (a species of fresh- and brackish-water fish found across Western Europe and Asia), for example. The answer came as something of a surprise: the system considered the presence of human fingertips to be an important part of the image. Not at all what was expected!

The reason this occurred was bias in the underlying data. Tench are what are called trophy fish, prized by anglers for their size, shape and colours. Being a trophy fish means many are photographed being held by the angler who caught them. They then end up on social media with convenient identifying tags, such as ‘Record breaking Tench, 7lb 3oz, Willen Lake, 20-02-2025’. When the neural network looked for similarities among pictures labelled as containing Tench, one of the most common was therefore the appearance of human fingertips (Shane, 2020).
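Researchers probe this kind of behaviour with techniques such as occlusion analysis: masking parts of an image in turn and watching how the classifier’s confidence changes. The sketch below illustrates the idea; the `predict` function is a hypothetical stand-in for a real vision model’s score for one class, not the actual method used in the Tübingen study.

```python
# A minimal sketch of occlusion analysis, one common way to probe which
# image regions drive a classifier's prediction. The `predict` argument
# is a hypothetical stand-in for a real model's score for one class.
import numpy as np

def occlusion_map(image, predict, patch=16, stride=8):
    """Mask each region in turn and record how much the score drops."""
    baseline = predict(image)  # score for the target class, e.g. 'tench'
    h, w = image.shape[:2]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # grey patch
            # A large drop means this region mattered to the prediction;
            # for tench photos, the hot spots may sit on the angler's hands.
            heat[i, j] = baseline - predict(occluded)
    return heat

# Example with a dummy 64x64 greyscale image and a toy predict function
# that simply scores the brightness of the image's centre (hypothetical).
img = np.random.rand(64, 64)
toy_predict = lambda im: float(im[24:40, 24:40].mean())
print(occlusion_map(img, toy_predict).shape)  # (7, 7) grid of score drops
```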
Because GenAI tools cannot explain their own outputs, they can be hard to control, and this brings problems. How do companies stop GenAI tools from producing illegal or offensive content? We examine this in the next section.