Bias

In the first course, Understanding Generative AI, we identified bias in outputs that arises because the training data is itself biased (rubbish in, rubbish out). GenAI outputs that show age, gender, or racial bias in their representations of professionals and other groups of people can be relatively easy to spot.
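As a rough illustration of how such representational bias might be surfaced, the sketch below tallies gendered words across a batch of model outputs for an occupational prompt. The `sample_outputs` list and the keyword sets are purely hypothetical placeholders; in practice you would feed in real completions from the model under review and use a much richer lexicon.

```python
from collections import Counter

# Hypothetical completions for a prompt such as
# "Describe a typical engineer." -- replace with real model outputs.
sample_outputs = [
    "He has spent ten years designing bridges.",
    "He enjoys solving hard problems with his team.",
    "She leads the firmware group at a startup.",
    "He studied mechanical engineering before moving into robotics.",
]

# Crude keyword lists for illustration only.
MALE_TERMS = {"he", "him", "his", "man", "men"}
FEMALE_TERMS = {"she", "her", "hers", "woman", "women"}

def gender_term_counts(texts):
    """Count male- and female-coded words across a batch of outputs."""
    counts = Counter()
    for text in texts:
        for word in text.lower().replace(".", " ").replace(",", " ").split():
            if word in MALE_TERMS:
                counts["male"] += 1
            elif word in FEMALE_TERMS:
                counts["female"] += 1
    return counts

print(gender_term_counts(sample_outputs))
# e.g. Counter({'male': 4, 'female': 1}) -- a skew worth investigating
```

A simple tally like this can flag an obvious skew, but it says nothing about the subtler biases discussed next.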

On the other hand, if the AI presents an opinion or an explanation of cause and effect, it may not be obvious how much trust can be placed in the dataset on which that opinion or explanation is based. The bias may not be explicitly represented in the output.

This kind of bias can be very hard to detect unless the reviewer is aware of the potential for bias in the materials relevant to the prompt.