Bias
Generative AI models attain their so-called ‘intelligence’ predominantly by being trained on masses of data. All of this data, when traced to its origin, is ultimately a large collection of human input.
As Professor Daniel Kahneman argued, all humans are inescapably biased to some degree, whether consciously or unconsciously.
It thus follows that all generative AI models risk being tainted with some degree of bias. In other words, these models can inherit human bias and then deploy it at scale.
Bias is a serious problem in the world of AI, but there are ways to alleviate it. Stuart Russell points to the concept of ‘suitably designed’ machines: ‘different metrics and standards’ of fairness should form part of the model’s training data so that models are ‘not co-opted by economic and political interests’. Beyond this, AI companies should be transparent about their training data, and strengthening society’s AI literacy (an aim of this course) is also essential.
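To make the idea of fairness ‘metrics’ concrete, here is a minimal illustrative sketch (not taken from Russell or this text) of one common metric, the demographic parity difference: the gap in positive-outcome rates between two groups in a model’s predictions. The function name and the loan-approval data below are invented for illustration.

```python
# Illustrative fairness metric: demographic parity difference.
# It measures the gap in positive-outcome rates between two groups.
# All names and data here are hypothetical examples.

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Return |P(positive | group_a) - P(positive | group_b)|."""
    def positive_rate(group):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan-approval predictions (1 = approved, 0 = denied)
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups, "A", "B")
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.60
```

A gap of 0.00 would mean both groups receive positive outcomes at equal rates; the larger the gap, the stronger the evidence that the model treats the groups differently. Real auditing toolkits compute many such metrics, since no single number captures every notion of fairness.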