1. Hallucinations. This is a concern you may well have heard of – that GenAI tools can produce responses which are nonsensical, inaccurate, or misleading. It has led to ChatGPT being described as a 'bullshit generator'. The tool is not producing a 'lie' – it has no understanding of meaning, and is simply producing a statistically likely output (see the short sketch after this list).
2. Bias. There is an age-old adage in computer science of 'rubbish in, rubbish out'. In GenAI tools, the outputs generated reflect the bias present in the data the tools have been trained on. This is typically content from the internet produced in WEIRD (Western, Educated, Industrialised, Rich, and Democratic) countries, and it often reflects a white, male perspective.
3. Environmental impact. AI technologies rely on vast data centres. This infrastructure has huge environmental impacts – not just from the silicon needed to manufacture the chips inside the computers within the data centre, but also from the energy and water consumed to run them.
4. Legal implications. GenAI tools are trained on vast amounts of data, much of which is under copyright. Around the world, a range of test cases is currently working through the courts to establish whether the use of this copyrighted data in generative AI outputs is legal. Similarly, it is not clear who owns the output of a GenAI tool, or whether copyright can be claimed over that output.
5. Data privacy. Some tools claim rights over the content submitted to them, primarily to assist in training future versions of the tool. This can mean that information you put into the tool as a prompt can then be re-used by the company as training data. If in doubt, do not submit any sensitive or confidential data to a GenAI tool.
6. Ethical concerns. There are many ethical concerns regarding GenAI tools, ranging from deepfakes (photos, audio recordings or videos that have been manipulated to make it appear as though someone has said or done something they have not) through to the appropriateness of employing low-paid workers in Africa to label and moderate the training data.
7. Explainability. As you have learnt in section 6 (Understanding Generative AI outputs), it is extremely difficult for GenAI tools to explain how they have produced a given response to a prompt.
8. Digital divide. Unequal access to technology, and unequal skill in using it, can have societal impacts, with those unable to make use of such technologies being disadvantaged. Within the UK, 8.5 million people lack basic digital skills and 1.5 million do not have access to a smartphone, tablet or laptop.
9. De-skilling expertise. Some people are concerned that over-reliance on AI tools will de-skill people. Beyond the economic impact this may have, it also has implications for the continued development of the validated human knowledge needed to train future AI tools.
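To make the point in item 1 more concrete, here is a minimal, hypothetical Python sketch. It is not how any real GenAI tool is implemented – the word probabilities in the table below are invented purely for illustration – but it shows the underlying idea: the program picks a statistically likely continuation of a prompt without ever checking whether the result is true.

```python
import random

# Toy table of next-word probabilities, invented for illustration only.
# A real model learns billions of such statistics from its training data.
next_word_probs = {
    "The capital of France is": [("Paris", 0.7), ("Lyon", 0.3)],
    "The capital of Australia is": [("Sydney", 0.6), ("Canberra", 0.4)],
}

def continue_text(prompt: str) -> str:
    """Pick a continuation at random, weighted by the probabilities above."""
    options = next_word_probs.get(prompt, [("...", 1.0)])
    words, weights = zip(*options)
    return random.choices(words, weights=weights, k=1)[0]

print("The capital of France is", continue_text("The capital of France is"))
# Often prints "Sydney": fluent and statistically likely, but factually wrong.
print("The capital of Australia is", continue_text("The capital of Australia is"))
```

Because the toy table gives 'Sydney' a higher weight than 'Canberra', the sketch will frequently print a confident but incorrect answer. The same mechanism, at vastly greater scale, is why a GenAI tool can hallucinate without 'lying'.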