4 Societal concerns

At the start of this course, we discussed an example of an AI-generated image shared on social media to wrongly suggest that Taylor Swift supported Donald Trump’s presidential campaign. This illustrates some of the broader societal concerns around the use of GenAI.
Deepfakes involve the use of GenAI tools to mimic someone’s image or voice in a way that appears real. Whilst this can be fun, it has potentially frightening consequences if the likeness of a world leader or company owner is used to spread false information, or the deepfake is used to defraud individuals.
Whilst it has been possible to generate such fakes for some time, advances in GenAI mean that deepfakes can now be produced much more easily, cheaply and quickly, and with very little original data (a single photo or short audio clip is now sufficient to train the GenAI tool). The speed of these changes and the improvement in the quality of both images and text is remarkable (Giattino et al., 2024) but could be harmful if used for misinformation or phishing.
Ofcom research from 2024 revealed that two in five people say they have seen at least one deepfake in the last six months; these typically involved the generation of sexual content and scam adverts (Ofcom, 2024).
Watch this video of Harry Clark, a lawyer with Mishcon de Reya, discussing the importance of verifying information in a GenAI era.

Transcript
There are also concerns that GenAI tools can be used in ways they were not originally envisaged, sometimes harmfully. For example, could GenAI meeting summarisation software, or GenAI facial recognition software, be used for surveillance by companies or governments (Olvera, 2024; Pfau, 2024)?
Finally, developing GenAI systems is labour intensive, relying on low-paid workers in the Global South to work through some of the training data, tagging vast numbers of examples of racist, sexist or violent language. This tagging allows the tool to identify inappropriate content and implement guardrails (discussed in the first course). For example, OpenAI reportedly used workers from Kenya to go through such material, paying $2 an hour (Time, 2023).
Whilst these concerns may not directly affect you, your clients or your organisation, it is important when using AI tools to inform yourself of the societal concerns surrounding both the development of the tool and how it can be misused. You will then be able to make choices that minimise harm to others and to society.
As well as being labour intensive, the development of GenAI tools is also resource intensive. The next section discusses the final ethical concern about GenAI use, which is the impact such tools have on the environment.
Session 2: Bias, societal and environmental concerns – 90 minutes
