10 Building trust in Generative AI

For an organisation or individual planning to use a GenAI tool, it’s important to understand how other people will respond to its operation and the outputs it produces.
The trust employees and clients place in GenAI will affect how and when an organisation plans to use such tools. It is important to note, however, that this is a rapidly developing area, and the public's trust in GenAI systems may well develop and increase as the use of such tools becomes more familiar and frequent within society.
Current surveys suggest that older generations are less likely to trust GenAI: two-thirds of those not using GenAI were born before 1980. By contrast, Gen Z (those born after 1995), who have grown up with technology, are the most likely to use and trust GenAI (Koetsier, 2023). One consequence is that supervisors, managers and owners of organisations may not be aware of the extent of their employees' use of GenAI tools.
For example, a 2024 survey of 4,500 employees and managers found that over half of Gen Z employees trusted GenAI more than their managers. The same survey suggested that employees' expectations around GenAI use at work were not matched by their managers, with over half of Gen Z employees also expressing concerns about inadequate organisational guidelines (Khan, 2024). While these employees are more likely to trust GenAI and to be confident in using it, they often do so uncritically and are less likely to understand the tools' limitations (Merrian and Saiz, 2024).
Organisations therefore need to provide employees with clear guidance on the responsible and ethical use of GenAI, and to ensure that they understand its limitations and the risks of inappropriate use.
Client trust in GenAI will depend on age, background and familiarity with GenAI tools. Businesses, older clients and clients who are unfamiliar with GenAI may need reassurance before trusting the use of GenAI tools for legal advice. They are likely to be concerned about data privacy, the accuracy of outputs, and potential bias in those outputs (PricewaterhouseCoopers, 2024).
Addressing these issues through a responsible and ethical use policy will encourage clients to trust the use of these tools. Choosing not to use them also has consequences: reduced efficiency, increased expense and wider societal concerns. For example, the UN warns that digital divides may widen if individuals and businesses fail to take advantage of GenAI because of a lack of trust (United Nations, 2024).
By contrast, individual clients (particularly younger ones) are more likely to trust GenAI and to adopt its outputs uncritically. A study by the University of Southampton found that, when offered advice written by a lawyer and advice produced by a GenAI tool, members of the public were more likely to believe the GenAI tool. This remained true even when they knew which advice had come from the GenAI tool (Schneiders et al., 2024). One possible reason is the over-confident tone of advice produced by an LLM, which typically omits the caveats and disclaimers that lawyers are likely to include.
Given these concerns, it is important that the outputs of GenAI tools are evaluated before being released to clients, and that organisations and individuals are transparent about when GenAI has been used and when its outputs are included in client communications.
How do you ensure your ethical and responsible use of GenAI?
Are your clients sceptical or overconfident when using GenAI tools? In light of the trust they place (or do not place) in GenAI, what do you need to do to ensure your ethical and responsible use of GenAI (or that of your organisation)?
Make a note of this below and also in your work folders to ensure you address any identified issues in your GenAI guidance.