Implementing AI tools
There is a lot of interest within the voluntary and advice sector around how GenAI can support organisations to help more people. However, it is essential to think carefully before integrating a tool: if technology implementation goes wrong, there can be significant reputational risks for the organisation and a risk of harm to service users.
Citizens Advice SORT had the technical capability within its innovation team to build a custom model in-house. Citizens Advice Scotland and the NSPCC worked with HelpFirst to build their solutions. Wyser is another company working with advice and public sector organisations to integrate AI solutions, and you can read some of the case studies on their website.
Reputational risk when implementing AI tools
Watch this video from Stuart Pearson where he discusses the importance of organisations being aware of reputational risk when implementing AI tools.

Transcript
What principles does Stuart identify as important when thinking about implementing AI tools?
Discussion
Stuart discusses the importance of organisations being aware of reputational risk and of thinking about how AI can assist, rather than replace, people. Organisations should consider their existing practices and ensure they establish guardrails. They should involve stakeholders, act with transparency, and make sure there is human contact and oversight.
Citizens Advice SORT has adopted an approach to integrating AI tools that aligns with the strategies of Citizens Advice Scotland and the NSPCC. These organisations have placed strong emphasis on the responsible use of AI, ensuring that the technology supports, rather than undermines, the core work and values of the organisation.
Citizens Advice Scotland
