11 Security

AI systems face growing security threats, from data breaches to adversarial attacks. To mitigate these risks, organisations must adopt comprehensive security measures including:

Firewalls and Intrusion Detection Systems (IDS)

These act as a first line of defence by monitoring network activity and alerting teams to suspicious or unauthorised access attempts.

Data encryption

Protects sensitive data – such as customer information, intellectual property, or algorithmic models – both at rest and in transit (National Cyber Security Centre, 2020).
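Encryption in transit is typically provided by TLS. As a minimal sketch only (the minimum-version setting below is an illustrative assumption, not a recommendation from this course), a client-side TLS configuration in Python might look like:

```python
import ssl

# Minimal sketch of enforcing encryption in transit on the client side.
# create_default_context() enables certificate verification and hostname
# checking by default; the minimum version shown is an illustrative choice.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse older, weaker protocols
```

Such a context would then be passed to a network client, for example `http.client.HTTPSConnection(host, context=ctx)`, so that connections refusing these settings fail rather than fall back to unencrypted transport.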

Regular security audits and penetration testing

Conducting frequent evaluations helps to identify system vulnerabilities and address them before exploitation can occur.

Access controls and authentication protocols

Implementing multi-factor authentication (MFA), least-privilege access, and strong identity verification helps reduce the risk of internal and external breaches (Srinivasan and Lee, 2022).
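The least-privilege principle above can be illustrated with a minimal sketch. The role names and permission sets below are hypothetical examples; the key point is that access is denied unless explicitly granted.

```python
# Minimal sketch of a deny-by-default, least-privilege access check.
# Role names and permissions are hypothetical examples.
ROLE_PERMISSIONS = {
    "analyst": {"read_reports"},
    "engineer": {"read_reports", "update_model"},
    "admin": {"read_reports", "update_model", "export_data"},
}

def is_allowed(role: str, action: str) -> bool:
    """An action is permitted only if the role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "read_reports"))  # True
print(is_allowed("analyst", "export_data"))   # False
```

Unknown roles and unlisted actions both fall through to a refusal, which is the deny-by-default behaviour least-privilege access control relies on.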

Risk and mitigation

Now that you have learnt about some of the key risks that can arise in the use of GenAI by organisations, look at the table below, which matches each risk to an appropriate mitigation step.

Risk: Inaccurate and unreliable data.
Mitigation: Accuracy and reliability checks are required.

Risk: AI is a ‘black box’, and it is hard to understand its decision-making processes.
Mitigation: Reverse engineering is required to deconstruct the model.

Risk: AI is heavily dependent on data collection and processing that is not always data protection compliant.
Mitigation: Data protection compliance requires protection for personal and sensitive data.

Risk: AI data collection and use is often cross-border and pays no attention to jurisdictional limits.
Mitigation: Cross-border data transfers require safeguards, including the use of standard contractual clauses.

Risk: There is significant opacity in the algorithmic decision-making adopted by AI systems.
Mitigation: Within regulated professions there are obligations to maintain confidentiality; therefore, as a minimum, data protection impact assessments (DPIAs) are required.

Risk: AI systems can be vulnerable to cyber threats.
Mitigation: Key security practices and comprehensive security frameworks are required to safeguard AI models and the data they use.

Risk: AI systems are not readily transparent and are increasingly used for automated decision-making, reflecting biases in the training data which can be prejudicial.
Mitigation: Clear accountability is required where AI systems are used to make decisions. For regulated professions, this requires human ownership of decisions.

Having explored the potential risks that can arise with the use of GenAI, you will move on in the next section to managing GenAI tools and the considerations organisations should weigh when deciding whether to adopt them.
