Privacy Threat Modeling Frameworks


LINDDUN

[Figure: an iceberg labeled with the LINDDUN privacy threats; only Data Disclosure and Non-Compliance are visible above the water.]

At most companies, only Data Disclosure and Non-Compliance are top of mind for privacy. We tend to be much less aware of the other LINDDUN threat categories, and yet these can have serious consequences for users. (Image source)

LINDDUN is a threat modeling framework for privacy threats developed at KU Leuven. It categorizes privacy threats into 7 types: Linking, Identifying, Non-Repudiation, Detecting, Data Disclosure, Unawareness, and Non-Compliance. It is by far the most widely known privacy threat modeling framework, and has been included in the ISO 27550 standard on privacy engineering for system life cycle processes and in the European Data Protection Supervisor's Preliminary Opinion on Privacy by Design. A simplified threat model is also available as a deck of cards (LINDDUN Go) to gamify the threat modeling process. Each threat type is described below.

Linking: You can think of linking as 'connecting the dots': combining multiple data points to reveal more information about an individual or group. Over time, you may learn enough about them to uniquely identify them, but even if you never do, linking can still violate their privacy. For example, you might learn their preferences and start targeting them with manipulative personalized ads.
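
To make this concrete, here is a minimal sketch of a linking attack in Python, using entirely made-up data: two datasets that each seem harmless on their own are joined on shared quasi-identifiers (ZIP code, birth date, sex) to attach names to sensitive records.

```python
# A minimal sketch of a linking attack, using made-up data. Joining two
# independently harmless datasets on shared quasi-identifiers reveals
# more about a person than either dataset does alone.

# "Anonymized" health records: no names, but quasi-identifiers remain.
health_records = [
    {"zip": "53715", "birth": "1990-04-02", "sex": "F", "diagnosis": "asthma"},
    {"zip": "53703", "birth": "1985-11-30", "sex": "M", "diagnosis": "diabetes"},
]

# Public voter roll: names alongside the same quasi-identifiers.
voter_roll = [
    {"name": "Alice Smith", "zip": "53715", "birth": "1990-04-02", "sex": "F"},
    {"name": "Bob Jones", "zip": "53703", "birth": "1985-11-30", "sex": "M"},
]

def quasi_id(record):
    """The attacker's join key: attributes shared across both datasets."""
    return (record["zip"], record["birth"], record["sex"])

# Index the voter roll by quasi-identifier, then link each health record.
voters_by_qid = {quasi_id(v): v["name"] for v in voter_roll}
for rec in health_records:
    name = voters_by_qid.get(quasi_id(rec))
    if name:
        print(f"{name} has diagnosis: {rec['diagnosis']}")
```

Note that neither dataset pairs a name with a diagnosis; the harm emerges only from the combination.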

Identifying: Learning the identity of an individual in a context where they expected to remain anonymous. This can be direct (for example, forcing a user to provide and verify their phone number before they can browse your site) or indirect (for example, re-identifying entries in a poorly "anonymized" dataset).

Non-Repudiation: If you have a background in security, this category may surprise you, because in security threat modeling (STRIDE, for example) it is repudiation (the ability to deny a claim about you, such as that you visited a specific web page or took a certain action in the product) that is treated as the threat. From a privacy perspective, it's the reverse: there are various scenarios where a user needs plausible deniability of an action they took. Imagine, for example, that a woman searches for nearby abortion clinics in an area where abortion is illegal: that search history could be requested by law enforcement and used as evidence in a prosecution.

Detecting: This threat category is closely linked to Linking and Identifying. For Detecting threats, however, the attacker doesn't need to read the actual data: merely learning that the data exists reveals information about the individual. For example, if a website shows a "welcome back!" login page whenever the email of an existing account is entered, an attacker learns that the person who uses that email is registered on the site. They don't need to know or guess the password: entering the email (which may be public information) is enough.
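
Here is a minimal sketch of that account-existence oracle (the function names and messages are hypothetical): the leaky variant's response differs depending on whether the account exists, while the safer variant responds identically either way.

```python
# A minimal sketch of an account-existence oracle, with hypothetical
# handler names. The "leaky" version lets anyone probe which emails are
# registered; the safer version responds identically either way.

registered_emails = {"alice@example.com"}  # stand-in for a user database

def login_page_leaky(email: str) -> str:
    # Detecting threat: the response differs based on whether the account
    # exists, so an attacker learns it without needing any password.
    if email in registered_emails:
        return "Welcome back! Please enter your password."
    return "No account found for this email."

def login_page_safe(email: str) -> str:
    # Uniform response: account existence is not observable here.
    # (Response timing and password-reset flows must be uniform too.)
    return "If an account exists for this email, continue to sign in."

print(login_page_leaky("alice@example.com"))  # reveals the account exists
print(login_page_safe("alice@example.com"))   # reveals nothing
```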

Data Disclosure: Aha! Finally a link to security! This is just the same as a breach of confidentiality, right? Actually, in a privacy context this threat is much broader. It does include traditional data breaches where personal data is hacked or leaked, but it also includes unnecessary or excessive collection, processing, and sharing of personal data. Your systems might be perfectly secure, but if you are collecting your users' location 24/7 when that's completely unnecessary for your product, then you are the threat actor, and you pose a data disclosure threat to your users.
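
Data minimization is the standard counter to excessive collection. The sketch below, with hypothetical function names and made-up coordinates, collects location only at the moment a feature needs it, and coarsens it to the precision the feature actually requires, rather than streaming precise GPS around the clock.

```python
# A minimal sketch of data minimization for location, using made-up
# coordinates. Collect location only when a feature needs it, and only
# at the precision the feature requires.

def coarsen(lat: float, lon: float, decimals: int = 2) -> tuple[float, float]:
    """Round coordinates; ~2 decimals is roughly kilometer-scale."""
    return (round(lat, decimals), round(lon, decimals))

def nearby_stores(lat: float, lon: float) -> list[str]:
    # Hypothetical feature: needs only a coarse position, only when called.
    coarse_lat, coarse_lon = coarsen(lat, lon)
    return [f"stores near ({coarse_lat}, {coarse_lon})"]

# The precise fix is used transiently and never stored or transmitted.
print(nearby_stores(43.074713, -89.384373))
```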

Unawareness: This threat category encompasses all cases where users are not given enough information about, or control over, their privacy. For example, an app might fail to disclose that it sells personal data to advertisers, or users might be unable to exercise their privacy right to delete their data. One easily overlooked scenario: insufficiently informed users can also end up causing privacy harms to other people, so it's important that they understand how their actions in the product could affect others.

Non-Compliance: Unlike the other privacy threats, which are primarily threats to users, this one is a major threat to your company. If your use of personal data is unlawful under data protection law, or you have insufficient data governance and cyber risk management processes in place, you could face significant fines from data protection authorities. If a data breach occurs, it is also likely to hit you much harder without appropriate processes in place: for example, the breach may expose the data of many more users if you lack appropriate data retention policies and the data of former users was never cleaned up.
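
As a sketch of that last point, here is a minimal retention cleanup job against a hypothetical SQLite schema (the table, columns, and retention period are all assumptions): periodically purging the data of long-inactive users means a future breach exposes far fewer people.

```python
# A minimal sketch of a retention cleanup job against a hypothetical
# SQLite schema. Regularly purging the data of long-inactive (former)
# users shrinks the blast radius of any future breach.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 365  # assumption: your retention policy defines this

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, last_active TEXT)")
conn.execute("INSERT INTO users VALUES (1, '2020-01-01'), (2, '2099-01-01')")

# ISO date strings compare correctly as text, so a plain < works here.
cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).date().isoformat()
deleted = conn.execute("DELETE FROM users WHERE last_active < ?", (cutoff,)).rowcount
conn.commit()
print(f"Purged {deleted} inactive user(s)")  # -> Purged 1 inactive user(s)
```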


💻 Exercise: check out the detailed LINDDUN threat trees, which include specific threats and a range of examples for each of the threat categories above. Have you encountered any of these threats as a tech user? How did that make you feel? What could the product have done differently to make you feel safer and reduce the risk of harm to you?



Models of Applied Privacy

The MAP framework combines LINDDUN with other threat and harm taxonomies to categorize threat actors, threat mechanisms (i.e. threat categories, such as Linkability from LINDDUN), and threat impacts (i.e. privacy harms). The framework's unique aspect is that it is persona-based. Rather than just thinking about broad categories of threat actor like 'malicious insider' or 'data protection regulator', you invest the time to imagine a specific actor's motivation, skill level, and cultural context using MAP's persona cards, which helps you better anticipate their behavior. MAP's categorization also encourages you to think about threat actors with good intentions who may still cause privacy harms, such as a developer who simply doesn't consider privacy, or who believes they are handling personal data appropriately when they are not. Such non-malicious threat actors can sometimes pose the highest risk to your users.



Plot4AI

The Plot4AI framework is based around a library of threats across 8 categories:

  • Non-Compliance
  • Technique & Processes
  • Accessibility
  • Identifiability & Linkability
  • Security
  • Safety
  • Unawareness
  • Ethics & Human Rights

As the categories suggest, Plot4AI also considers threats beyond privacy, in the broader trust & safety space. It was created to expand LINDDUN's scope to artificial intelligence and machine learning, but even if you don't work with AI or ML, the framework is worth checking out, because many of its threats are broadly applicable. The threat library is phrased as a series of very specific questions, which are a helpful starting point if you're struggling for ideas when threat modeling. They can also help you identify your threat "blind spots": for example, perhaps you routinely assume your users are adults and forget to consider threats to children; the library has a question ready to remind you.



Further Reading