For some policy areas, it seems that the closer you look, the more confusion results. So it seems with the General Data Protection Regulation, coming into force in May – especially as it applies to children.
In our recent roundtable discussion, we invited experts from the Information Commissioner’s Office to present their thinking to a multi-stakeholder audience, in advance of a public consultation process (now open). Building on an earlier roundtable and new report, and some subsequent lively discussion, we hoped this new roundtable would replace confusion with clarity.
As set out in our new report, some things are now clearer, and others are not. Let’s start with the former:
First, in case it wasn’t already obvious, key human rights organisations have reminded everyone concerned that children (like everyone else) have the right to privacy – including in relation to business and the digital environment – see UNICEF’s statement, that of the Children’s Commissioner for England, the recent report by the House of Lords, and also deliberations at the Council of Europe.
Second, Article 29 Working Party materials are now available online, including its consent guidance and its draft guidelines on profiling. As mentioned earlier, the ICO has published its Children and the GDPR guidance for consultation, which closes on 28 February 2018. The guidance includes new rules on automated decision-making, the right to erasure and consent. In the meantime, the UK government has recently announced that it will support an amendment to the Data Protection Bill to impose a stricter code of practice for protecting children’s privacy online – by design, with the focus especially on provision for 13- to 17-year-olds, who are above the age of consent but still children.
Last, also clearer is the age of consent that many countries are now choosing for children’s access to information society services (as provisionally indicated in the figure below – see here).
Each country gives different reasoning (if it gives any at all), perhaps because none, it seems, has consulted evidence to ground its decision:
- In France, for instance, the lack of a convincing rationale to reduce the age below 16 is sufficient to stay with 16, arguing that this will encourage a productive involvement of parents in teenagers’ online lives (since parental consent will be required for children’s online activities);
- In the UK, however, the lack of a convincing rationale against reducing the age to 13 (the current norm) was considered equally compelling, arguing that this will encourage online providers to develop protective tools (since in effect, responsibility for child protection is being shifted from parents to industry);
- In the Czech Republic, the rationale for 13 is that teenagers are already using social media, and anyway the risks are not as great as those of activities for which a higher age is set (e.g. driving).
As a consequence, as each country makes its decision, reducing uncertainty for families in that country, other uncertainties arise. For instance, in countries where a higher age is chosen than current practice, will teenagers be unceremoniously thrown off services? What will happen to their photos, their data, their connections?
Obviously too, the differences across Europe raise questions both for providers and for children who may access resources across countries (perhaps moving, or even taking a holiday!). As revealed in our roundtable discussion, there are three options for the applicable jurisdiction – the domicile of the information society service, the domicile of the child, or the physical location of the child when she or he is actually using the service – and it’s not evident which one should apply.
Surprisingly, to us as academic observers – aware that a lot of lawyers are now actively advising on all things to do with the GDPR – there’s a lot that’s still unresolved, with legal interpretations already differing on some key issues relating to children. These include clarity over the legitimate bases for processing data (i.e. when is consent required, and how frequently), and the practicalities of how a parent is to be identified and verified (i.e. linked to the right child). Most concerning, perhaps, is the question of how providers are even to know that a user is a child in the first place, so that child-appropriate protections can apply (age-verify every single user?).
As John Carr comments on the guidance notes from the Article 29 Working Party, the requirement of an impact assessment sounds promising but is currently unclear. He asks whether information society services “will need to consider each discrete and particular data processing activity that is possible on their site or within their service …to ensure they have completed an impact assessment for all of them and have obtained the appropriate permissions.”
Then, Recital 71 states that “profiling should not concern a child”, whereas Article 22 indicates that, provided a decision does not have a “legal or similarly significant effect on the child”, data controllers are not prohibited from making profiling decisions about children. The Article 29 Working Party guidance says that profiling affecting a child’s choices and behaviour may, depending on its nature, risk having a “legal or similarly significant effect”. So how will this be weighed, and by whom?
Other issues also remain in question, including the definition of information society services, how to deal appropriately with specific vulnerable children (rather than the ‘average’ child), underage individuals’ unauthorised use of services (already a widespread practice), the consequences of the age of consent varying across countries, and the practicalities of giving parental consent.
Meanwhile, vast amounts of data are being collected from children, with or without consent, with or without knowledge that children are even using the services and, further, with or without adequate security provision, and without parental awareness of many of these issues. Further, while much of the meeting focused on consent-based data processing by the private sector, there are also a host of questions about data being processed – and often shared with third parties – in the public sector (schools being the most obvious case, often with little parental understanding). Data breaches seem an increasingly common occurrence. Will the GDPR solve these problems?
We note that many of the services and products which are under scrutiny when discussing the protection of children’s data also offer them freedom of expression as well as joy and value. While it seems obvious to us that, in consequence, children should not be excluded from the policymaking process, it appears that, unfortunately, children themselves have rarely if ever been consulted on their views about the unfolding policy that will manage both their opportunities and risks, notwithstanding good practice in deliberative policymaking, including with children.
Thus their understanding of their online privacy is not taken into account in these debates. On the bright side, this long-standing complaint is being rectified, for the first author has just been awarded a grant from the ICO to research exactly this – so, watch this space. But much else remains confusing and unresolved, making the ICO’s currently open consultation all the more important. Please respond!
This article originally appeared on the LSE Media Policy blog under a CC-BY-NC-ND licence