1 Police use of technology
Police forces have always used the latest technology to track and identify potential offenders. Recently, some forces have deployed Live Facial Recognition (LFR) systems, usually at large public events. LFR links one or more camera feeds (usually mounted on a van) to a computer where AI (artificial intelligence) software maps facial features in real time, comparing this data to a predetermined watchlist of people of interest (for example, missing persons or individuals with outstanding warrants). If a match is found, officers on the ground are alerted and decide whether to make a stop.
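The matching step described above, comparing a live facial 'map' against a watchlist, can be sketched in simplified form. This is an illustrative toy example only, not the algorithm used by any police force: real LFR systems derive high-dimensional face 'embeddings' from trained neural networks, and the names, vectors and threshold below are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Measure how alike two feature vectors are (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def check_against_watchlist(face_embedding, watchlist, threshold=0.9):
    """Return the watchlist entries whose similarity to the live face exceeds the threshold."""
    return [name for name, emb in watchlist.items()
            if cosine_similarity(face_embedding, emb) >= threshold]

# Invented data: real embeddings have hundreds of dimensions, not three.
watchlist = {
    "person_A": [0.9, 0.1, 0.2],
    "person_B": [0.1, 0.8, 0.5],
}
live_face = [0.88, 0.12, 0.22]

matches = check_against_watchlist(live_face, watchlist)
```

The choice of threshold is central to the controversies discussed later in this section: setting it too low produces false matches (innocent people flagged), while setting it too high misses genuine persons of interest, which is why the accuracy of these algorithms has been independently tested.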
Figure 2: Notice warning the public of overt use of LFR in Croydon in 2024 [Description: A red sign with white writing displayed on a pavement reading: ‘Police live facial recognition in operation.’ Beneath are several paragraphs of smaller text.] Source: https://www.bbc.co.uk/news/uk-england-london-68638348
The technology was first used at the UEFA Champions League Final in Cardiff in 2017 (College of Policing, 2024). During early pilots of LFR, concerns over privacy, data protection and equality laws led to a court case against South Wales Police. Following the final judgement in this case (R (Bridges) v South Wales Police, Court of Appeal, 2020), the College of Policing issued Authorised Professional Practice guidance on the use of LFR, and tests were commissioned on the accuracy of the algorithms used by police forces’ LFR systems.
Activity 1 Police use of live facial recognition
In 2021–22, the House of Lords Justice and Home Affairs Committee considered the issue of new technologies in the justice system and focussed in part on the use of LFR.
Read the excerpt below from the ‘Summary’ of the Committee’s report (pp. 3–4). As you read, consider the following question:
What is the Committee’s view on the use of AI in live facial recognition systems by the police?
In recent years, and without many of us realising it, Artificial Intelligence has begun to permeate every aspect of our personal and professional lives. We live in a world of big data; more and more decisions in society are being taken by machines using algorithms built from that data, be it in healthcare, education, business, or consumerism.
Our Committee has limited its investigation to only one area–how these advanced technologies are used in our justice system.
[...]
We began our work on the understanding that Artificial Intelligence (AI), used correctly, has the potential to improve people’s lives through greater efficiency, improved productivity, and in finding solutions to often complex problems.
But while acknowledging the many benefits, we were taken aback by the proliferation of Artificial Intelligence tools potentially being used without proper oversight, particularly by police forces across the country. Facial recognition may be the best known of these new technologies but in reality there are many more already in use, with more being developed all the time.
[...]
Informed scrutiny is therefore essential to ensure that any new tools deployed in this sphere are safe, necessary, proportionate, and effective. This scrutiny is not happening.
Instead, we uncovered a landscape, a new Wild West, in which new technologies are developing at a pace that public awareness, government and legislation have not kept up with.
Public bodies and all 43 police forces are free to individually commission whatever tools they like or buy them from companies eager to get in on the burgeoning AI market.
And the market itself is worryingly opaque. We were told that public bodies often do not know much about the systems they are buying and will be implementing, due to the seller’s insistence on commercial confidentiality–despite the fact that many of these systems will be harvesting, and relying on, data from the general public.
[...]
We learnt that there is no central register of AI technologies, making it virtually impossible to find out where and how they are being used, or for Parliament, the media, academia, and importantly, those subject to their use, to scrutinise and challenge them. Without transparency, there can not only be no scrutiny, but no accountability for when things go wrong. We therefore call for the establishment of a mandatory register of algorithms used in relevant tools.
And we echo calls for the introduction of a duty of candour on the police to ensure full transparency over their use of AI given its potential impact on people’s lives, particularly those in marginalised communities.
Comment
While the House of Lords Justice and Home Affairs Committee was generally in favour of technologies such as AI and accepted their importance, for example in the healthcare system, it was concerned about the lack of regulation, consistency and transparency in their use by police forces. As such, it pressed for greater regulation of the use of AI by police forces, particularly in relation to facial recognition.
Despite initial controversy, LFR continued to be deployed. In March 2024, the Metropolitan Police Service used it quite extensively in South London, and 17 people were arrested on the basis of LFR matches. The deployments prompted considerable protest and public concern. While initial concerns had been around the storage of data and potential bias in the algorithms used, these later protests focussed more on the proportionality of the level of surveillance, and the possibility that the technology would amplify alleged racial biases in police practices. Several London councils (Islington, Haringey and Newham) called for the technology not to be used inside their boundaries. As of 2024, the technology is still in use but remains controversial.
Many other examples could be found of new technologies, embraced and deployed by the police, that have subsequently proved controversial or problematic. The collection and use of DNA evidence (held in the National DNA Database, run by the Home Office and the NPIA) is another example of how the adoption of new technologies by the police is often not without controversy and risk.