Are biometrics above the law?

Alongside ethical concerns, unclear governance has driven a backlash against the use of second-generation biometrics by police. Alexandra Leonards finds out where the tech fits into current legislation and looks at one of the technology’s biggest flaws: racial bias.

Prompting formal investigations, media debate, and even legal action, the use of facial recognition by police forces has proved highly contentious in recent years. Longstanding ethical concerns are compounded by ambiguity around where newer biometric systems sit within existing legal frameworks.

Contemporary biometrics appear to be in a legal limbo, with critics warning that governance of the technology is at best unclear, at worst non-existent.

Although these tools are not strictly governed by current legislation, over the past few years UK police have experimented with everything from live facial recognition to gait analysis and voice recognition.

Biometrics legislation: what’s the verdict?

The law currently governing police use of biometrics is the Protection of Freedoms Act (PoFA), enacted nearly a decade ago, which oversees the use of DNA and fingerprints.

Since the legislation was established, many new, ‘second-generation’ biometrics technologies have been developed and adopted.

“Exploration by the police of the new biometric technologies and AI-driven analytics has been unnecessarily dampened by the failure of the Home Office to provide governance, leadership and re-assurance to support such work,” writes Paul Wiles, the former commissioner for the retention and use of biometric material, in an annual report submitted to the home secretary in 2020.

In the same year, South Wales Police faced legal action for its use of automatic facial recognition (AFR) in a public space. The technology in question compared pedestrians to a database of persons of interest.

The case was the first time a court had ever considered this kind of technology.

The court initially decided that the police had not broken the law, but the Court of Appeal soon ruled that the force’s use of AFR had in fact been unlawful.

The court found that there had been no clear guidance on where the technology could be used or who could be put on a watchlist. It also ruled that the data protection impact assessment accompanying the technology was inadequate.

Additionally, the court said that the force had not taken reasonable steps to find out whether the AFR software exhibited racial or gender bias.

Meanwhile, the Metropolitan Police Service (MPS) announced that it would adopt live facial recognition to help identify perpetrators and victims of gun and knife crime, child sexual exploitation, and other serious crimes.

According to Wiles’ report, the Met published a legal mandate for its use of live facial recognition on its website, alongside a guidance document for the governance of deployments. The report explains that these steps were taken to show the legality of the force’s use of the technology and to demonstrate it was keeping inside the “jurisprudence of the South Wales judgement.”

The former commissioner says that while this was indeed a legal move, the public’s view cannot be forgotten.

At the time, the London Policing Ethics Panel, an independent panel set up by the Mayor of London to provide ethical advice on policing issues that affect public confidence, found that while police use of newer biometrics raised important ethical issues that should be addressed, these were not grounds enough to stop its use.

But data protection and privacy specialist Owen Sayers claims that UK police are not currently adhering to many of the Data Protection Act’s law enforcement processing requirements for biometric technologies like facial recognition. He sees the appeal court judgment against South Wales Police as testament to this.

“We are seeing ‘ethics panels’ springing up all over the place in law enforcement today - mechanisms that are I am sure intended to give the public confidence that the police are doing the right thing,” says Sayers. “The problem is that there is ample evidence – in the UK at least – that even though they are creating ethics panels, the UK police and wider criminal justice system are also breaking the law consistently and repetitively – and their own internal policies – and seem to be doing so with gay abandon.”

The call for new legislation

In his report, the former biometrics commissioner calls for new legislation setting rules for police use of biometrics, as well as for other users of the technology.

“The alternative is that we are likely to see further legal challenges to other biometric use by the police,” says Wiles.

Although such legal challenges help determine how the police should behave, he warns that without new legislation the result would be much slower adoption of new biometrics and a reliance on “judge-made law.”

Even Wiles’ predecessor argued that new laws would be essential to the governance of second-generation biometrics.

New legislation was proposed in the government’s most recent manifesto, but it’s unlikely to come to fruition any time soon. Last month the government finally responded to a four-year-old document written by the biometrics commissioner and the forensic science regulator, which strongly called for legislation on modern biometrics. The government said that there already exists a “comprehensive legal framework” for the management of biometrics, spread across a number of laws.

These laws include, says the government: “the police common law powers to prevent and detect crime, the Data Protection Act 2018, the Human Rights Act 1998, the Equality Act 2010, the Police and Criminal Evidence Act 1984 (PACE), the Protection of Freedoms Act 2012 (POFA), and police forces’ own published policies.”

But some argue that existing legislation is not sufficient to govern the latest biometric technologies.

The government did, however, say that it would “watch with interest” the progress of Scotland’s recent legislation, which will oversee how second-generation biometric data is stored. The Scottish government has also created a new biometrics commissioner role.

While Scotland gains a biometrics commissioner, the rest of the UK loses one. The role has now been merged with that of the surveillance camera commissioner.

The merger has sparked concern among critics, who believe it places a huge responsibility on just one person.

“The number of eyes looking at the issues of biometrics use and surveillance camera use has just halved,” warns Sayers. “The useful, and I think very important, counterpoint that existed between the biometrics commissioner – traditionally a scientific specialist – and the surveillance camera commissioner – generally a former senior Police Officer – has been lost.”

He adds: “Now it’s down to one individual to comment on these hugely diverse topics and to protect the public interest in the face of clear governmental and police desire to accelerate roll-out of new technology, and which evidentially they seem to not do their legal diligence upon particularly well.”

Biometrics in the EU

The UK isn’t the only country grappling with biometrics and the legislation surrounding the technology. Greece’s plans to roll out ‘smart policing’ this summer have been criticised by digital rights groups, who accuse the police of failing to meet legal requirements.

The €4 million programme will see the Hellenic Police sporting smartphone-sized devices that enable speedy facial recognition and automated fingerprint identification.

These devices will be used during police stops, where officers can take close-up photographs of a person’s face and collect their fingerprints. The images and fingerprints will be compared in real time with data already stored in EU and national databases for identification purposes.

Back in December 2019, Greek digital rights organisation Homo Digitalis filed a freedom of information request with the Hellenic Police to learn more about the technology. After receiving what it describes as an “unsatisfying” response, the organisation proceeded with a complaint against the plans before the Hellenic Data Protection Authority (DPA).

According to the group, the DPA started an official investigation into the plans in August last year.

“We claim that the processing of biometric data, such as the data described in the contract, is allowed only when three criteria are met,” says Eleftherios Chelioudakis, technology lawyer and co-founder of Homo Digitalis. “One, it is authorised by Union or Member State law; two, it is strictly necessary; three, it is subject to appropriate safeguards for the rights and freedoms of the individuals concerned.”

Chelioudakis claims that Greece’s smart policing programme meets none of these criteria.

“Specifically, there exist no special legal provisions allowing for the collection of such biometric data during police stops by the Hellenic Police,” he says. “Moreover, the use of these devices cannot be justified as strictly necessary, since the identification of an individual is adequately achieved by the current procedure used. Nevertheless, such processing activities are using new technologies and are very likely to result in a high risk to the rights and freedoms of the data subjects.”

Aaron Amankwaa, research scientist in forensic genetics, biometrics, and biology at Northumbria University, says that the potential benefits of the technology are very clear in terms of improving the efficiency of the police.

“The privacy issue here is which specific databases will be used and what safeguards are already in place for those databases,” says Amankwaa. “The general principles that should apply here when drawing the line are the necessity [or] relevance of data collection; the severity of offences for which data will be collected and searched in databases; and the characteristics of data subjects.”

Additionally, he says, there needs to be clarity on the purposes of the data collected from individuals, how long the data will be retained, and when it will be destroyed.

“Greek smart policing technology falls at the first data protection hurdle: in the EU, just like in the UK, law enforcement has its own dedicated legal text, not the GDPR, but the Law Enforcement Directive: all the data protection principles apply across law enforcement,” says Chiara Rustici, data regulation analyst from the law specialist group at BCS, The Chartered Institute for IT. “Policing is not a legitimation of blanket biometric data collection; while explicit consent may not be possible when it circumvents the purpose of an investigation, lawfulness, necessity and proportionality remain key principles and prior authorisation by the country's Data Protection Supervisory Authority is a necessary legitimacy check.”

Rustici adds that the onus is on the Greek police to demonstrate the necessity and proportionality of all the proposed data linkage.

This month the European Union announced plans to ban the use of AI systems for “high risk” applications like mass surveillance or ranking social behaviour. Under the new rules, authorities would also need special permission for the use of biometrics like facial recognition in public spaces.

The EU also said that it would make exceptions to the rules for certain public security concerns, including military uses.

It’s unclear at this stage whether these upcoming regulations, which are likely to take several years to come into force, will have any impact on Greece’s upcoming plans.

In-built discrimination

Another argument for new legislation is that it could help address racial bias in biometrics. This was one of the issues raised by the Court of Appeal in the South Wales Police case, in which the court found the force had not done the due diligence needed to establish whether the technology was discriminatory.

A study by the US National Institute of Standards and Technology (NIST), which evaluated the impact of race, age, and sex on facial recognition software, found that, depending on the algorithm, black and Asian faces were between 10 and 100 times more likely than white faces to be misidentified in one-to-one matching.

For one-to-many matching, which compares a person’s image with a database like that used by police, there were higher rates of false positives for black women. NIST said that false positives, which happen when the software wrongly identifies two different people as the same person, are particularly important because they can lead to false accusations.
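
To see why NIST singles out false positives in one-to-many searching, it helps to run the arithmetic: even a tiny per-comparison false match rate compounds across a large watchlist. The short Python sketch below illustrates the effect; the false match rate and watchlist size are illustrative assumptions, not figures from the NIST study.

```python
# A minimal sketch (illustrative numbers, not from any real system) of why
# false positives matter more in one-to-many matching: a low per-comparison
# false match rate compounds across a large watchlist.

def prob_false_hit(false_match_rate: float, database_size: int) -> float:
    """Probability that a search against the database returns at least
    one false positive, assuming independent comparisons."""
    return 1 - (1 - false_match_rate) ** database_size

# A 1-in-100,000 false match rate sounds tiny for a single one-to-one check...
print(f"{prob_false_hit(1e-5, 1):.5%}")        # ~0.001%
# ...but against a hypothetical 50,000-person watchlist, a false hit is likely.
print(f"{prob_false_hit(1e-5, 50_000):.1%}")   # ~39.3%
```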

Research from the Massachusetts Institute of Technology backs this up: an examination of facial-analysis software found an error rate of 0.8 per cent for lighter-skinned men and 34.7 per cent for darker-skinned women.

Mistakes made by these systems can lead to miscarriages of justice. A recent article in The New York Times revealed that over the past year in the US, police officers arrested and temporarily jailed three people, all black men, on the basis of inaccurate facial recognition matches.

But it’s not as if police are unaware of these flaws. Several years ago in the UK, documents from the Home Office, police forces, and university researchers reportedly showed that the police knew about racial bias in the technology.

According to the BBC, the police nevertheless failed on a number of occasions to test for this bias, despite having at least three opportunities to investigate the technology over a five-year period.

“There are two basic reasons for racial bias in any deployment of AI: one that depends on how representative of reality is the training data set, and one that depends on what the assumptions that we code into the algorithms are,” says BCS’ Chiara Rustici. “It's not the biometric element that makes data racially biased: it is always a human choice that produces the bias.”

She says that the biases in facial recognition are no more sophisticated than old-fashioned stereotypes, and that these technologies embody “long-standing, enduring human biases” that good auditing can readily correct.
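
Rustici’s point about auditing is a practical one: at its simplest, an audit measures error rates separately for each demographic group and flags large disparities. The Python sketch below shows that idea on invented illustrative data; no real system, dataset, or threshold is assumed.

```python
# A minimal sketch of the kind of audit Rustici describes: comparing false
# match rates across demographic groups. The records below are invented
# illustrative data; a real audit would use labelled evaluation sets.
from collections import defaultdict

# Each record: (group, match_reported_by_system, actually_the_same_person)
results = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", True, True),  ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, True),  ("group_b", False, False),
]

false_matches = defaultdict(int)
non_mated_trials = defaultdict(int)

for group, reported_match, same_person in results:
    if not same_person:          # only different-person pairs can yield false matches
        non_mated_trials[group] += 1
        if reported_match:
            false_matches[group] += 1

for group in sorted(non_mated_trials):
    fmr = false_matches[group] / non_mated_trials[group]
    print(f"{group}: false match rate = {fmr:.0%}")

# A large gap between groups (here 33% vs 67%) is the audit red flag:
# it signals bias in the training data or the matching threshold.
```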

“Generally, police databases are biased and not representative of the racial structure of populations,” says Northumbria University’s Aaron Amankwaa. “This is due to variations in policing practices which tend to target some minority groups or the nature of some offences which may be localised to specific communities.”

Amankwaa says that biometric misidentification is a potential risk with AI- and machine-learning-driven databases, and that to minimise this risk, the police should use additional independent evidence to corroborate any identification.

While the software itself can be biased, how the technology is used by law enforcement can also be discriminatory.

“In the technical specifications document of [Greece’s smart policing] contract, the police acknowledge that the use of this gear will increase ‘the average number of daily police stops’ as well as the ‘efficiency in the detection of third-country nationals who have exceeded the period of their legal residence in the country,’” claims Homo Digitalis’ Eleftherios Chelioudakis. “So, this project is specifically targeting marginalised communities, such as third country nationals; over-policing these communities gives rise to discrimination.”

Discussions around the use of second-generation biometrics, particularly by police, are complex. But they’re arguably made more complicated by ambiguous legal frameworks.

Perhaps the technology is not above the law. But there are ethical and privacy concerns that, left without clear-cut legislation, could force the police into a decades-long battle with the public and criminal justice system.
