An independent review recently concluded that biometric technology trialled by two UK police forces doesn’t discriminate based on race, but human rights advocates warn that people of colour will still be disproportionately targeted by law enforcement. Alexandra Leonards reports.
The Metropolitan Police and South Wales Police both recently said they would continue using live facial recognition (LFR) in public, after an independent review found that the technology they have deployed “does not discriminate based on gender, age or race”.
South Wales Police had previously paused its use of LFR after facing legal action over deploying the technology in a public space to compare pedestrians against a database of persons of interest. The resulting 2020 court case, the first of its kind, found that there was no clear guidance on where the technology could lawfully be used or who could be placed on an LFR ‘watchlist’. The court also ruled that the force had failed to take steps to establish whether the technology carried racial or gender biases.
Last year the two forces also came under scrutiny from researchers at the University of Cambridge’s Minderoo Centre for Technology and Democracy, who called for a ban on police use of the technology after an audit found that both forces had failed to meet “minimum ethical and legal standards”. The centre also said that because the police withheld important information about their use of LFR, it was difficult to evaluate whether the technology perpetuates racial profiling.
But tests carried out by the National Physical Laboratory (NPL) in April revealed that, when operated at a particular face-match threshold setting, the algorithms behind the LFR system used by both the Met and South Wales Police perform without variation between different demographic groups.
The Minderoo Centre welcomed the “substantial improvement” in accuracy. However, its response also warned that the 10 LFR trials carried out by the Metropolitan Police between 2016 and 2019 used a lower face-match threshold of 0.55, which, based on the NPL’s findings, it says would have produced worse performance for Black individuals than for white individuals.
The police forces will now use a threshold setting of 0.6 or above, which has a much lower false-positive rate, with the Met’s director of intelligence saying that the chance of a false match is now one in 6,000 people who pass the camera.
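To put that figure in context: a false-match rate of one in 6,000 would suggest that, in a deployment where 30,000 people walk past the camera, roughly five of them could be wrongly flagged against the watchlist, an illustrative calculation based on the Met’s stated figure rather than on published deployment data.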
Does accuracy mean the end of discrimination?
The Minderoo Centre said that while it is glad the police answered calls from lawmakers and civil society groups for independent scrutiny and testing, it was still troubled by the fact that the NPL research took place only after a significant number of deployments of the technology had already happened. It also pointed out that there are further concerns associated with LFR that go beyond questions of accuracy.
“Moreover, we need to consider ethical and legal concerns that go beyond the issues of accuracy and bias that the NPL evaluates,” it says. “We need to understand and consider the impact of facial recognition on policing practice and on communities.”
Some human rights organisations argue that a more accurate algorithm does not stop the technology from continuing to facilitate discriminatory practices.
“The way we understand the concept of crime is already a racialised and classed concept,” says Emmanuelle Andrews, policy and campaigns manager at Liberty (formerly known as the National Council for Civil Liberties). “We also know that police go out and over-police, surveil certain communities, and arrest certain people disproportionately and bring them into custody.”
Custody images, which police use to build LFR watchlists, are taken on arrest regardless of whether a person is later charged or convicted. Even if someone is acquitted or released without charge, their images are not automatically deleted, unlike their DNA and fingerprints.
Measured by arrests per 1,000 people, Black people are more than three times as likely to be arrested as white people, people of mixed ethnicity nearly twice as likely, and Asian people are also arrested at a higher rate than their white counterparts. It is therefore likely that people of colour will also be overrepresented on LFR watchlists, given that these are built from custody images.
“Facial recognition watchlists are based on these images, and since there is no explicit legal basis for facial recognition technology, police forces can simply use it however, wherever, and against whomever they choose,” warns Andrews.
Andrews says that given the findings of Louise Casey’s well-publicised review of the Met, which concluded the force was institutionally racist, misogynistic and homophobic, Liberty is concerned about the force being handed more tools that “seek to absolve them of this”.
“Fundamentally, technology is not neutral and will continue to be used to oppress communities across the country,” Andrews warns.
Police use of facial recognition remains a highly contentious issue in the UK, with many discounting its benefits because of concerns about privacy and discriminatory practice. However accurate the algorithm, not enough has yet been done to allay concerns that live facial recognition systems and their watchlists will reinforce the same biases that already see marginalised communities disproportionately arrested and targeted by the police.