Met commissioner defends facial recognition

The Metropolitan Police commissioner, Cressida Dick, has defended the force’s use of facial recognition technology, calling critics ill-informed.

The Met began operational use of the technology earlier this month, despite concerns about its accuracy and privacy implications being raised by the likes of Amnesty International, Liberty and Big Brother Watch.

Speaking at the Royal United Services Institute (RUSI) on Monday, Dick stated: “I and others have been making the case for the proportionate use of tech in policing, but right now the loudest voices in the debate seem to be the critics, sometimes highly incorrect and/or highly ill-informed – and I would say it is for the critics to justify to victims of crimes why police shouldn’t use tech lawfully and proportionately to catch criminals.”

Addressing some of the specific criticisms, she explained that the Met’s system does not store biometric data, that it has been shown not to have an ethnic bias, and that human officers will always make the final decision on whether to intervene.

Dick said the only people on the facial recognition watchlist were those wanted for serious crimes and the only bias in the technology was that it was slightly harder to identify a wanted woman than a wanted man.

She said trials had led to the arrest of eight criminals who would probably not have been caught otherwise, adding that it was not for the Met to decide where the boundary between privacy and security should lie.

“Speaking as a member of public, I will be frank, in an age of Twitter and Instagram and Facebook, concern about my image and that of my fellow law-abiding citizens passing through LFR [live facial recognition] and not being stored, feels much, much, much smaller than my and the public’s vital expectation to be kept safe from a knife through the chest,” she added.

RUSI, where she was speaking, published its own study at the weekend, stating that guidelines were required to ensure that the use of data analytics, artificial intelligence (AI) and computer algorithms developed “legally and ethically”.

The report suggested that the police’s expanding use of digital technology to tackle crime was partly driven by funding cuts, and that officers were battling information overload as the volume of data surrounding their work grows.

Commissioned by the Centre for Data Ethics and Innovation, the research stated that while technology could help improve police “effectiveness and efficiency”, it was held back by “the lack of a robust empirical evidence base, poor data quality and insufficient skills and expertise”.
