Privacy advocates in the United Kingdom responded with alarm Friday to an announcement that the Metropolitan Police plans to use live facial recognition cameras at specific London locations “to try to locate and arrest wanted people.”
“This decision represents an enormous expansion of the surveillance state and a serious threat to civil liberties in the U.K.,” Silkie Carlo, director of the London-based privacy campaign group Big Brother Watch, declared in a statement.
“This is a breathtaking assault on our rights and we will challenge it, including by urgently considering next steps in our ongoing legal claim against the Met and the Home Secretary,” Carlo added. “This move instantly stains the new government’s human rights record and we urge an immediate reconsideration.”
We condemn the decision of @metpoliceuk to roll out Facial Recognition technology in London.
Considering how the Prevent programme has been used to target activists and minorities, we have little confidence that this won’t be used in the same way. https://t.co/nQapSSjV22
— YouthStrike4Climate (@Strike4Youth) January 24, 2020
The rights advocacy group Liberty, based in London, issued a similarly scathing statement slamming the Met’s decision.
“This is a dangerous, oppressive, and completely unjustified move by the Met,” said Clare Collier, Liberty’s advocacy director. “Facial recognition technology gives the state unprecedented power to track and monitor any one of us, destroying our privacy and our free expression.”
“Rolling out a mass surveillance tool that has been rejected by democracies and embraced by oppressive regimes is a dangerous and sinister step,” she warned. “It pushes us towards a surveillance state in which our freedom to live our lives free from state interference no longer exists.”
Ed Bridges last year lost his case against South Wales Police over its use of automated facial recognition (AFR). His crowdfunded case was the world’s first legal challenge over police use of facial recognition technology.
The court accepted the police’s argument that AFR apparatus placed openly in public is not a form of covert surveillance that would contravene the Regulation of Investigatory Powers Act 2000, which states that “surveillance is covert if, and only if, it is carried out in a manner that is calculated to ensure that persons who are subject to the surveillance are unaware that it is or may be taking place.”
The Met’s announcement presents plans to use live facial recognition (LFR) technology as in line with the court’s ruling, saying that “the Met will begin operationally deploying LFR at locations where intelligence suggests we are most likely to locate serious offenders. Each deployment will have a bespoke ‘watch list,’ made up of images of wanted individuals, predominantly those wanted for serious and violent offenses.”
“At a deployment, cameras will be focused on a small, targeted area to scan passers-by,” the statement continued. “The cameras will be clearly signposted and officers deployed to the operation will hand out leaflets about the activity. The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video, or ANPR.”
The Met’s assistant commissioner, Nick Ephgrave, framed the move as “an important development for the Met and one which is vital in assisting us in bearing down on violence.” Ephgrave said that “we are using a tried-and-tested technology, and have taken a considered and transparent approach in order to arrive at this point.”
Critics, however, pushed back against the suggestion that LFR technology is reliable. BBC News reported:
Trials of the cameras have already taken place on 10 occasions in locations such as Stratford’s Westfield shopping center and the West End of London.
The Met said that in these trials, 70% of wanted suspects in the system who walked past the cameras were identified, while only one in 1,000 people generated a false alert.
But an independent review of six of these deployments found that only eight out of 42 matches were “verifiably correct.”
Referencing those findings, Big Brother Watch’s Carlo said Friday that moving forward with the use of this technology “flies in the face of the independent review showing the Met’s use of facial recognition was likely unlawful, risked harming public rights, and was 81% inaccurate.”
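The 81% figure quoted above follows directly from the independent review’s numbers. A minimal sketch, using only the match counts reported in the article:

```python
# Figures from the independent review of six Met trial deployments,
# as reported in the article above.
total_matches = 42      # matches flagged by the system
verified_correct = 8    # matches found "verifiably correct"

# Share of flagged matches that could NOT be verified as correct --
# the basis for the "81% inaccurate" claim.
inaccuracy = (total_matches - verified_correct) / total_matches
print(f"{inaccuracy:.0%}")  # -> 81%
```

Note this is a rate among flagged matches only; it is a different quantity from the Met’s own “one in 1,000” figure, which measures false alerts among all passers-by scanned.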
Sharing the Met’s announcement on Twitter, Big Brother Watch said, “See you in court.”
The Met’s move, the New York Times noted, “comes amid a worldwide debate about the use of facial recognition systems. Police departments contend that the software gives them a technological edge to catch criminals. Critics say the technology is an invasion of privacy and is being rolled out without adequate public debate.”
Critics often highlight the technology’s racial and gender biases. As Common Dreams reported last month, the U.S. government’s first major federal study of facial recognition surveillance confirmed civil liberties groups’ warnings that the technology is most accurate for white middle-aged men and frequently misidentifies people of other demographics, particularly people of color.
In the absence of federal regulations on facial recognition technology in the United States, some cities have started banning local agencies from using it. The digital rights group Fight for the Future, which launched the Ban Facial Recognition campaign last year, maintains an interactive map of the U.S. that shows “where facial recognition surveillance is happening, where it’s spreading to next, and where there are local and state efforts to rein it in.”