The UK Is Expanding Facial Recognition Faster Than It Can Test for Bias
Essex Police has paused its use of live facial recognition — technology that scans the faces of people passing through public spaces in real time and checks them against a police watchlist — after a University of Cambridge study found the system performed differently depending on the race of the person being scanned. The force had been running the technology since summer 2024, scanning roughly 1.3 million faces across more than 70 deployments and making 48 arrests. Researchers tested 188 volunteers during an active deployment and found that Black participants were 27% more likely to be correctly identified than participants from other ethnic groups. Put plainly: if you were on a watchlist and you walked past those cameras, the system was more likely to catch you if you were Black.
A second study, conducted by the National Physical Laboratory under standardized lab conditions, found no statistically significant bias in the same technology. Two studies, same system, conflicting findings; that is itself a problem. If the tools for measuring bias in facial recognition produce contradictory results depending on whether you test in a lab or in the field, then there is no reliable way to know whether any given deployment is fair, and a technology whose fairness can't be verified isn't ready for routine deployment. The UK's Information Commissioner's Office, the body responsible for data protection oversight, audited Essex Police's program and warned that all forces using facial recognition should conduct routine bias testing. Without it, the ICO said, there is a real risk of unfairness.
What makes the Essex situation alarming isn't the disagreement between two studies. It's the policy context surrounding it. In January, Home Secretary Shabana Mahmood announced the government would expand the number of live facial recognition vans available to police from 10 to 50, making them available to every force in England and Wales. The Home Office committed over £37 million to facial recognition capabilities and a new national policing AI center called Police.AI. In an interview with former Prime Minister Tony Blair, Mahmood described her ambition to use AI and facial recognition to achieve what Jeremy Bentham envisioned with the panopticon: a prison designed so that any inmate could be watched at any moment, without knowing when. Her exact framing: the goal is for the eyes of the state to be on people at all times.
That’s the Home Secretary describing a national surveillance architecture while one of the forces already operating the technology has just paused its use because the system subjects Black people to higher rates of identification by law enforcement.
Essex ran more than 70 deployments before the bias findings surfaced. The ICO’s audit came after the pause, not before it. The Home Office announced its nationwide expansion while its own consultation on a new legal framework for the technology was still open. At every stage, deployment outpaced accountability. In reporting I did on facial recognition and policing in the UK, I found the same pattern. The Metropolitan Police’s own data showed that 80% of the people incorrectly flagged by its facial recognition system were Black. People I spoke with, including one person who had been stopped after the cameras flagged them, described a system that offered minimal transparency and almost no way to challenge its decisions. The technology was deployed first and scrutinized later, if at all.
The UK’s data protection regulator has said that forces should test for bias routinely and that the risk of unfairness is real. But saying the right thing while expanding deployment fivefold isn’t caution. It’s a bet that oversight will catch up with the rollout. Essex has just shown that it hasn’t.