Top 10 police technology stories of 2025 | Computer Weekly
In 2025, Computer Weekly’s police technology coverage focused extensively on developments in the use of data-driven technologies such as facial recognition and predictive policing.
This included stories on the Met’s decision to deploy permanent live facial recognition (LFR) cameras in Croydon and the Home Office launching a formal consultation on laws to regulate its use, as well as reports questioning the lawfulness, necessity and proportionality of how UK police are using the technology.
Further stories continued Computer Weekly’s ongoing coverage of police hyperscale cloud use, after documents obtained from Scottish policing bodies revealed that Microsoft is refusing to hand them critical information about its data flows.
Computer Weekly also reported on efforts to change police data protection rules, which would essentially legalise previously unlawful practices and pose a risk to the UK’s law enforcement data adequacy with the European Union (EU).
One investigation by freelance journalists Apostolis Fotiadis, Giacomo Zandonini and Luděk Stavinoha also revealed how the EU’s law enforcement agency has been quietly amassing data to feed an ambitious-but-secretive artificial intelligence (AI) development programme.
The Home Office formally opened a consultation on the use of facial recognition by UK police at the start of December 2025, saying the government is committed to introducing a legal framework that sets out clear rules for the technology.
The move – initially announced by policing minister Sarah Jones in early October 2025, after then home secretary Yvette Cooper told a Lords Committee in July that the UK government would create “a proper, clear governance framework” to regulate police use of the tech – marks a distinct shift in Home Office policy, which for years has claimed there is already a “comprehensive” legal framework in place.
The Home Office has now said that although a “patchwork” legal framework for police facial recognition exists (including for the increasing use of the retrospective and “operator-initiated” versions of the technology), it does not give police themselves the confidence to “use it at significantly greater scale … nor does it consistently give the public the confidence that it will be used responsibly”.
It added that the current rules governing police LFR use are “complicated and difficult to understand”, and that an ordinary member of the public would be required to read four pieces of legislation, police national guidance documents and a range of detailed legal or data protection documentation from individual forces to fully understand the basis for LFR use on their high streets.
While the use of LFR by police – beginning with the Met’s deployment at Notting Hill Carnival in August 2016 – has ramped up massively in recent years, there has so far been minimal public debate or consultation.
UK police forces are “supercharging racism” through their use of automated “predictive policing” systems, as they are based on profiling people or groups before they have committed a crime, according to a 120-page report published by Amnesty International.
While proponents claim these systems can help more efficiently direct resources, Amnesty highlighted how predictive policing tools are used to repeatedly target poor and racialised communities, as these groups have historically been “over-policed” and are therefore massively over-represented in police data sets.
This then creates a negative feedback loop, where these so-called “predictions” lead to further over-policing of certain groups and areas, reinforcing and exacerbating the pre-existing discrimination as increasing amounts of data are collected.
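To see why the loop sustains itself, consider a deliberately simple toy simulation (our illustration, not drawn from Amnesty’s report or any force’s real data): two areas with identical underlying offence rates, where one starts out over-represented in the records and patrols are then allocated in proportion to past records.

```python
import random

random.seed(42)

# Two areas with the SAME underlying offence rate; area A starts out
# over-represented in police records due to historical over-policing.
TRUE_RATE = 0.05
recorded = {"A": 200, "B": 100}

for year in range(10):
    total = sum(recorded.values())
    # "Predictive" allocation: patrols are assigned in proportion
    # to each area's share of past records.
    patrols = {area: int(1000 * n / total) for area, n in recorded.items()}
    for area, n_patrols in patrols.items():
        # Offences only enter the data set where officers are deployed,
        # so areas with more records attract yet more records.
        recorded[area] += sum(
            1 for _ in range(n_patrols) if random.random() < TRUE_RATE
        )

share_a = recorded["A"] / sum(recorded.values())
print(f"Share of records attributed to area A after 10 rounds: {share_a:.0%}")
# Roughly two-thirds: the initial 2:1 skew never corrects itself,
# even though both areas' true offence rates are identical.
```

The sketch shows only that allocating resources “predictively” from skewed records reproduces the skew; the system never tests the assumption that generated the records in the first place.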
“The use of predictive policing tools violates human rights. The evidence that this technology keeps us safe just isn’t there, the evidence that it violates our fundamental rights is clear as day. We are all much more than computer-generated risk scores,” said Sacha Deshmukh, chief executive at Amnesty International UK, adding that these systems are deciding who is a criminal based “purely” on the colour of their skin or their socio-economic background.
In June 2025, Green Party MP Siân Berry argued in the Commons that “predictive” policing technologies infringe human rights “at their heart” and should be prohibited in the UK, after tabling an amendment to the government’s forthcoming Crime and Policing Bill.
Highlighting the dangers of using predictive policing technologies to assess the likelihood of individuals or groups committing criminal offences in the future, Berry said that “such technologies, however cleverly sold, will always need to be built on existing, flawed police data … That means that communities that have historically been over-policed will be more likely to be identified as being ‘at risk’ of future criminal behaviour.”
Berry’s amendment would also prohibit the use of certain information by UK police to “predict” people’s behaviour: “Police forces in England and Wales shall be prohibited from … Predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of natural persons.”
In April, the Met Police announced it was planning to install the UK’s first permanent LFR cameras in Croydon, but critics raised concerns that this continues the force’s pattern of deploying the technology in areas where the Black population is much higher than the London average.
Local councillors also complained that the decision to permanently install facial recognition cameras was taken without any engagement between the force and local residents, echoing situations that have happened in boroughs such as Newham and Lewisham.
According to data gathered by Green Party London Assembly member Zoë Garbett, over half of the 180 LFR deployments that took place during 2024 were in areas where the proportion of Black residents is higher than the city’s average, including Lewisham and Haringey.
While Black people comprise 13.5% of London’s total population, the proportion is much higher in the Met’s deployment areas, with Black people making up 36% of the Haringey population, 34% of the Lewisham population, and 40.1% of the Croydon population.
“The Met’s decision to roll out facial recognition in areas of London with higher Black populations reinforces the troubling assumption that certain communities … are more likely to be criminals,” she said, adding that while nearly two million people in total had their faces scanned across the Met’s 2024 deployments, only 804 arrests were made – a rate of just 0.04%.
In March 2025, Computer Weekly reported that proposed reforms to police data protection rules could undermine law enforcement data adequacy with the European Union (EU).
During the committee stage of Parliamentary scrutiny, the government’s Data Use and Access Bill (DUAB) – now an act – sought to amend the UK’s implementation of the EU Law Enforcement Directive (LED), which is transposed into UK law via Part Three of the Data Protection Act (DPA) 2018.
In combination with the current data handling practices of UK law enforcement bodies, the bill’s proposed amendments to Part Three – which include allowing the routine transfer of data to offshore cloud providers, removing the need for police to log justifications when accessing data, and enabling police and intelligence services to share data outside of the LED rules – could present a challenge for UK data adequacy.
In June 2021, the European Commission granted “data adequacy” to the UK following its exit from the EU, allowing the free flow of personal data to and from the bloc to continue, but warned the decision may yet be revoked if future data protection laws diverge significantly from those in Europe.
While Computer Weekly’s previous reporting on police hyperscale cloud use has identified major problems with the ability of these services to comply with Part Three, the government’s DUAB changes are seeking to solve the issue by simply removing the requirements that are not being complied with.
Rather than bring these transfers into compliance, the government simply dropped the requirements from the DUAB, meaning policing bodies will no longer be required to assess the suitability of a transfer or report it to the data regulator.
In August, Computer Weekly reported on documents obtained from the Scottish Police Authority (SPA), which showed that Microsoft is refusing to tell Scottish policing bodies where and how the sensitive law enforcement data uploaded to its cloud services will be processed.
Citing “commercial confidentiality”, the tech giant’s refusal to hand over crucial information about its international data flows to the SPA and Police Scotland means the policing bodies are unable to satisfy the law enforcement-specific data protection rules laid out in Part Three of the Data Protection Act 2018 (DPA18), which places strict limits on the transfer of policing data outside the UK.
“MS is unable to specify what data originating from SPA will be processed outside the UK for support functions,” said the SPA in a detailed data protection impact assessment (DPIA) created for its use of O365. “To try and mitigate this risk, SPA asked to see … [the transfer risk assessments] for the countries used by MS where there is no [data] adequacy. MS declined to provide the assessments.”
The SPA DPIA also confirms that, on top of refusing to provide key information, Microsoft itself has told the police watchdog it is unable to guarantee the sovereignty of policing data held and processed within its O365 infrastructure.
Further revelations published by Computer Weekly a month later showed that policing data hosted in Microsoft’s hyperscale cloud infrastructure could be processed in more than 100 countries.
This information was not provided to the policing bodies by Microsoft, and only came to light because of an analysis conducted by independent security consultant Owen Sayers, who identified from the tech giant’s own distributed online documentation that Microsoft personnel or contractors can remotely access the data from 105 different countries, using 148 different sub-processors.
Although the documentation – which is buried in non-indexed, difficult-to-find web pages – has come to light in the context of Computer Weekly investigating police cloud use, the issue of routine data transfers in Microsoft’s cloud architecture affects the whole of the UK government and public sector, which are obliged by the G-Cloud and Tepas frameworks to ensure data remains in the UK by default.
According to multiple data protection litigation experts, the reality of Microsoft’s global data processing here, on top of its failure to meet key Part Three obligations, means data subjects could have grounds to successfully claim compensation from Police Scotland or any other force using hyperscale cloud infrastructure.
In November 2025, freelance journalists Apostolis Fotiadis, Giacomo Zandonini and Luděk Stavinoha published an extensive investigation into how the EU’s law enforcement agency has been quietly amassing data to feed an ambitious-but-secretive AI development programme.
Based on internal documents obtained from Europol, and analysed by data protection and AI experts, the investigation raised serious questions about the implications of the agency’s AI programme for people’s privacy across the bloc.
It also raised questions about the impact of integrating automated technologies into everyday policing across Europe without adequate oversight.
In May 2025, Computer Weekly reported on an equality impact assessment that Essex Police had created for its use of live facial recognition, but the document itself – obtained under Freedom of Information rules by privacy group Big Brother Watch and shared exclusively with Computer Weekly – was plagued with inconsistencies and poor methodology.
The campaigners told Computer Weekly that, given the issues with the document, the force had likely failed to fulfil its public sector equality duty (PSED) to consider how its policies and practices could be discriminatory.
They also highlighted how the force is relying on false comparisons to other algorithms and “parroting misleading claims” from the supplier about the LFR system’s lack of bias.
Other experts noted the assessment was “clearly inadequate”, failed to look at the systemic equalities impacts of the technology, and relied exclusively on testing of entirely different software algorithms used by other police forces trained on different populations to justify its conclusions.
After being granted permission to intervene in a judicial review of the Met’s LFR use – brought by anti-knife campaigner Shaun Thompson, who was wrongly stopped by officers after a false LFR identification – the UK’s equality watchdog said the force’s use of the tech is unlawful.
Highlighting how the Met is failing to meet key legal standards with its deployments – particularly around Articles 8 (right to privacy), 10 (freedom of expression) and 11 (freedom of assembly and association) of the European Convention on Human Rights – the UK’s Equality and Human Rights Commission (EHRC) said LFR should only be used where necessary, proportionate and constrained by appropriate safeguards.
“We believe that the Metropolitan Police’s current policy falls short of this standard,” said EHRC chief John Kirkpatrick.
The EHRC further highlighted how, when used on a large scale, even low error rates can affect a significant number of people by bringing unnecessary and unwanted police attention, and warned that its use at protests could have a “chilling effect” on people’s freedom of expression and assembly.
Senior police officers from both the Met and South Wales Police have previously argued that a major benefit of facial-recognition technology is its “deterrence effect.”
A comparative study of LFR trials by law enforcement agencies in London, Wales, Berlin and Nice found that although “in-the-wild” testing is an important opportunity to collect information about how AI-based systems like LFR perform in real-world deployment environments, the police trials conducted so far have failed to take into account the socio-technical impacts of the systems in use, or to generate clear evidence of the operational benefits.
Highlighting how real-world testing of LFR systems by UK and European police is a largely ungoverned “Wild West”, the authors expressed concern that “such tests will be little more than ‘show trials’ – public performances used to legitimise the use of powerful and invasive digital technologies in support of controversial political agendas for which public debate and deliberation is lacking, while deepening governmental reliance on commercially developed technologies which fall far short of the legal and constitutional standards which public authorities are required to uphold”.
Given the scope for interference with people’s rights, the authors – Karen Yeung, an interdisciplinary professorial fellow in law, ethics and informatics at Birmingham Law School, and Wenlong Li, a research professor at Guanghua Law School, Zhejiang University – said that evidence of the technology’s effectiveness in producing its desired benefits “must pass an exceptionally high threshold” if police want to justify its use.
They added that without a rigorous and full accounting of the technology’s effects – which is currently not taking place in either the UK or Europe – it could lead to the “incremental and insidious removal” of the conditions that underpin our rights and freedoms.
Europe’s Online Age Verification App Is Here
The European online age verification app is ready.
The app works with passports or ID cards, is built to be “completely anonymous” for the people who use it, works on any device (smartphones, tablets, and PCs), and is open source. “Best of all, online platforms can easily rely on our age verification app, so there are no more excuses,” said European Commission president Ursula von der Leyen at a press conference on Wednesday. “Europe offers a free and easy-to-use solution that can protect our children from harmful and illegal content.”
High Expectations
“It is our duty to protect our children in the online world just as we do in the offline world. And to do that effectively, we need a harmonized European approach,” von der Leyen said at Wednesday’s press conference. “And one of the central issues is the question, how can we ensure a technical solution for age verification that is valid throughout Europe? Today, I can announce that we have the answer.”
This answer takes the form of an open source app that any private company can repurpose, as long as it complies with European privacy standards and offers the same technical solution throughout the European Union. The user downloads the app, agrees to the terms and conditions, sets up a PIN or biometric access, and proves their age through an electronic identification system, or by showing a passport or ID card (in which case biometric verification is also provided). The app does not store your name, date of birth, ID number, or any other personal information, according to the European Commission—only the fact that you are over a certain age.
After that, when a person using the app wants to access a social network (minimum age: 13), pornographic site (minimum age: 18), or any other age-protected content, if they are logged in from a computer, they need only scan the QR code shown on the site they want to visit. If, on the other hand, the person logs in from a smartphone, the app sends the proof of age directly. The platform does not access the document with which the user proved it in the first place.
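The Commission has not published the exact cryptographic protocol, but the flow described above maps onto a standard signed-attestation pattern. The sketch below is a minimal illustration of that pattern (the names and mechanics are our assumptions, not the app’s actual implementation): an issuer signs a claim containing only the age assertion, and the platform verifies the signature without ever seeing the underlying document.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. Issuer side (hypothetical): after checking a passport or eID, the
#    verification authority signs a claim containing ONLY the age
#    assertion -- no name, date of birth or document number.
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps({"over_age": 18}).encode()
signature = issuer_key.sign(claim)

# 2. Platform side: the site receives (claim, signature), e.g. via the
#    QR-code handshake, and checks it against the issuer's published
#    public key. It learns nothing beyond "over 18".
issuer_public_key = issuer_key.public_key()
try:
    issuer_public_key.verify(signature, claim)
    print("Access granted:", json.loads(claim))
except InvalidSignature:
    print("Proof rejected")
```

A production system would also need proofs to be unlinkable across sites, for example by issuing fresh one-time tokens or using zero-knowledge techniques, since reusing the same signed blob would let platforms track a user from site to site.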
Adoption Event
The need to introduce a common system for the entire European Union has been discussed for some time, and according to commission technicians, the technical work is now complete. Of course, it will still be possible to circumvent the system—all it takes is for an adult to lend their phone to a younger friend—but the technological architecture exists, and it will be up to EU member states to decide whether to integrate it into national digital wallets or develop independent apps.
“No More Excuses”
For the app to really be effective, platforms must be obligated to verify the age of their users—that’s where things get tricky. The Digital Services Act, which went into effect in 2024, requires “very large online platforms”—those with more than 45 million monthly users in the European Union—to take concrete steps to mitigate systemic risks related to child protection, with heavy penalties for noncompliance.
“And that’s why Europe has the DSA: to call online platforms to their responsibilities. Because Europe will not tolerate platforms making money at the expense of our children,” European Commission executive vice president Henna Virkkunen told a press conference. She added that after an investigation into TikTok, the European institutions plan to take similar action against Facebook, Instagram, and Snapchat, as well as four porn sites. “Since the platforms do not have adequate age verification tools, we developed the solution ourselves,” she concluded. In short, as von der Leyen also remarked, “there are no more excuses.”
Bare Minimum
So far, this is the European framework that sets the general rules. On this basis, member states can consider more restrictive measures. Italy was among the first to discuss how to regulate the use of social media by minors but has so far not landed on anything concrete. Elsewhere in the EU, France’s Emmanuel Macron has been a trailblazer on the issue, pushing for a rule that would ban social networks entirely for minors under the age of 15. To date, this measure has received broad political support—but the outcome depends largely on compatibility with the Digital Services Act and the availability of effective age verification systems like the app the European Commission just released.
This article originally appeared on WIRED Italia and has been translated.
Anthropic Plots Major London Expansion
Anthropic is moving into a new London office as it seeks to expand its research and commercial footprint in Europe, setting up a scrap between the leading AI labs for talent emerging from British universities.
The company, which opened its first London office in 2023, is moving to the same neighborhood as Google DeepMind, OpenAI, Meta, Wayve, Isomorphic Labs, Synthesia, and various AI research institutions.
Anthropic’s new 158,000-square-foot office will have enough space for 800 people—four times its current head count—giving it room to potentially outscale OpenAI, which itself recently announced an expansion in London.
“Europe’s largest businesses and fastest-growing startups are choosing Claude, and we’re scaling to match,” says Pip White, head of EMEA North at Anthropic. “The UK combines ambitious enterprises and institutions that understand what’s at stake with AI safety with an exceptional pool of AI talent—we want to be where all of that comes together.”
UK government officials had reportedly attempted to coax Anthropic into expanding its presence in London after the company recently fell out with the US administration. Anthropic refused to allow its models to be used in mass surveillance and autonomous weapon systems, leading to an ongoing legal battle between the AI lab and the Pentagon.
As part of the expansion, Anthropic says it will deepen its work with the UK’s AI Security Institute, a government body that this week published a risk evaluation of its latest model, Claude Mythos Preview. According to Politico, the UK government is one of few across Europe to have been granted access to the model, which Anthropic has released to only select parties, citing concerns over the potential for its abuse by cybercriminals.
The increasing concentration of AI companies in the same London district is an important step in creating a pathway for research to translate into AI products, says Geraint Rees, vice-provost at University College London, whose campus is around the corner from Anthropic’s new office.
“This cluster didn’t emerge from a planning document. It grew because serious researchers and companies understand that proximity isn’t a nice-to-have,” he said last month, speaking at an event attended by WIRED. “That’s how the innovation system actually works. It’s not a clean, linear transfer from lab to market. It’s messier, richer, more human than that.”
CYBERUK ’26: UK lagging on legal protections for cyber pros | Computer Weekly
The increasingly long-in-the-tooth Computer Misuse Act (CMA) of 1990 remains an albatross around the neck of British cyber security professionals. Even though the UK government committed last December to reforming it, campaigners have warned that every minute of delay is holding back the nation’s security innovation, resilience, talent, and ability to defend itself against cyber attacks.
Ahead of the National Cyber Security Centre’s (NCSC’s) upcoming CYBERUK conference in Glasgow, the CyberUp Campaign for reform of the CMA has published a new report, titled Protections for Cyber Researchers: How the UK is being left behind, to maintain pressure on Westminster.
The CMA defines the vague offence of unauthorised access to a computer, which the campaigners want changed because it was written 35 years ago and fails to account for the development of the cyber security profession, or for the fact that cyber pros may sometimes need to hack into other systems in the course of their day-to-day work.
“Cyber attacks are growing in scale, sophistication and severity, with a devastating impact on infrastructure, businesses and charities,” said a CyberUp campaign spokesperson.
“While other countries have moved to refresh their cyber laws in response, the UK’s Computer Misuse Act hasn’t been updated since before the modern internet – hardly the best platform for accelerating our defences into the next decade.”
The group’s report highlights how other jurisdictions, including Australia, Belgium, France, Germany, Hong Kong, Malta, Portugal and the USA, have already secured legal protections for cyber professionals that enable them to go about their business without fear of prosecution.
In Portugal – Britain’s oldest formal ally under a treaty dating back to the 14th century – the government last year published Decreto-Lei 125/2025, implementing the European Union (EU) Network and Information Systems (NIS2) Directive and revising the country’s cyber crime law to ensure that ethical hackers and professional cyber security practitioners working in good faith are both recognised and protected.
Portugal’s laws now accept that some elements of cyber work may have to happen without explicit permission, or may involve unanticipated technical overreach that nonetheless has a legitimate purpose.
As such, Portugal says that security work undertaken in good faith won’t be punished as long as the researcher fulfils a set of conditions. For example, they must act only to find vulnerabilities, which must be reported immediately; they must avoid taking harmful actions, such as conducting DDoS attacks or installing malware; and they must respect the integrity of any data they find or access, deleting it within 10 days once the issue is addressed.
CyberUp said Portugal’s example demonstrates how cyber crime laws can be modernised to legally protect research carried out in the public interest.
“Portugal has demonstrated how to modernise their equivalent law through cyber legislation. We urge the government to follow this example and act swiftly through the Cyber Security and Resilience Bill to achieve meaningful reform, or risk lagging even further behind our peers,” the spokesperson said.
Defence Framework
Working with cyber security experts and legal advisors, the CyberUp campaign has developed its own Defence Framework that would allow cyber professionals to present a statutory defence in court as long as they adhere to the Framework’s four core principles.
- Harm vs benefit: The benefits of the activity must outweigh the potential harms;
- Proportionality: Cyber pros must take all reasonable steps to minimise the risks of their activity;
- Intent: They must act honestly and sincerely, and be clearly directed towards improving security;
- Competence: Their qualifications and professional memberships should demonstrate they are suitably equipped to perform cyber security work.
The campaigners say this framework will bring clarity and confidence to the security sector, enabling cyber pros to run essential research tasks without fear of criminal prosecution, helping organisations operate to recognised legal standards, and enabling a more open and collaborative relationship between the cyber sector and the UK government.