Tech
Privacy will be under unprecedented attack in 2026 | Computer Weekly
The privacy of electronic communications will face new risks in 2026, as the UK and other governments push for greater capabilities to harvest and analyse more data on private citizens, and to make it harder to protect communications with end-to-end encryption.
Over the next 12 months, we can expect more pressure from the UK and Europe to restrict the unencumbered use of end-to-end encrypted email and messaging services such as Signal, WhatsApp and many others.
In the 1990s, the US government tried and ultimately failed to persuade telecommunications companies to install a device known as the Clipper chip to provide the US National Security Agency (NSA) with “backdoor” access to voice and data communications.
The crypto wars of 2026 are more subtle: governments, law enforcement agencies and intelligence services are pushing controls and restrictions on encryption as a means of detecting child sexual abuse and terrorist material circulated through encrypted email and messaging systems.
The answer governments are settling on is to encourage the use of scanning technology in a voluntary or compulsory way, to identify problematic content before it is encrypted.
Cryptographers and computer scientists have repeatedly warned that such plans will create security vulnerabilities that will leave the public less safe than before.
Chat Control and client-side scanning
The European Parliament and Council are expected to adopt the controversial Child Sexual Abuse Regulation (CSAR) in spring 2026. In its current form, it proposes that messaging platforms voluntarily scan private communications for offending content, alongside proposals for age verification of users.
Known by the nickname Chat Control, its critics – such as former MEP Patrick Breyer, a jurist and digital rights activist – claim the regulation will open the doors to “warrantless and error-prone” mass surveillance of European Union (EU) citizens by US technology companies. The algorithms, say critics, are notoriously unreliable, potentially exposing tens of thousands of legal private chats to police scrutiny.
Chat Control will also put pressure on technology companies to introduce age checks to help them “reliably identify minors”, a move that would likely require every citizen to upload an ID or take a face scan to open an account on an email or messaging service. According to Breyer, this creates a de facto ban on anonymous communication, putting whistleblowers, journalists and political activists who rely on anonymity at risk.
Online Safety Act
In the UK, there remain concerns about provisions in the Online Safety Act that, if implemented by regulator Ofcom, would require technology companies to scan encrypted messages and emails.
These powers attracted widespread criticism from technology companies as the bill passed into law, with Signal warning it would pull its encrypted messaging service from the UK if it was forced to introduce what it called a “backdoor”.
Commentators think there is little current appetite for Ofcom to mandate client-side scanning for private communications, given the level of opposition.
But it may require providers of public and semi-public services, such as cloud storage, to introduce scanning services to detect illegal content.
“I think they may be waiting to see what happens in Europe with the Chat Control proposal, because it’s quite hard for the UK to go alone,” James Baker, campaigner at the Open Rights Group, told Computer Weekly.
Perceptual hash matching
One of the items on Ofcom’s agenda is a form of scanning, known as perceptual hash matching, which uses an algorithm to decide whether images or videos are similar to known child abuse or terrorism images.
A consultation document from Ofcom proposes requiring tech platforms that allow users to upload or share photographs, images and videos – including file storage and sharing services, and social media companies – to introduce the technology for detecting terrorism and abuse-related material.
“We also think some services should go further – assessing the role that automated tools can play in detecting a wider range of content, including child abuse material, fraudulent content, and content promoting suicide and self-harm, and implementing new technology where it is available and effective,” it says in its consultation document.
But there are questions about the accuracy of perceptual hash matching, and the risk that its use may lead to people wrongly being barred from online services for alleged crimes they have not committed.
Critics point out that perceptual hash matching used to be called “fuzzy matching” – and for good reason. Although its new name, “perceptual hash matching”, gives the impression of precision and predictability, in reality, it produces false positives and negatives.
Hundreds of people have been blocked from Instagram, owned by Meta, after being wrongly accused of breaching Meta’s policies on child sexual exploitation and abuse. The company’s actions took a huge emotional toll on the people affected, and in some cases led to people losing their online businesses, the BBC reported in October 2025.
Alec Muffett, security expert and former Facebook engineer, told Computer Weekly that Ofcom’s proposals display “a horrifying lack of safety by design” and said its proposal to force companies to adopt the technology without mitigating the potential risks is “derelict”.
“Perceptual hashing is just a fancy name for what we used to call ‘fuzzy matching’ with ‘digital fingerprints’, and even if we ignore the problem of false positives, we are left with the risk of creating an enormous cloud surveillance engine by logging all queries for even benign digital fingerprints,” he said.
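To see why "fuzzy matching" is the more honest name, it helps to look at how a perceptual hash behaves. The sketch below is a toy dHash-style difference hash over a small grayscale pixel grid; it is purely illustrative and is not the algorithm any real system such as PhotoDNA actually uses. The point it demonstrates is the one critics make: a visually trivial change (here, a uniform brightness shift) leaves the hash unchanged, which is what lets near-duplicates match, and is also why dissimilar images can land within a chosen distance threshold and trigger false positives.

```python
import random

def dhash(pixels):
    """Difference hash: one bit per horizontal neighbour comparison.

    Each row of 9 pixels yields 8 bits; an 8-row grid gives a 64-bit hash.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A toy 9x8 "image", a uniformly brightened copy, and an unrelated image.
random.seed(0)
img = [[random.randrange(256) for _ in range(9)] for _ in range(8)]
brighter = [[p + 10 for p in row] for row in img]
other = [[random.randrange(256) for _ in range(9)] for _ in range(8)]

print(hamming(dhash(img), dhash(brighter)))  # 0: brightness shift preserves every comparison
print(hamming(dhash(img), dhash(other)))     # nonzero: a different image, so no "match"
```

In deployment, a hash is declared a "match" when its Hamming distance to a known-bad hash falls under some threshold, and choosing that threshold is exactly where the trade-off between false negatives and false positives lives.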
Encryption apps viewed as national security risk
There are signs of increasing government discomfort with encrypted communications. In December 2025, the Independent Reviewer of State Threats Legislation delivered a stark warning that developers of encryption technology could be subject to police stops, detention and questioning, and the seizure of their electronic devices under national security laws.
According to Jonathan Hall KC, the developer of an app whose selling point is that it offers end-to-end encryption could be considered to be unwittingly engaged in “hostile activity” under Section 3 of the Counterterrorism and Border Security Act 2019.
“It is a reasonable assumption that [the development of the app] would be in the interests of a foreign state even if the foreign state has never contemplated this potential advantage,” he wrote.
Digital ID all over again
The UK’s proposals for a mandatory digital ID scheme look set to be another battleground for privacy in 2026. The government says the scheme will help to crack down on illegal immigration by introducing mandatory “right to work” checks by the end of the Parliamentary term.
MPs were scathing when the bill was introduced in Parliament. “The real fear here is that we will be building an infrastructure that can follow us, link our most sensitive information and expand state control over all our lives,” said Rebecca Long-Bailey during the debate. Others raised concerns about the cyber security risks of storing details of the population on a central government database.
Gus Hosein, executive director of campaign group Privacy International, notes that the Home Office is repeating the same arguments originally put forward in 2003 when Tony Blair attempted to introduce a national identity card. The scheme was scrapped by the Conservative and Liberal Democrat coalition in 2010. “It’s just the same boring rhetoric: ‘It’s going to stop ID fraud, it’s going to stop terrorism, it’s going to stop migration problems,’” he said. “Do we really have to go through the whole process of debunking this again?”
Hosein said the prospects of the Home Office coming up with a workable system before the next election are low. The political climate is different this time. Nearly three million people have signed a Parliamentary petition calling for the idea to be scrapped. “If they try and do the classic thing, which is to try and build something grand and momentous, it will take forever,” he said. “I would not mind an ID system that actually worked, I just don’t want the Home Office within 10,000 miles of it.”
When combined with facial recognition, digital ID raises further privacy issues. Campaign groups are expected to bring a legal challenge in 2026 after Freedom of Information Act requests revealed that the government covertly allowed police forces to search 150 million UK passport and immigration database photos for matches of images captured by facial recognition technology.
Big Brother Watch and Privacy International have issued legal letters before action to the Home Office and the Metropolitan Police. They argue that there is no clear legal basis for the practice and that the Home Office has kept the public and Parliament in the dark.
“There is a risk when you roll out digital facial recognition cameras that the images used for digital ID will be used to track you around town centres,” said the Open Rights Group’s Baker.
Apple backdoors and technical capability notices
This year will see further legal challenges at the Investigatory Powers Tribunal against the Home Office’s secret order issued against Apple, requiring it to facilitate access for law enforcement and intelligence agencies to encrypted data stored by Apple’s customers on iCloud.
Scheduled for the spring, the case brought by Privacy International and Liberty will challenge the lawfulness of the Home Office using a technical capability notice (TCN) to require Apple to disclose the encrypted data of users of its Advanced Data Protection (ADP) service worldwide.
Apple is expected to issue a new legal challenge after the UK government abandoned its original wide-ranging TCN and replaced it with an order covering only ADP users in the UK, a move that brought Apple’s first legal challenge to an end, at least for now.
The case has the potential to turn into a mammoth battle, reaching the Supreme Court and the European Court of Human Rights.
Surveillance of journalists
This year will also see further legal challenges that will test the boundaries between state intrusion and the professional privileges accorded to lawyers and journalists to protect the confidentiality of their clients or journalistic information.
The Investigatory Powers Tribunal is due to decide on a case brought by the BBC and former BBC journalist Vincent Kearney against the Police Service of Northern Ireland and the Security Service, MI5.
The Security Service broke with the convention of neither confirm nor deny (NCND) to acknowledge to the tribunal that it had unlawfully obtained Kearney’s phone communications data in 2006 and 2009, while he was working at the BBC, in an attempt to identify his confidential sources.
Although MI5 followed the Communications Data code of practice at the time, the code did not meet the strict legal tests for accessing journalistic material, which is protected under the European Convention on Human Rights.
In a judgment just before Christmas, the IPT rejected arguments that MI5 should disclose further details of surveillance operations against Kearney and other BBC journalists, including operations that had proper legal approval. The IPT will decide what remedy is due in 2026, and whether Kearney and the BBC should receive compensation.
Another legal case will test the boundaries between police surveillance and the legal protection given to lawyers to protect the confidentiality of discussions with their clients when subject to police stops.
Fahad Ansari, a lawyer who acted for Hamas in an attempt to overturn its proscription as a terrorist organisation in the UK, had his mobile phone seized by police after he was detained under Schedule 7 of the Terrorism Act 2000 at a ferry port, after returning from a family holiday.
The case is believed to be the first targeted use of Schedule 7 powers – which allow police to stop and question people and seize their electronic devices without the need for suspicion – against a practising solicitor.
Ansari is seeking a judicial review to challenge the right of police to examine the contents of his phone, which contains confidential and legally privileged material from his clients, accumulated over 15 years.
The legal fallout from EncroChat and Sky ECC
The legal fallout from an international police operation to hack the encrypted phone networks Sky ECC and EncroChat more than five years ago will continue.
French police led operations to harvest tens of millions of encrypted messages used as evidence of criminality to bring prosecutions against drug gangs across Europe and the UK.
Defence lawyers and forensic experts have raised questions about the reliability of the evidence supplied by the French to the UK and EU states through Europol.
France has declared the hacking operation against EncroChat and Sky ECC a state secret and refused to allow members of the French Gendarmerie to give evidence on how the intercepted data was obtained.
This has meant individuals facing charges outside France based on evidence from EncroChat or Sky ECC have no legal recourse to challenge the legality of the French hacking operation.
Courts in the EU are obliged to accept the evidence provided by France under the “mutual recognition” principle that applies when one EU state supplies evidence to another under a European Investigation Order.
At the same time, people have been denied the right to challenge the evidence against them in the French courts, leaving people charged with offences based on the hacked phone data without legal recourse to appeal in any jurisdiction.
Decisions by the European Court of Justice and the European Court of Human Rights, expected this year, could end that anomaly.
In one case, the French Supreme Court – La Cour de cassation – has asked the Court of Justice to decide whether France’s refusal to allow non-French citizens to challenge the lawfulness of the French hacking operations in France contravenes EU law. According to La Cour de cassation, the decision is likely to have “significant consequences” for legal proceedings based on intercepted evidence in the EU.
In the second case, the European Court of Human Rights is expected to decide on a complaint from a German citizen, Murat Silgar, who was jailed for drug offences on the basis of EncroChat evidence.
Silgar argues that the German courts had used illegally obtained communications data and that technical details of the French retrieval of EncroChat data were not shared with him, in breach of the European Convention on Human Rights, which protects the right to a fair trial and the right to private correspondence.
Justus Reisinger, a member of a coalition of defence lawyers known as the Joint Defence Team, told Computer Weekly the cases would address “a fundamental principle” in cross-border and digital investigations. “The law of the European Union requires that people have an effective remedy,” he said.
These are just a few of the battle lines between technology and privacy that will play out in 2026. For governments, the promise of a “technical fix” to deal with wider societal problems, such as child abuse and terrorism offences, is attractive. But history has shown that “technical fixes” rarely work, and often have unforeseen consequences.
We Gave These Android-Ready Earbuds a 9/10, and They’re Just $180
If you’re an esteemed Android user like me, and you felt left out of yesterday’s deal on the AirPods Pro 3, I’ve got you covered today with an even bigger discount on the Pixel Buds Pro 2. Both Amazon and Best Buy have the hazel color marked down from $229 to $180, a $49 discount on Google’s most upgraded wireless earbuds.
The first change you’ll notice from the previous generation Pixel Buds Pro is that the newer model is much lighter, and the buds are 27 percent smaller. As a result, these are an excellent choice for anyone with small ears, and they stay put super well. Reviewer Parker Hall “had no problem doing hours of tree pruning and going on long sweaty runs in Portland’s early fall heat wave.”
With some help from top-notch physical sound isolation, the active noise-canceling on these is just as good as Apple’s and even goes toe-to-toe with big hitters like Bose and Sony. The transparency mode works just as well, too, with a wider range and clearer audio than a lot of other headphones offer. When it’s time to actually turn up the tunes, you can enjoy a wide, natural soundstage that has excellent detail in the midrange and clear, sparkling treble.
The Gemini integration, unfortunately, leaves a bit to be desired. It’s not the smoothest experience, particularly when asking multiple questions, and the Pixel Buds Pro 2 aren’t offering anything that other earbuds can’t do. Apple’s live translations and heart rate monitors are more useful features, but if you’re on Android, you’re locked out of them anyway.
If you’re interested in upgrading your earbud game, and you already have a Pixel, you can grab the Pixel Buds Pro 2 in hazel for $180 from either Amazon or Best Buy. If that color doesn’t suit you, I also spotted lesser discounts on the peony color for $189, or the porcelain color for $210. For anyone who isn’t already sold on the Pixel Buds Pro 2, make sure to swing by our guide to the best wireless earbuds, with picks for both Apple and Android owners.
‘Uncanny Valley’: Pentagon vs. ‘Woke’ Anthropic, Agentic vs. Mimetic, and Trump vs. State of the Union
Guys, before we go to break, there’s something very near and dear to my heart that WIRED wrote about this week. It’s something I love even more than biathlon. It is undersea internet cables.
Leah Feiger: I love when you talk about this. I think that the first time you brought this up to me was approximately one week into your tenure as executive editor, and you’re like, “Leah, do you know what I love?” and it’s undersea internet cables.
Brian Barrett: Yeah. I was like, “Number one, undersea internet cables. Number two, my children. Number three …” that was sort of the gist of it. That’s how I always introduce myself. I want to take everybody back to December 14th, 1988. The top movie in theaters is Twins starring Arnold Schwarzenegger and Danny DeVito.
Zoë Schiffer: Legitimately never heard of it.
Leah Feiger: Wait, Zoë. What?
Brian Barrett: What? Anyway, Arnold is agentic and Danny DeVito’s mimetic. The top song—
Zoë Schiffer: Now I get it.
Brian Barrett: —the top song is “Look Away” by Chicago. Now that, I also am not—I don’t remember that one at all. And the first undersea fiber optic cable connecting the United States, UK and France went live. This was the day that the internet went global, which is crazy—
Zoë Schiffer: That is crazy.
Brian Barrett: —that it was relatively recent. The reason we’re writing about it now is that that original cable, which is called TAT-8, is being pulled up. It’s out of commission. It’s old, it’s decrepit, so I identify, and it’s being pulled up and put out to pasture because the technology’s gotten better. But in this great feature that we published, it is a look at how this changed the world basically, and how we take for granted—but the reason I am so into undersea cable stories is because it’s so easy to forget that the internet is a physical thing and that the maintenance of those things is really what makes all this connectivity happen. So yeah, TAT-8. Any other fond memories of TAT-8? Or, no. What did you guys think reading this feature?
Zoë Schiffer: Well, famously we were not alive in 1988.
Leah Feiger: Yeah. Sorry, Brian. You’re older than us. Just a reminder.
Brian Barrett: Hurts.
Zoë Schiffer: But the part of this story that I wanted to talk about, which felt like a real intersection of both of your interests was the myth of the shark attacks.
Brian Barrett: Oh, yeah.
Leah Feiger: OK. So to back up a little bit, these cables, at the very beginning, when they were put in, Brian would be able to talk about this way more because he’s kind of a freak about cables if you haven’t realized already. These cables would sometimes have unexplained damage, and looking back on it years later, engineers figured out that this kind of happens, that if you are putting cables underseas, there will be wind, there will be changes, things will get moved around. Of course, there will be damages, but that is not how they felt at the time. These engineers assumed that it was sharks, that sharks were biting their cables, that they were destroying the internet. The cables were reinforced with all these protective layers, all of these things, because they were like, “Oh, my God, the sharks are quite literally ending all of this for us.” But this article goes into great detail of how they figured out it wasn’t the sharks, and by thinking that it was the sharks, it actually helped make all of this technology that much better and stronger, but the sharks were innocent, you guys. The sharks were innocent.
This AI Agent Is Designed to Not Go Rogue
AI agents like OpenClaw have recently exploded in popularity precisely because they can take the reins of your digital life. Whether you want a personalized morning news digest, a proxy that can fight with your cable company’s customer service, or a to-do list auditor that will do some tasks for you and prod you to resolve the rest, agentic assistants are built to access your digital accounts and carry out your commands. This is helpful—but has also caused a lot of chaos. The bots are out there mass-deleting emails they’ve been instructed to preserve, writing hit pieces over perceived snubs, and launching phishing attacks against their owners.
Watching the pandemonium unfold in recent weeks, longtime security engineer and researcher Niels Provos decided to try something new. Today he is launching an open source, secure AI assistant called IronCurtain designed to add a critical layer of control. Instead of the agent directly interacting with the user’s systems and accounts, it runs in an isolated virtual machine. And its ability to take any action is mediated by a policy—you could even think of it as a constitution—that the owner writes to govern the system. Crucially, IronCurtain is also designed to receive these overarching policies in plain English and then runs them through a multistep process that uses a large language model (LLM) to convert the natural language into an enforceable security policy.
“Services like OpenClaw are at peak hype right now, but my hope is that there’s an opportunity to say, ‘Well, this is probably not how we want to do it,’” Provos says. “Instead, let’s develop something that still gives you very high utility, but is not going to go into these completely uncharted, sometimes destructive, paths.”
IronCurtain’s ability to take intuitive, straightforward statements and turn them into enforceable, deterministic—or predictable—red lines is vital, Provos says, because LLMs are famously “stochastic” and probabilistic. In other words, they don’t necessarily always generate the same content or give the same information in response to the same prompt. This creates challenges for AI guardrails, because AI systems can evolve over time such that they revise how they interpret a control or constraint mechanism, which can result in rogue activity.
An IronCurtain policy, Provos says, could be as simple as: “The agent may read all my email. It may send email to people in my contacts without asking. For anyone else, ask me first. Never delete anything permanently.”
IronCurtain takes these instructions, turns them into an enforceable policy, and then mediates between the assistant agent in the virtual machine and what’s known as the model context protocol server that gives LLMs access to data and other digital services to carry out tasks. Being able to constrain an agent this way adds an important component of access control that web platforms like email providers don’t currently offer because they weren’t built for the scenario where both a human owner and AI agent bots are all using one account.
Provos notes that IronCurtain is designed to refine and improve each user’s “constitution” over time as the system encounters edge cases and asks for human input about how to proceed. The system, which is model-independent and can be used with any LLM, is also designed to maintain an audit log of all policy decisions over time.
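The article describes the enforcement side of this design in general terms only, so the sketch below is hypothetical: none of these class or function names come from IronCurtain itself. It illustrates the core idea, that once a plain-English policy has been compiled (in IronCurtain's case, via an LLM), the resulting red lines can be checked deterministically between the agent and its tools, using the sample policy quoted above: read email freely, send only to known contacts without asking, and never delete permanently.

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    ASK_USER = "ask_user"
    DENY = "deny"

@dataclass
class CompiledPolicy:
    """Hypothetical compiled form of: "The agent may read all my email.
    It may send email to people in my contacts without asking. For
    anyone else, ask me first. Never delete anything permanently."
    """
    contacts: set = field(default_factory=set)

    def check(self, action: str, target: str = "") -> Verdict:
        if action == "read_email":
            return Verdict.ALLOW
        if action == "send_email":
            return Verdict.ALLOW if target in self.contacts else Verdict.ASK_USER
        if action == "delete_permanently":
            return Verdict.DENY          # hard red line, never escalated to the agent
        return Verdict.ASK_USER          # unknown actions default to asking the human

policy = CompiledPolicy(contacts={"alice@example.com"})
print(policy.check("send_email", "alice@example.com"))  # Verdict.ALLOW
print(policy.check("send_email", "bob@evil.test"))      # Verdict.ASK_USER
print(policy.check("delete_permanently", "inbox"))      # Verdict.DENY
```

The key property this toy check shares with the design Provos describes is determinism: the same proposed action always gets the same verdict, regardless of how the LLM inside the sandbox might drift in its own interpretation of the rules.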
IronCurtain is a research prototype, not a consumer product, and Provos hopes that people will contribute to the project to explore and help it evolve. Dino Dai Zovi, a well-known cybersecurity researcher who has been experimenting with early versions of IronCurtain, says that the conceptual approach the project takes aligns with his own intuition about how agentic AI needs to be constrained.