Tech
Agentic AI to make data uplink the next mobile bottleneck | Computer Weekly
One of the consequences of artificial intelligence (AI) systems becoming capable of reasoning, planning and executing tasks autonomously is that mobile traffic patterns are changing noticeably, with uplink being of growing importance.
A study from InterDigital has shown how the emergence of agentic AI will redefine the demands placed on devices, networks and cloud infrastructure. Among the findings of The distributed network shift enabling AI on device report, conducted by ABI Research for InterDigital, was that the rapid adoption of agentic systems – which is expected to increase across enterprise and consumer markets over the next three years – is increasing uplink traffic from AI devices and changing the way modern networks operate. The result will be a reimagining of network design.
The study noted that modern mobile networks have historically been optimised for downlink throughput and video delivery. However, unlike traditional mobile applications that primarily consume data via downlink, agentic AI systems continuously generate and exchange contextual information to enable real-time reasoning and decision-making. Therefore, as AI devices generate increasing volumes of upstream data, networks risk becoming overloaded, leading to higher latency and costs.
The study identified four main device categories driving uplink traffic: smart glasses, wearables, smartphones, and IoT sensors and devices. Smart glasses continuously capture video, images and environmental context, sending data upstream for real-time AI inference and assistance. ABI Research predicts 70 million smart glasses shipments by 2030, with cellular-enabled devices representing more than 12% of shipments.
By contrast, wearables – including next-generation tech that collects voice, biometric and contextual signals – support persistent agentic AI interactions. Smartphones increasingly transmit multimodal inputs such as voice, photos, video and sensor data to cloud and edge AI systems. In their operation, IoT sensors and devices continuously stream operational or environmental data to AI models for analysis, automation and decision-making.
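As a back-of-envelope illustration of what those shipment figures could mean for sustained uplink demand, the sketch below combines the article's numbers with an assumed per-device bitrate; the 2 Mbps figure is purely illustrative and not from the study:

```python
# Back-of-envelope aggregate uplink from cellular-enabled smart glasses.
# Shipment figures come from the ABI Research forecast cited above; the
# per-device bitrate is an illustrative assumption, not a measured value.
shipments_2030 = 70_000_000          # ABI Research shipment forecast
cellular_share = 0.12                # >12% of shipments are cellular-enabled
assumed_uplink_mbps = 2.0            # assumed continuous upstream per device

cellular_devices = shipments_2030 * cellular_share
aggregate_gbps = cellular_devices * assumed_uplink_mbps / 1_000

print(f"{cellular_devices:,.0f} cellular devices")
print(f"~{aggregate_gbps:,.0f} Gbps of sustained uplink")
```

Even under these conservative assumptions, the sustained load lands in the tens of terabits per second globally, which is the kind of continuous upstream pressure the report contrasts with today's bursty upload spikes.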
The study also found uplink pressures are already visible in video-heavy applications such as livestreaming and real-time video collaboration, where many users uploading simultaneously can create localised mobile cell congestion. It added that unlike these temporary spikes, agentic AI systems will generate continuous upstream data exchanges from connected devices, potentially creating sustained pressure on uplink capacity.
The report suggested that to meet the AI demands of modern devices, the industry must transition toward distributed intelligence architectures, where AI workloads are orchestrated across on-device processors, edge infrastructure and cloud platforms based on their complexity. It said that embedding intelligence deeper into network infrastructure will ensure AI-enabled applications can operate efficiently without compromising on performance.
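The orchestration the report describes can be sketched as a simple placement policy that routes each workload to the cheapest tier able to serve it. The tier names, capacity figures and latency budgets below are illustrative assumptions, not values from the report:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    est_flops: float        # estimated compute cost of the inference
    latency_budget_ms: float

# Hypothetical per-tier capacity and round-trip latency figures.
TIERS = [
    ("device", 1e11, 5),    # on-device NPU: cheap round trip, limited compute
    ("edge",   1e13, 30),   # edge node: more compute, moderate latency
    ("cloud",  1e15, 120),  # cloud: largest models, longest round trip
]

def place(task: Task) -> str:
    """Pick the lowest tier that can run the task within its latency budget."""
    for name, capacity, rtt_ms in TIERS:
        if task.est_flops <= capacity and rtt_ms <= task.latency_budget_ms:
            return name
    return "cloud"  # fall back to the largest tier

print(place(Task("wake-word", 1e9, 10)))        # -> device
print(place(Task("scene-caption", 1e12, 200)))  # -> edge
```

A real orchestrator would also weigh uplink cost and energy, which is exactly the trade-off the report argues networks must start making.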
The study observed that as the entire mobile ecosystem continues to innovate and integrate the latest AI technology at pace, ensuring a coherent and complementary direction of travel is essential to enabling future AI applications and their associated experiences.
This is seen as particularly the case for 6G networks, which will be designed to improve smartphones' Mobile Broadband (MBB) access by increasing network speeds, reducing latency and extending device battery life.
However, InterDigital cautioned this is just the foundation on which additional services will be built. Integrating AI in the network will allow smartphones to offload demanding applications to the edge of the network – as well as into centralised locations – to ensure optimal resource utilisation, enabling a distributed intelligence fabric.
“Agentic AI introduces a new set of requirements for both networks and devices,” said Larbi Belkhit and Paul Schell, senior analysts at ABI Research and co-authors of the report. “Supporting autonomous AI systems will demand far more distributed computing architectures and significantly more intelligent networks. Operators will need to manage increasingly symmetrical traffic patterns while enabling real-time AI workloads across device, edge and cloud.”
“Agentic AI marks the next phase in the evolution of intelligent connectivity,” said InterDigital chief technology officer Rajesh Pankaj. “Intelligence must be distributed across devices, networks and the cloud, and delivering these AI-enhanced services efficiently will require a new computing architecture that balances performance, latency and energy efficiency.”
A Top Democrat Is Urging Colleagues to Support Trump’s Spy Machine
United States congressman Jim Himes, the ranking Democrat on the House Intelligence Committee, is privately lobbying colleagues to preserve the FBI’s power to conduct warrantless searches of Americans’ communications, WIRED has learned, arguing that he has seen no evidence that the Trump administration is abusing its authority.
In a letter obtained by WIRED, Himes urges fellow Democrats to support the White House’s request to renew a controversial surveillance program that intercepts the electronic data of foreigners abroad. While targeted at foreigners, the program—authorized under Section 702 of the Foreign Intelligence Surveillance Act—also sweeps in vast quantities of private messages belonging to US citizens.
Himes’ pitch relies on the “56 reforms” passed by Congress in 2024, which codified the FBI’s own internal protocols as a substitute for constitutional warrants. In the letter, Himes claims these changes are “working as intended” to prevent domestic misuse, citing a compliance rate “exceeding 99 percent” over the past two years.
The structural foundations of that defense, however, have been fundamentally altered by recent changes within the FBI. Himes’ “99 percent” compliance metric was produced by the Office of Internal Auditing, for instance—a unit that long served as a smoke alarm designed to detect illegality, but no longer exists.
The unit was shuttered by FBI director Kash Patel last year. Historic court opinions based on its data had previously exposed hundreds of thousands of improper FBI searches. Without the auditors required to calculate failure rates, the compliance mechanisms Himes points to have effectively ceased to function.
In a statement, Himes’ office largely reiterated the positions laid out in his letter to colleagues. “I am open to making further reforms to Section 702, building on the many successful reforms we made in reauthorization legislation two years ago,” he says. “A short-term reauthorization of Section 702 will enable Congress to thoroughly debate the pros and cons of these suggested reforms—and to determine if compromise is possible—without placing our national security in peril by allowing the program to expire.”
As a member of the so-called Gang of Eight—a bipartisan group of lawmakers who are briefed on highly sensitive classified information—Himes possesses some of the deepest knowledge of the spy program. Nevertheless, his letter contains several other claims that appear fundamentally at odds with the mechanics of FISA oversight.
“Because of how heavily it is overseen by all three branches of government,” Himes says, “any effort to misuse the program would almost certainly become known to the Foreign Intelligence Surveillance Court and to Congress.”
The Foreign Intelligence Surveillance Court is a secret court that possesses no investigative arm to audit FBI databases. Similar to Congress, its oversight role is purely reactive, relying entirely on the US Justice Department to self-report violations.
“Neither Congress nor the FISA Court conducts independent audits of the FBI’s queries,” says Liza Goitein, senior director of the Brennan Center’s Liberty and National Security Program. “They rely on the Department of Justice to conduct thorough audits and to report the results truthfully and promptly. This particular Department of Justice has gutted internal oversight mechanisms and has been rebuked by dozens of federal courts for providing inaccurate, misleading, or incomplete information.”
There are no judges standing between the FBI and the private communications of millions of Americans, something that Himes and other members of his committee claim is necessary for the government to react quickly to terrorist threats. Critics argue that, given the current administration’s efforts to dismantle internal checks at the FBI, this is a massive vulnerability, leaving Americans exposed to surveillance abuses that will take years to declassify—if they’re ever reported at all.
Gamers Hate Nvidia’s DLSS 5. Developers Aren’t Crazy About It, Either
After a day of widespread, overwhelming pushback, Nvidia CEO Jensen Huang doubled down and said gamers are “completely wrong” about DLSS. (You know how much gamers love being told that they’re wrong.) But developers at Capcom and Ubisoft say they didn’t even know what the tech demo would look like and, according to Insider Gaming, found out about it at the same time as everyone else and were just as surprised. (Nvidia, Ubisoft, and Capcom did not immediately respond to our requests for comment.)
“I think the reaction from gamers is understandable,” Marwan Mahmoud, a game developer at Incrypt, wrote in an email to WIRED. “Some games started relying too heavily on these technologies instead of focusing on proper optimization. From a developer perspective, it feels a bit different because you see DLSS as a tool that helps rather than a core solution.”
The problem for many people, developers included, is the one-size-fits-all approach of a technology that can adjust visuals across various game types.
“The artist has a style, the artist has an art direction that you’re going to give him, and that’s something that AI kind of doesn’t respect all the time,” says Raúl Izquierdo, an indie game developer in Mexico. “Maybe I don’t want my characters to be yassified.”
Bates agrees, saying he doesn’t think every game needs to be photorealistic. That sentiment is echoed by game developer Sterling Reames, who has worked at Striking Distance Studios and Zynga. “People just want better games,” Reames wrote in a message to WIRED. “That’s as plainly as I can put it.”
At GTC, Nvidia ran its demo on its most powerful consumer graphics cards, two GeForce RTX 5090s. Had Nvidia made its selling point for the tech that it saves resources, thus enabling older hardware to deliver more impressive graphics, there might have been something to that.
“What’s the point if you’re not going to do it on weaker hardware?” Izquierdo says. “If this were done on an [RTX 2080 graphics card], for instance, I think I would be thinking differently about it. OK, this is for the betterment of gamers’ experiences and everything, not just for selling graphic cards.”
Ultimately, Nvidia’s demo, and GTC writ large, was a flex of the company’s power in the AI space. The reaction, Bates posits, is more about humans dealing with not just crossing the uncanny valley, but what happens when we reach the other side.
“Right now it’s pretty clearly a thing they are forced to do to demonstrate their prowess as an AI company,” Bates says. “But the truth is, this is going to be the default in a few years, and nobody is even going to think twice about it. It’s Jensen’s world, we’re just living in it.”
Essex Police halts live facial recognition over bias and accuracy risks | Computer Weekly
Essex Police has paused its use of live facial recognition (LFR) technology after identifying potential accuracy and bias risks.
The force’s suspension of its LFR system – provided by Israeli biometrics firm Corsight – was revealed in an audit document published by the Information Commissioner’s Office (ICO), which said Essex Police must work to “reduce the risks” identified before continuing with future deployments.
A list of LFR deployments from Essex Police shows the last time the force used the technology was on 26 August 2025, meaning its deployments had already been paused by the time the ICO carried out its audit that November.
While it is currently unclear what specifically prompted the force to suspend its LFR use, Computer Weekly exclusively reported in May 2025 that Essex Police had failed to properly consider its potentially discriminatory impacts, after a “clearly inadequate” equality impact assessment (EIA) was obtained via Freedom of Information rules by privacy campaign group Big Brother Watch.
Experts criticised the document at the time for being “incoherent”, failing to look at the systemic equalities impacts of the technology, and relying exclusively on testing of entirely different software algorithms used by other police forces trained on different populations.
The force was also criticised for “parroting misleading claims” from the supplier about the LFR system’s lack of bias, with the National Institute of Standards and Technology – a body widely recognised as the gold standard for facial recognition algorithm testing, where all of the testing data is publicly shared – holding no information to support the accuracy figures cited by Corsight, or its claim to have essentially the least-biased algorithm available.
Big Brother Watch alleged at the time that these issues taken together meant the force had likely failed to fulfil its public sector equality duty to consider how its policies and practices could be discriminatory.
Independent testing
Responding to the criticisms, the force said at the time that it was continuing to carry out evaluations, noting that both the National Physical Laboratory (NPL) and Cambridge University had been commissioned to conduct further independent testing of its system.
According to the results of that Cambridge study – published on 12 March 2026 – the system was more likely to correctly identify men than women, and was “statistically significantly more likely to correctly identify black participants than participants from other ethnic groups”.
Matt Bland, a criminologist involved in the study, said: “If you’re an offender passing facial recognition cameras which are set up as they have been in Essex, the chances of being identified as being on a police watchlist are greater if you’re black. To me, that warrants further investigation.”
By contrast, the further NPL testing – also published in March 2026 – found black men were most likely to be correctly matched by the system and white men least likely, but noted that the disparity was not statistically significant.
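“Statistically significant” in these findings refers to tests along the lines of the two-proportion z-test sketched below. The counts used here are entirely hypothetical, since neither study’s raw numbers are given in this article:

```python
from math import sqrt, erf

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Two-sided two-proportion z-test: do groups A and B share the
    same true identification rate?  Returns (z, p_value)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 90/100 correct matches in one group vs 78/100 in another.
z, p = two_proportion_z(hits_a=90, n_a=100, hits_b=78, n_b=100)
print(f"z={z:.2f}, p={p:.4f}")  # a p-value below 0.05 would count as significant
```

This is why the two studies could report rate disparities in opposite directions yet disagree on significance: with small samples, a sizeable gap in raw match rates can still fall short of the conventional 0.05 threshold.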
Computer Weekly contacted the force to ask what specifically prompted the LFR suspension decision, including whether it was the study results or previous criticisms of the EIA.
“In line with our commitment to our Public Sector Equality Duty, Essex Police commissioned two independent studies which were completed by academia,” a spokesperson said. “The first of these indicated there was a potential bias in the positive identification rate, while the second suggested there was no statistically relevant bias in the results.
“Based on the fact there was potential bias, the force decided to pause deployments while we worked with the algorithm software provider to review the results and seek to update the software,” they added. “We then sought further academic assessment.
“As a result of this work, we have revised our policies and procedures and are now confident that we can start deploying this important technology as part of policing operations to trace and arrest wanted criminals. We will continue to monitor all results to ensure there is no risk of bias against any one section of the community.”
Responding to news of the suspension, Jake Hurfurt, the head of research and investigations at Big Brother Watch, said: “Police across the country must take note of this fiasco. AI [artificial intelligence] surveillance that is experimental, untested, inaccurate or potentially biased has no place on our streets.”
Ramping up deployments without debate
While the use of LFR by police – beginning with the Met’s deployment at Notting Hill Carnival in August 2016 – has already ramped up in recent years, there has so far been minimal public debate or consultation, with the Home Office claiming for years that there is already a “comprehensive” legal framework in place.
However, in December 2025, the Home Office launched a 10-week consultation on the use of LFR by UK police, allowing interested parties and members of the public to share their views on how the controversial technology should be regulated.
The department has said that although a “patchwork” legal framework for police facial recognition exists (including for the increasing use of the retrospective and “operator-initiated” versions of the technology), it does not give police themselves the confidence to “use it at significantly greater scale … nor does it consistently give the public the confidence that it will be used responsibly”.
It added that the current rules governing police LFR use are “complicated and difficult to understand”, and that an ordinary member of the public would be required to read four pieces of legislation, police national guidance documents and a range of detailed legal or data protection documents from individual forces to fully understand the basis for LFR use on their high streets.
Before the consultation had even closed, however, the Home Office announced plans for the massive roll-out of AI and facial-recognition technologies as part of sweeping reforms to the UK’s “broken” policing system.
Under the proposals – announced in late January 2026, nearly three weeks before the consultation closed – the Home Office will increase the number of LFR vans available to police from 10 to 50; set up a new National Centre for AI in Policing – to be known as Police.AI – to build, test and assure AI models for policing contexts; and invest £115m over three years to help identify, test and scale new AI technologies in policing.
‘Panopticon’ vision
In a recent interview with former prime minister Tony Blair, UK home secretary Shabana Mahmood described her ambition to use technologies such as AI and LFR to achieve Jeremy Bentham’s vision of a “panopticon”, referring to his proposed prison design that would allow a single, unseen guard to silently observe every prisoner at once.
Typically used today as a metaphor for authoritarian control, the underpinning idea of the panopticon is that by instilling a perpetual sense of being watched among the inmates, they would behave as the authorities wanted.
“When I was in justice, my ultimate vision for that part of the criminal justice system was to achieve, by means of AI and technology, what Jeremy Bentham tried to do with his panopticon,” Mahmood told Blair. “That is that the eyes of the state can be on you at all times.”
