Gartner: Why neoclouds are the future of GPU-as-a-Service | Computer Weekly
For the past decade, hyperscalers have defined how CIOs and IT leaders think about their organisation’s cloud infrastructure. Scale, abstraction and convenience became the default answers to almost every compute question. But artificial intelligence (AI) is breaking the economics of cloud computing and neoclouds are emerging as the response.
Gartner estimates that by 2030, neocloud providers will capture around 20% of the $267bn AI cloud market. Neoclouds are purpose-built cloud providers designed for graphics processing unit (GPU)-intensive AI workloads. They are not a replacement for hyperscalers, but a structural correction to how AI infrastructure is built, bought and consumed. Their rise signals a deeper shift in the cloud market: AI workloads are forcing infrastructure to unbundle again.
This is not a return to on-premises thinking, nor a rejection of the cloud operating model. It is the next phase of cloud specialisation, driven by the practical realities of running AI at scale.
Why AI breaks the hyperscaler model
AI workloads differ fundamentally from traditional organisational compute. They are GPU-intensive, latency-sensitive, power-hungry and capital-heavy. They also scale unevenly, spiking for model training, throttling for inference, then surging again as models are refined, retrained and redeployed.
Hyperscalers were designed for breadth, not the specific demands of GPU-heavy AI workloads. Their strength lies in offering general-purpose services on a global scale, abstracting complexity behind layers of managed infrastructure. For many organisational workloads, that abstraction remains a strength. For AI workloads, however, it increasingly becomes friction.
Companies are now encountering three interrelated constraints that are shaping AI infrastructure decisions. Cost opacity is rising as GPU pricing becomes increasingly bundled and variable, often inflated by overprovisioning and long reservation commitments that assume steady-state usage. At the same time, supply bottlenecks are constraining access to advanced accelerators, with long lead times, regional shortages and limited visibility into future availability. Layered onto this are performance trade-offs, where virtualisation layers and shared tenancy reduce predictability for latency-sensitive training and inference workloads.
These pressures are no longer marginal. They create a market opening that neoclouds are designed to fill.
What neoclouds change
Neoclouds specialise in GPU-as-a-service (GPUaaS), delivering bare-metal performance, rapid provisioning and transparent consumption-based economics. Many provide cost savings of up to 60–70% compared with hyperscaler GPU instances, while offering near-instant access to the latest hardware generations.
Yet the more significant change is architectural rather than financial.
Neoclouds encourage organisations to make explicit decisions about AI workload placement. Training, fine-tuning, inference, simulation and agent execution each have distinct performance, cost and locality requirements. Treating them as interchangeable cloud workloads is increasingly inefficient, and often unnecessarily expensive.
As a result, AI infrastructure strategies are becoming inherently hybrid and multicloud by design, not as a by-product of vendor sprawl, but as a deliberate response to workload reality. The cloud market is fragmenting along functional lines, and neoclouds occupy a clear and growing role within that landscape.
Co-opetition, not disruption
The growth of neoclouds is not a hyperscaler extinction event. In fact, hyperscalers are among their largest customers and partners, using neoclouds as elastic extensions of capacity when demand spikes or accelerator supply tightens.
This creates a new form of co-opetition. Hyperscalers retain control of platforms, ecosystems and company relationships, while neoclouds specialise in raw AI performance, speed to hardware and regional capacity. Each addresses a different constraint in the AI value chain.
For companies and organisations buying cloud services, this blurs traditional cloud categories. The question is no longer simply which cloud provider to use, but how AI workloads should be placed across environments to optimise cost, performance, sovereignty and operational risk.
The real risk: tactical adoption
The greatest risk for CIOs and technology leaders is treating neoclouds as a short-term workaround for GPU shortages. Neoclouds introduce new considerations: integration complexity with existing platforms, dependency on specific accelerator ecosystems, energy intensity and vendor concentration risk. Used tactically, they can fragment architectures and increase long-term operational exposure. Used strategically, however, they unlock something more valuable: control.
- Control over cost visibility, through transparent, consumption-based GPU pricing that reduces overprovisioning and exposes the true economics of AI workloads
- Control over data locality and sovereignty, by enabling regional or sovereign deployments where regulatory or latency requirements demand it
- Control over workload placement, by allowing organisations to deliberately orchestrate AI training and inference across hyperscalers, neoclouds and on-premises environments based on performance, cost and compliance requirements, as sketched below.
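To make that last point concrete, the following is a minimal sketch of what explicit placement logic can look like: a workload declares its hard requirements, and a scheduler picks the cheapest environment that satisfies them. The environment names, prices and thresholds are illustrative assumptions only, not Gartner figures or any provider's actual pricing.

```python
# Illustrative sketch only: choosing where to place an AI workload.
# Environment names, prices and limits below are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Environment:
    name: str
    usd_per_gpu_hour: float   # transparent, consumption-based GPU price
    p99_latency_ms: float     # latency to the workload's users or data
    in_region: bool           # satisfies data locality / sovereignty needs
    max_gpus: int             # accelerators available on short notice

@dataclass
class Workload:
    name: str
    gpus_needed: int
    latency_sensitive: bool
    must_stay_in_region: bool

def place(workload: Workload, candidates: list[Environment]) -> Environment | None:
    """Return the cheapest environment that meets the workload's hard constraints."""
    feasible = [
        env for env in candidates
        if env.max_gpus >= workload.gpus_needed
        and (env.in_region or not workload.must_stay_in_region)
        and (env.p99_latency_ms <= 20 or not workload.latency_sensitive)
    ]
    return min(feasible, key=lambda env: env.usd_per_gpu_hour, default=None)

candidates = [
    Environment("hyperscaler-eu",  6.50, 12.0, in_region=True, max_gpus=512),
    Environment("neocloud-eu",     2.40, 18.0, in_region=True, max_gpus=256),
    Environment("on-prem-cluster", 1.10,  2.0, in_region=True, max_gpus=64),
]

training  = Workload("fine-tune-llm", 128, latency_sensitive=False, must_stay_in_region=True)
inference = Workload("chat-serving",   32, latency_sensitive=True,  must_stay_in_region=True)

for w in (training, inference):
    choice = place(w, candidates)
    print(w.name, "->", choice.name if choice else "no feasible environment")
```

In practice this logic lives inside orchestration or FinOps tooling rather than a standalone script, but the principle is the point: placement becomes an explicit, auditable decision per workload rather than a default inherited from the incumbent cloud contract.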
From cloud strategy to AI placement strategy
Neoclouds are not an alternative cloud. They are a forcing function, compelling organisations to rethink infrastructure assumptions that no longer hold in an AI-driven world.
The new competitive advantage will come from AI placement strategy – deciding when hyperscalers, neoclouds, on-premises or edge environments are the right choice for each workload.
Over the next five years, IT leaders will be defined not by how much cloud they consume, but by how precisely they place intelligence where it creates the most value.
Mike Dorosh is a senior director analyst at Gartner.
Gartner analysts will further explore how neoclouds and AI workload placement are reshaping cloud and data strategies at the Gartner IT Symposium/Xpo in Barcelona, from 9–12 November 2026.
Government Docs Reveal New Details About Tesla and Waymo Robotaxis’ Human Babysitters
Are self-driving vehicles really just big, remote-controlled cars, with nameless and faceless people in far-off call centers piloting the things from behind consoles? As the vehicles and their science-fiction-like software expand to more cities, the conspiracy theory has rocketed around group chats and TikToks. It’s been powered, in part, by the reluctance of self-driving car companies to talk in specifics about the humans who help make their robots go.
But this month, in government documents submitted by Alphabet subsidiary Waymo and electric-auto maker Tesla, the companies have revealed more details about the people and programs that help the vehicles when their software gets confused.
The details of these companies’ “remote assistance” programs are important because the humans supporting the robots are critical in ensuring the cars are driving safely on public roads, industry experts say. Even robotaxis that run smoothly most of the time get into situations that their self-driving systems find perplexing. See, for example, a December power outage in San Francisco that killed stop lights around the city, stranding confused Waymos in several intersections. Or the ongoing government probes into several instances of these cars illegally blowing past stopped school buses unloading students in Austin, Texas. (The latter led Waymo to issue a software recall.) When this happens, humans get the cars out of the jam by directing or “advising” them from afar.
These jobs are important because if people do them wrong, they can be the difference between, say, a car stopping for or running a red light. “For the foreseeable future, there will be people who play a role in the vehicles’ behavior, and therefore have a safety role to play,” says Philip Koopman, an autonomous-vehicle software and safety researcher at Carnegie Mellon University. One of the hardest safety problems associated with self-driving, he says, is building software that knows when to ask for human help.
In other words: If you care about robot safety, pay attention to the people.
The People of Waymo
Waymo operates a paid robotaxi service in US metros including Atlanta, Austin, Los Angeles, Phoenix, and the San Francisco Bay Area, and has plans to launch in at least 10 more, including London, this year. Now, in a blog post and letter submitted to US senator Ed Markey this week, the company made public more aspects of what it calls its “remote assistance” (RA) program, which uses remote workers to respond to requests from Waymo’s vehicle software when it determines it needs help. These humans give data or advice to the systems, writes Ryan McNamara, Waymo’s vice president and global head of operations. The system can use or reject the information that humans provide.
“Waymo’s RA agents provide advice and support to the Waymo Driver but do not directly control, steer, or drive the vehicle,” McNamara writes—denying, implicitly, the charge that Waymos are simply remote-controlled cars. About 70 assistants are on duty at any given time to monitor some 3,000 robotaxis, the company says. The low ratio indicates the cars are doing much of the heavy lifting.
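To picture the distinction Waymo is drawing between advising and driving, here is a deliberately simplified, hypothetical sketch of an advisory loop: the vehicle software asks for guidance only when its confidence is low, and it can decline advice it judges to be wrong. None of the names or logic below come from Waymo's actual systems.

```python
# Hypothetical sketch of an "advise, don't drive" remote-assistance pattern.
# Nothing here reflects Waymo's real software; it only illustrates the
# division of responsibility described in the company's letter.

from dataclasses import dataclass
from enum import Enum, auto

class Advice(Enum):
    PROCEED = auto()
    REROUTE = auto()
    WAIT = auto()

@dataclass
class Scene:
    description: str
    path_blocked: bool

def request_remote_assistance(scene: Scene) -> Advice:
    """Stand-in for a call to a remote agent, who can suggest but not steer."""
    return Advice.REROUTE if scene.path_blocked else Advice.PROCEED

def vehicle_decide(scene: Scene, confidence: float) -> str:
    """The vehicle asks for help when unsure, then validates the advice itself."""
    if confidence >= 0.9:
        return "continue autonomously"
    advice = request_remote_assistance(scene)
    # The driving software remains responsible and may reject the suggestion.
    if advice is Advice.REROUTE and not scene.path_blocked:
        return "reject advice; continue on planned route"
    return f"apply advice: {advice.name.lower()}"

print(vehicle_decide(Scene("dark traffic signal at intersection", path_blocked=True), 0.4))
```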
Waymo also confirmed in its letter what an executive told Congress in a hearing earlier this month: Half of these remote assistance workers are contractors overseas, in the Philippines. (The company says it has two other remote assistance offices in Arizona and Michigan.) These workers are licensed to drive in the Philippines, McNamara writes, but are trained on US road rules. All remote assistance workers are drug- and alcohol-tested when they are hired, the company says, and 45 percent are drug-tested every three months as part of Waymo’s random testing program.
DHS Wants a Single Search Engine to Flag Faces and Fingerprints Across Agencies
The Department of Homeland Security is moving to consolidate its face recognition and other biometric technologies into a single system capable of comparing faces, fingerprints, iris scans, and other identifiers collected across its enforcement agencies, according to records reviewed by WIRED.
The agency is asking private biometric contractors how to build a unified platform that would let employees search faces and fingerprints across large government databases already filled with biometrics gathered in different contexts. The goal is to connect components including Customs and Border Protection, Immigration and Customs Enforcement, the Transportation Security Administration, US Citizenship and Immigration Services, the Secret Service, and DHS headquarters, replacing a patchwork of tools that do not share data easily.
The system would support watchlisting, detention, or removal operations and comes as DHS is pushing biometric surveillance far beyond ports of entry and into the hands of intelligence units and masked agents operating hundreds of miles from the border.
The records show DHS is trying to buy a single “matching engine” that can take different kinds of biometrics—faces, fingerprints, iris scans, and more—and run them through the same backend, giving multiple DHS agencies one shared system. In theory, that means the platform would handle both identity checks and investigative searches.
For face recognition specifically, identity verification means the system compares one photo to a single stored record and returns a yes-or-no answer based on similarity. For investigations, it searches a large database and returns a ranked list of the closest-looking faces for a human to review instead of independently making a call.
Both types of searches come with real technical limits. In identity checks, the match threshold is typically set high, so the system is less likely to wrongly flag an innocent person; it will, however, more often fail to confirm a genuine match when the photo submitted is slightly blurry, angled, or outdated. For investigative searches, the cutoff is considerably lower: the system is more likely to include the right person somewhere in the results, but it also produces many more false positives that require human review.
The documents make clear that DHS wants control over how strict or permissive that threshold is, depending on the context.
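To make the difference between the two modes concrete, here is a minimal sketch of threshold-based matching: a strict 1:1 verification check and a looser 1:N investigative search that returns a ranked candidate list. The embeddings, thresholds and scores are hypothetical placeholders, not details from DHS's records or any vendor's product.

```python
# Illustrative sketch of 1:1 verification vs. 1:N investigative search.
# Embeddings, thresholds and scores are hypothetical; real systems use
# vendor-specific face encoders and carefully tuned operating points.

import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (higher = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.80) -> bool:
    """1:1 check: strict threshold, yes-or-no answer against one stored record."""
    return similarity(probe, enrolled) >= threshold

def investigate(probe: np.ndarray, gallery: dict[str, np.ndarray],
                threshold: float = 0.55, top_k: int = 20) -> list[tuple[str, float]]:
    """1:N search: looser threshold, ranked candidate list for a human to review."""
    scored = [(name, similarity(probe, emb)) for name, emb in gallery.items()]
    candidates = [(name, score) for name, score in scored if score >= threshold]
    return sorted(candidates, key=lambda item: item[1], reverse=True)[:top_k]

# Toy data: random vectors stand in for the output of a face encoder.
rng = np.random.default_rng(0)
probe = rng.normal(size=128)
gallery = {f"record_{i}": rng.normal(size=128) for i in range(1000)}
gallery["near_duplicate"] = probe + rng.normal(scale=0.3, size=128)

print("verified:", verify(probe, gallery["near_duplicate"]))
print("candidates:", investigate(probe, gallery))
```

The practical point is that the threshold is a policy dial: raising it trades missed matches for fewer false alarms, and lowering it does the reverse, which is why the documents' emphasis on configurable strictness matters.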
The department also wants the system wired directly into its existing infrastructure. Contractors would be expected to connect the matcher to current biometric sensors, enrollment systems, and data repositories so information collected in one DHS component can be searched against records held by another.
It’s unclear how workable this is. Different DHS agencies have bought their biometric systems from different companies over many years. Each system turns a face or fingerprint into a string of numbers, but many are designed only to work with the specific software that created them.
In practice, this means a new department-wide search tool cannot simply “flip a switch” and make everything compatible. DHS would likely have to convert old records into a common format, rebuild them using a new algorithm, or create software bridges that translate between systems. All of these approaches take time and money, and each can affect speed and accuracy.
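A minimal sketch of why those bridges are nontrivial, assuming two hypothetical vendors: each encoder turns the same face into a template in its own vector space, so the numbers cannot be compared or translated directly, and a realistic migration path re-enrolls from the stored image with the new algorithm.

```python
# Hypothetical illustration of template incompatibility between vendors.
# Both "encoders" below are invented stand-ins; real biometric templates are
# proprietary and generally cannot be converted without the original image.

import numpy as np

def vendor_a_encode(image: np.ndarray) -> np.ndarray:
    """Stand-in for vendor A's face encoder: a 128-dimensional template."""
    rng = np.random.default_rng(1)
    return rng.normal(size=128)

def vendor_b_encode(image: np.ndarray) -> np.ndarray:
    """Stand-in for vendor B's encoder: different size, different geometry."""
    rng = np.random.default_rng(2)
    return rng.normal(size=512)

image = np.zeros((112, 112))            # placeholder "face image"
template_a = vendor_a_encode(image)
template_b = vendor_b_encode(image)

# Same face, but the templates live in unrelated vector spaces: they cannot
# be compared directly, and no general function maps one into the other.
print(template_a.shape, template_b.shape)   # (128,) vs (512,)

# A workable "bridge" re-enrolls from the stored image with the new encoder,
# which is exactly the slow, costly reprocessing step described above.
migrated_template = vendor_b_encode(image)
print(migrated_template.shape)
```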
At the scale DHS is proposing—potentially billions of records—even small compatibility gaps can spiral into large problems.
The documents also contain a placeholder indicating DHS wants to incorporate voiceprint analysis, but they offer no detailed plans for how voiceprints would be collected, stored, or searched. The agency previously used voiceprints in its “Alternatives to Detention” program, which allowed immigrants to remain in their communities but required them to submit to intensive monitoring, including GPS ankle trackers and routine check-ins that confirmed their identity using biometric voiceprints.
Metadata Exposes Authors of ICE’s ‘Mega’ Detention Center Plans
A PDF that Department of Homeland Security officials provided to New Hampshire governor Kelly Ayotte’s office about a new effort to build “mega” detention and processing centers across the United States contains embedded comments and metadata identifying the people who worked on it.
The seemingly accidental exposure of the identities of DHS personnel who crafted Immigration and Customs Enforcement’s mega detention center plan lands amid widespread public pushback against the expansion of ICE detention centers and the department’s brutal immigration enforcement tactics.
Metadata in the document, which concerns ICE’s “Detention Reengineering Initiative” (DRI), lists as its author Jonathan Florentino, the director of ICE’s Newark, New Jersey, Field Office of Enforcement and Removal Operations.
In a note embedded on top of an FAQ question, “What is the average length of stay for the aliens?” Tim Kaiser, the deputy chief of staff for US Citizenship and Immigration Services, asked David Venturella, a former GEO Group executive whom The Washington Post described as an adviser overseeing an ICE division that manages detention center contracts, to “Please confirm” that the average stay for the new mega detention centers would be 60 days.
Venturella replied in a note that remained visible on the published document, “Ideally, I’d like to see a 30-day average for the Mega Center but 60 is fine.”
DHS did not respond to a request for comment about the three men’s roles in the DRI project, nor did it answer questions about whether Florentino had access to a PDF processor subscription that might have enabled him to scrub metadata and comments from the PDF before sending it to the New Hampshire governor’s office. (The so-called Department of Government Efficiency spent last year slashing the number of software licenses across the federal government.)
The document itself says that ICE intends to update a new detention model by the end of September of this year. ICE says it will create “an efficient detention network by reducing the total number of contracted detention facilities in use while increasing total bed capacity, enhancing custody management, and streamlining removal operations.”
“ICE’s surge hiring effort has resulted in the addition of 12,000 new law enforcement officers,” the DHS document says. “For ICE to sustain the anticipated increase in enforcement operations and arrests in 2026, an increase in detention capacity will be a necessary downstream requirement.”
ICE plans on having two types of facilities: regional processing centers that will hold between 1,000 and 1,500 detainees for an average stay of three to seven days, and mega detention facilities, which will hold an average of 7,000 to 10,000 people for an average stay of 60 days. The arrangement has been referred to as a “hub and spoke” model, in which the smaller facilities feed into the mega ones.
“ICE plans to activate all facilities by November 30, 2026, ensuring the timely expansion of detention capacity,” the document says.
Beyond detention centers, ICE plans to buy or lease offices and other facilities in more than 150 locations, in nearly every state in the US, according to documents first reported by WIRED.
The errant comment in the PDF sent to New Hampshire’s governor is not the only apparent problem with the set of documents. According to the New Hampshire Bulletin, a previous version of an accompanying document, an economic impact analysis of a processing site in Merrimack, New Hampshire, referenced “the Oklahoma economy” in its opening lines. That document remains on the governor’s website as of publication.
Across the country, ICE’s mega detention center projects have sparked controversy. ICE’s purchase of a warehouse in Surprise, Arizona, spurred hundreds to attend a city council meeting on the topic, according to KJZZ in Phoenix. In Social Circle, Georgia, city officials have pushed back against DHS’s proposal to build a mega center there, because officials say the city’s water and sewage treatment infrastructure would not be able to handle the influx of people.
