Neoclouds: Meeting demand for AI acceleration | Computer Weekly
ChatGPT, launched in 2022, began making a significant impact on the market by late 2023, according to Synergy Research Group. The company’s chief analyst, John Dinsdale, points out that cloud market leaders have experienced accelerated revenue growth over time. Additionally, the emergence of numerous neocloud companies (see box: What is a neocloud?) has further strengthened the already positive momentum in the market.
This sentiment is reflected in the Rethinking AI sovereignty whitepaper, published to coincide with the World Economic Forum, which notes that surging demand for compute is spawning new AI infrastructure development models, such as neocloud providers, national cloud providers and industry-specific artificial intelligence (AI) clouds. While hyperscalers offer global reach and full-service cloud ecosystems, neoclouds provide specialised, high-performance compute infrastructure tailored to AI training and deployment.
This surge in demand for AI acceleration has produced a surprising beneficiary. According to Tiger Research, cryptocurrency mining firms, seeking to reduce their exposure to bitcoin’s volatile pricing, are redirecting their graphics processing unit (GPU) farms toward AI acceleration applications.
One example is the Australian bitcoin mining company Iris Energy. In 2021/2022, Neel Khokhani, a Dubai-based fund manager, acquired shares in the small Australian datacentre operator for $1 per share. After he helped the company leverage its substantial physical assets to transition into an AI infrastructure provider, the share price surged to $63 by 2026. This transformation led to a $60m increase in the valuation of the company, which now operates under the name Iren.
More choice
Before the emergence of neoclouds a few years ago, an organisation that wanted to work with AI had little choice but to go to a hyperscaler such as Amazon Web Services (AWS) or Google. While the hyperscalers offer AI infrastructure as part of their vast public cloud services portfolios, Roy Illsley, chief analyst at Omdia, says they tend to be expensive, and he recalls that a few years ago there was very little choice beyond Google’s AI offerings.
Analyst firm Gartner estimates that by 2030, neocloud providers will capture around 20% of the $267bn AI cloud market. Neoclouds are purpose-built cloud providers designed for GPU-intensive AI workloads. They are not a replacement for hyperscalers, but a structural correction to how AI infrastructure is built, bought and consumed. Their rise signals a deeper shift in the cloud market: AI workloads are forcing infrastructure to unbundle again.
In a recent Computer Weekly article, Mike Dorosh, a senior director analyst at Gartner, said IT buyers face three interrelated constraints, which influence their AI infrastructure decisions. First, there is what Dorosh calls cost opacity, which he says is rising as GPU pricing becomes increasingly bundled and variable, often inflated by overprovisioning and long reservation commitments that assume steady-state usage. Then there are supply bottlenecks, which he says constrain access to advanced AI accelerators. This results in long lead times, regional shortages and limited visibility into future availability. For Dorosh, the third area of concern for IT buyers is performance trade-offs, where virtualisation layers and shared tenancy reduce predictability for latency-sensitive training and inference workloads.
According to Dorosh, these pressures are no longer marginal. They create a market opening that neoclouds are designed to fill.
One example of a neocloud provider is CoreWeave, which the authors of the Rethinking AI sovereignty report say is undergoing a capacity expansion, having secured funding of $25bn since 2024. AI infrastructure buildout is also expanding through national cloud providers such as Humain (Saudi Arabia), G42 (United Arab Emirates), Outscale (France) and StackIT (Germany).
Another neocloud company that has been making headlines is Nscale, which has committed to delivering approximately 12,600 Nvidia GB300 GPUs at the Start Campus datacentre in Sines, Portugal, in the first quarter of 2026. This multi-year agreement sees Nscale offering Nvidia AI infrastructure services to Microsoft while providing European customers with sovereign AI within the European Union.
This deal builds on plans announced by Nscale and Microsoft in September 2025 to deliver the UK’s largest Nvidia AI supercomputer at Nscale’s Loughton AI Campus. The 50MW facility, scalable to 90MW, is expected to house approximately 23,000 Nvidia GB300 GPUs from the first quarter of 2027 to power Microsoft Azure services.
Gartner’s Neoclouds: The next offering arrow in the service provider quiver report notes that the consumption-based economics and transparent pricing offered by neocloud providers address the overprovisioning and hidden costs often associated with the offerings from hyperscalers. In fact, Gartner reports that thanks to this transparent, usage-based billing, IT buyers can expect cost savings of 60-70% on GPU instances compared with hyperscalers.
However, Dorosh says the more significant change is architectural rather than financial. Neoclouds encourage organisations to make explicit decisions about AI workload placement. Training, fine-tuning, inference, simulation and agent execution each have distinct performance, cost and locality requirements. Treating them as interchangeable cloud workloads is increasingly inefficient and often unnecessarily expensive.
As a result, AI infrastructure strategies are becoming inherently hybrid and multicloud by design – not as a by-product of supplier sprawl, but as a deliberate response to workload reality. The cloud market is fragmenting along functional lines, and neoclouds occupy a clear and growing role within that landscape.
“Neoclouds started as GPU as a service. If you needed GPUs, these companies bought or leased GPUs from Nvidia, and then they would slice them and sell them off to people in smaller groups and bundles,” says Omdia’s Illsley.
However, over time, neocloud providers have added software stacks and developed other services to meet the demand of IT buyers who need GPU power and the software stack required for AI training or AI inferencing.
While getting started on deploying AI workloads for inference or training is arguably not as simple as the one-click option offered on something like the AWS Marketplace, Illsley says the neocloud providers are maturing to the point where they have partnered with AI software providers and can therefore offer a full set of services to meet the requirements of IT buyers who need AI compute capacity. “They are saying that they have GPUs and now provide access through partnerships to the software to run AI workloads,” he says.
As an example, CoreWeave and Nvidia recently expanded their relationship to accelerate CoreWeave’s build-out of more than 5GW of AI factory capacity by 2030. Along with the hardware commitment, according to a market insight report from Macquarie Group, the agreement shows that CoreWeave is also working with Nvidia to incorporate its AI-native software within Nvidia’s reference architectures for enterprise clients and cloud partners.
One neocloud benefit identified by Gartner is access for IT buyers to specialised hardware, since neoclouds tend to prioritise cutting-edge GPUs, often securing first-to-market access through strategic partnerships. They also cater to bare-metal performance and optimised networking, since neoclouds are able to eliminate the layers of server virtualisation needed in multi-tenanted hyperscaler installations. Instead, they are able to offer direct hardware access, which Gartner says reduces latency and makes it possible to deploy high-bandwidth connectivity such as NVLink and InfiniBand for optimal GPU-to-GPU communication.
Choosing between a neocloud and a hyperscaler
While they may have begun as GPU-as-a-service type offerings, the evolution of neoclouds means there is now less of a gap between their AI services and the full-blown AI platform offerings from the hyperscalers.
Clearly, hyperscalers will eventually offer more attractive pricing to compete with neoclouds, but as Gartner senior director analyst Rene Buest points out, neocloud providers are trying to deliver more predictable pricing.
“Hyperscalers are very transparent in terms of their pricing models, so pay as you go, but at the end of the month, you don’t really know what you will pay,” he says. In other words, when using hyperscaler IT infrastructure, the monthly cost of compute resources consumed cannot be determined in advance.
IT leaders can benefit, at least in Buest’s view, from 70% cost savings by choosing a neocloud over a hyperscaler. “They also provide instant direct access to advanced GPUs, which tend to outpace the hyperscalers in speed and transparency,” he says.
Buest says neoclouds are very niche, “providing purpose-built infrastructure for AI workloads”. This not only meets customer demand today, but also suggests that neoclouds will be viable in the foreseeable future.
Khokhani’s successful investment in the former bitcoin miner Iris Energy, now known as Iren, suggests that the long-term AI capacity contracts secured by neocloud providers indicate a stable and robust business model.
He says: “People still think of Iren through a bitcoin-mining lens, but that misses what the business has become. What attracted me was the transition to long-dated, contracted datacentre infrastructure. When you have multi-year take-or-pay style contracts with an investment-grade counterparty like Microsoft, the economic risk starts to resemble infrastructure credit rather than crypto volatility.”
I Did Not Catch Air on the Aventon Current Electric Mountain Bike, but I Could Have
While Aventon is known first and foremost as an ebike brand, the company started by making fixies in 2013. That gives it some bona fides when it comes to making enjoyable rides for experienced cyclists. (In addition to the Current ADV, there’s also a higher-end model, the Current EXP, with a more expensive carbon frame and better components.) Since its first venture into e-MTBs with the Ramblas in 2024, the company has continued to develop very nicely specced electric mountain bikes for the price.
The designers behind the newest iterations did a masterful job. The Current ADV looks 100 percent the part of a contemporary mountain bike. With its 6061 aluminum frame, SRAM Eagle groupset, tubeless-ready Maxxis Minion tires wrapping a pair of double-walled 29-inch wheels, a 170-mm X Fusion Manic dropper post, a RockShox Psylo Gold front suspension fork that boasts 150 mm of travel, and a RockShox Deluxe Select+ rear shock, it’d be easy to confuse the Current ADV for a traditional analog mountain bike.
Photograph: Michael Venutolo-Mantovani
It’s worth noting that while the motor is proprietary to Aventon, the components are not. It might be difficult to get your local bike shop to look at the battery and motor, but assuming those are fine, it won’t be hard to swap anything else out should you need to repair it.
Despite its design and ride feel, both of which can easily make you forget you’re riding electric, the Current ADV is a class 1 e-MTB (which can be toggled to class 3 via the brand’s app), and one that delivers hours and hours of riding on a single charge.
The 800-watt-hour battery is tucked neatly into the bike’s relatively small downtube, giving a claimed range of up to 105 miles. Of course, I didn’t get nearly that, as I was constantly switching among the Current ADV’s five power modes (Auto, Eco, Trail, Turbo, and a new, 30-second Boost Mode for extra torque on big hills). Still, the longest day I spent in the bike’s super-comfy Selle Royal SRX saddle was about three hours. In that time, the battery dropped only about 20 percent.
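Taken at face value, those figures suggest real headroom. A rough back-of-envelope sketch of my own (assuming the drain rate stays linear, which it won’t on sustained climbs or in Turbo):

```python
# Back-of-envelope check on the review's battery figures.
BATTERY_WH = 800        # stated battery capacity in watt-hours
RIDE_HOURS = 3          # longest single ride reported
DRAIN_FRACTION = 0.20   # battery drop observed over that ride

energy_used_wh = BATTERY_WH * DRAIN_FRACTION   # energy consumed on the ride
avg_draw_w = energy_used_wh / RIDE_HOURS       # average power draw
implied_ride_hours = BATTERY_WH / avg_draw_w   # ride time on a full charge

print(f"Average draw: {avg_draw_w:.0f} W")
print(f"Implied ride time: {implied_ride_hours:.0f} hours")
```

At that rate, a full charge works out to roughly 15 hours of trail time, though real-world drain is far lumpier than this simple average.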
Eyes Up
The biggest flaw I found in the Current is small and seemingly simple, but it nonetheless had a major impact on my rides: when clicking through power settings, the bike beeps, and all those beeps sound the same.
When I’m mountain biking (and probably when you’re mountain biking, too), the last thing I want to do is to take my eyes off the trail. Having those beeps be the exact same tone meant I instinctively kept looking down at the top-tube-mounted display to see which mode I was in.
Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice
Medical experts I spoke with balked at the idea of uploading their own health data for an AI model, like Muse Spark, to analyze. “These chatbots now allow you to connect your own biometric data, put in your own lab information, and honestly, that makes me pretty nervous,” says Gauri Agarwal, a doctor of medicine and associate professor at the University of Miami. “I certainly wouldn’t connect my own health information to a service that I’m not fully able to control, understand where that information is being stored, or how it’s being utilized.” She recommends people stick to lower-stakes, more general interactions, like prepping questions for your doctor.
It can be tempting to rely on AI-assisted help for interpreting health, especially with the skyrocketing cost of medical treatments and overall inaccessibility of regular doctor visits for some people navigating the US health care system.
“You will be forgiven for going online and delegating what used to be a powerful, important personal relationship between a doctor and a patient—to a robot,” says Kenneth Goodman, founder of the University of Miami’s Institute for Bioethics and Health Policy. “I think running into that without due diligence is dangerous.” Before he considers using any of these tools, Goodman wants to see research proving that they are beneficial for your health, not just better at answering health questions than some competitor chatbot.
When I asked Meta AI for more information about how it would interpret my health information, if I provided any, the chatbot said it was not trying to replace my physician; the outputs were for educational purposes. “Think of me as a med school professor, not your doctor,” said Meta AI. That’s still a lofty claim.
The bot said the best way to get an interpretation of my health data was just to “dump the raw data,” like clinical lab reports, and tell it what my goals were. Meta AI would then create charts, summarize the info, and give a “referral nudge if needed.” In other chats I conducted with Meta AI, the bot prompted me to strip personal details before uploading lab results, but these caveats were not present in every test conversation.
“People have long used the internet to ask health questions,” a Meta spokesperson tells WIRED. “With Meta AI and Muse Spark, people are in control of what information to share, and our terms make clear they should only share what they’re comfortable with.”
In addition to privacy concerns, experts I spoke with expressed trepidation about how these AI tools can be sycophantic and influenced by how users ask questions. “A model might take the information that’s provided more as a given without questioning the assumptions that the patient inherently made when asking the question,” says Agarwal.
When I asked how to lose weight and nudged the bot towards extreme answers, Meta AI obliged in ways that could be catastrophic for someone with anorexia. As I asked about the benefits of intermittent fasting, I told Meta AI that I wanted to fast five days every week. Despite flagging that this was not for most people and could put me at risk of an eating disorder, Meta AI crafted a meal plan for me in which I would eat only around 500 calories most days, which would leave me malnourished.
OpenAI ‘pauses’ Stargate UK: Sudden setback or calculated move? | Computer Weekly
OpenAI has paused plans for its Stargate UK investment, which was to take place in concert with artificial intelligence (AI) datacentre builder Nscale and in the government’s AI growth zones.
The Microsoft-backed company has cited concerns about rising energy costs as well as the regulatory environment in the UK, particularly in copyright.
Affected locations – should OpenAI’s “pause” become permanent – are in the government’s north-eastern AI growth zone, centred on North Tyneside and Blyth in Northumberland.
According to an Nscale announcement in September 2025, Nscale, OpenAI and Nvidia agreed to establish Stargate UK as an infrastructure platform designed to deploy OpenAI’s technology in the UK.
It said at the time that OpenAI would “explore offtake of up to 8,000 Nvidia GPUs [graphics processing units] in Q1 2026 with the potential to scale to 31,000 Nvidia GPUs over time”.
It said Stargate UK would be based across a number of sites in the UK, but only named Cobalt Park, which is currently home to about 35MW of datacentre capacity.
Expansion of Cobalt Park has been touted, but most of this appears to centre on the now-shelved OpenAI/Nscale plans, and there are currently no planning applications lodged or construction underway for datacentre capacity at the site.
Calculated pause?
That much of OpenAI’s plans have been hedged with conditional wording and lack of concrete progress is not lost on some industry watchers. Bill McCluggage – director of IT strategy and policy in the Cabinet Office and deputy government CIO from 2009 to 2012 – said OpenAI’s decision to pause its proposed Stargate datacentre in the north east looks less a sudden setback and more a calculated pause.
“The stated concerns about uncertainty around UK copyright rules and high energy costs are real enough, particularly given the government’s fickle approach to copyright regulation and how power-hungry these facilities are,” he said. “But they are unlikely to be the whole story.
“With an IPO on the horizon, it is hardly surprising that OpenAI is tightening its risk profile, especially against a backdrop of rising infrastructure costs, supply chain fragility in advanced chips, and questions about the pace of AI commercial returns. Reports of delays and disagreements in similar US projects only reinforce that caution.”
McCluggage also suggested the move may be a means to apply pressure for clearer government support and policy certainty.
“In that light, the pause feels less like retreat and more like prudent positioning before committing to a multibillion-pound bet,” he said.
OpenAI has also cited concerns about “regulation”, in particular the UK government stance on copyright with regard to AI training. Here, the government had originally been set to allow AI training to be exempt from copyright, but then faced a backlash from creative sectors fronted by Elton John and Dua Lipa. In late March, the government adopted a holding position that barred open access to copyrighted works for AI training.
Liberal Democrat peer Lord Clement-Jones said: “This is disappointing news, but citing regulation as a reason for not proceeding with their investment in the UK is laughable given the European regulatory landscape and similar copyright issues. Energy costs and other wider economic risks may well have deterred OpenAI alongside potentially overstretched global investment plans.”
Call for clarity
Conservative peer Chris Holmes called for clarity around the issue and for a UK AI Bill.
“What we all need when it comes to AI is clarity, consistency and a coherent approach,” he said. “From the government right now, this is not quite the case. By yet again ‘ducking’ the copyright issue last month they leave everyone in limbo, with a sub-optimal non-solution for all concerned.
“If the government really wants us to optimise the AI opportunity, they must bring forward a cross-sector, cross-economy AI Bill that brings clarity, consistency and coherence of approach, which will benefit datacentre build, startups and scaleups, and a real sense of UK sovereign AI,” said Holmes. “Sadly, it seems in the upcoming King’s Speech on 13 May, they have no intention of taking this clear positive action.”
OpenAI and the UK government signed a memorandum of understanding in July 2025 aimed at strategic partnership to deliver AI-driven growth.
At the time, OpenAI cited its use by big UK names that included the NHS, NatWest, Oxford University and Virgin Atlantic.
OpenAI was careful to label commitments as “non-binding”, but these included exploring use of AI in the public sector, developing UK sovereign AI capability and security research.
At the same time, OpenAI said it would increase its footprint in the UK from the current 100 staff.