Implementation gap threatens progress in AI and 5G | Computer Weekly


There is a clear and substantial opportunity for the telecom industry to capture the next wave of growth, from artificial intelligence (AI)-driven services to private 5G and the internet of things (IoT), enabled by new capabilities. However, despite confidence in the industry that it can provide compelling AI and 5G use cases, most communication service providers (CSPs) have yet to begin implementing the capabilities required to deliver them.

That is according to Ericsson’s Breaking the cycle of missed opportunities global study, which was based on the opinions of 455 senior telecom executives and looked at how AI-driven applications place new demands on network performance and flexibility.

Ericsson noted that over recent decades, telecoms leaders have repeatedly identified promising new growth opportunities, only to fall short of turning that promise into sustained commercial impact. It stressed that value has been left on the table “again and again”, with others managing to react faster.

The research set out to explore a simple but critical question: is the industry genuinely better positioned to succeed this time? Encouragingly, the findings show that change is underway, with some CSPs already delivering tangible benefits, addressing long-standing challenges by investing in AI and automation technologies, exploring more agile ecosystems, and adopting cloud-native architectures.

The study showed that growth is no longer defined by a single use case or market but by a diverse mix of region-specific, sector-led and application-driven opportunities. Many of these demand greater speed, flexibility and collaboration than traditional operating models were designed to support.

The study found five key dynamics: there is no shortage of opportunity ahead; execution will determine who captures those opportunities; the industry has a clear view of where it has fallen short before; many of the capabilities needed to unlock future growth remain under-implemented; and closing the gap with industry front-runners will require simpler, more flexible deployment models.

Telecoms leaders were confident about future growth and clear on where opportunities lie. What remains less certain is whether existing operating models and capabilities are ready to support those ambitions at the pace and scale required.

Respondents showed strong alignment on both the opportunities telecoms failed to capitalise on and the reasons why. Legacy systems, slow decision-making, inconsistent investment and limited collaboration featured prominently. AI-driven operations, advanced 5G capabilities, cloud-native architectures and SaaS-based platforms were seen as essential enablers of future opportunity. Yet the uneven availability of capabilities such as 5G standalone is limiting how developers and technology providers design new products.

The majority (90%) of companies were confident in their organisation’s ability to unlock new revenue opportunities. The study also highlighted that the industry is clearly aligned on where the opportunity lies: private 5G and enterprise connectivity ranks as the top growth area (49%), followed by consumer and enterprise digital services with tailored performance (44%) and wide-area IoT connectivity (40%).

However, the research findings also cast light on the fact that the deployment of several key enabling technologies is lagging behind the industry’s ambitions. Around 70% of respondents have not commenced implementation of the technologies they identify as critical to achieving that growth, while more than 80% say future growth depends on scaling services rapidly and that the ability to experiment more easily would be a major advantage.

Two-thirds of companies have not commenced implementation of AI-driven network operations and 61% have not commenced implementation of advanced 5G capabilities, including 5G standalone and network slicing. Some 68% have not commenced adoption of SaaS-based IT platforms.

Concluding, Ericsson warned that the gap between belief and execution persists, and that history shows this gap has repeatedly shaped outcomes for the telecoms sector.

It emphasised that the real challenge – and where leadership must now focus their efforts – is translating that vision into action at pace and scale. Legacy systems, ingrained behaviours and rigid operating models have often slowed progress, allowing others to move faster and capture value – and CSPs must not let the past repeat itself.

“The opportunity ahead for the telecom industry to capture the next wave of growth is clear, from AI-driven services to private 5G and IoT enabled by new capabilities,” said Razvan Teslaru, head of strategy, cloud software and services at Ericsson.

“While there is no single path to capturing that opportunity, CSPs are aligned in the capabilities required to deliver it. The challenge is that adoption of those capabilities remains limited, and this execution gap will ultimately determine who translates ambition into real growth. This will require more flexible approaches, with technology partners and new ecosystems enabling operators to move faster and unlock value.”  




Inside the Race to Develop a Test for the Rare Andes Hantavirus



As passengers return to the US from the cruise that saw a rare hantavirus outbreak, much of the country is lacking a basic public health tool: a test to diagnose the illness in the earliest stages of infection. Nebraska may be the first state with the ability to do so.

In just a few days, a lab at the University of Nebraska Medical Center in Omaha developed its own diagnostic test for the Andes virus in anticipation of receiving 16 American passengers from the ship.

“I believe we might be the only lab in the nation that has this test available at the moment,” Peter Iwen, director of the Nebraska Public Health Laboratory, tells WIRED, referring to polymerase chain reaction (PCR) testing, which was important during the Covid-19 pandemic. Its ability to detect tiny quantities of the virus before patients have full-blown symptoms makes it crucial for identifying cases quickly, getting patients prompt medical treatment, and preventing the spread of disease.

The university’s medical center is home to a highly specialized biocontainment unit designed to care for patients with severe infectious diseases that lack vaccines or treatments. Staff members previously treated patients during the 2014 Ebola outbreak and cared for some of the first Americans diagnosed with Covid in 2020.

When Nebraska was notified that it would be receiving some of the passengers, Iwen contacted the US Centers for Disease Control and Prevention to see if it had tests on hand. He learned that the CDC has the ability to run a serological test, which looks for the presence of hantavirus antibodies. But people don’t develop antibodies until they are actively sick and their body has had time to mount an immune response.

Andrew Nixon, a spokesperson for the US Department of Health and Human Services, told WIRED that the CDC has a PCR test for the Andes virus but that it’s a research test that cannot be used for patient management. Research tests are used in scientific experiments, while diagnostic tests that are meant to confirm or rule out a disease in patients need to be rigorously tested, or validated, to make sure they are capable of producing consistent results. Nixon said the agency is working on validating its PCR test.

Iwen’s lab mobilized quickly to track down the materials needed to build and validate a PCR test from scratch. They called a lab in California—a state that has previously seen hantavirus cases—but that lab’s test was for a specific strain found in the US. Andes virus has previously only been detected in South America and isn’t found in rodents native to the US.

“Tests that we have available in the US will not detect that virus that’s found in South America,” he says, noting that the Andes virus is very different genetically from the primary hantavirus strain found in the US, known as the Sin Nombre virus.

The Nebraska team reached out to Steven Bradfute, a hantavirus scientist at the University of New Mexico. Frannie Twohig, a graduate student in Bradfute’s lab, had developed an Andes virus PCR test for research purposes as part of her PhD work. Bradfute’s lab also has genetic material from the Andes virus that is not capable of causing disease, which the Nebraska lab would need to validate its test.

On Friday, Bradfute shipped the genetic material, along with a box of chemical reagents needed to detect the virus in blood samples, overnight to Nebraska. By Saturday morning, Iwen’s team had what it needed to start assembling and validating its test.

It was enough to run about 300 tests, which took all day Saturday and Sunday, Iwen says. His team added Andes genetic material in various concentrations to samples of healthy human blood to see if their test could detect it. Then, they compared the results to control samples. The team used up about a third of its tests on the validation process and now has the capacity to conduct a few hundred tests on patient samples.




NASA’s Curiosity Rover Got Its Drill Stuck on a Rock. Here’s How They Freed It



While it has enabled many exciting discoveries, the Curiosity Rover has also encountered its share of setbacks. The latest left NASA engineers speechless.

On April 25, Curiosity drilled into a rock nicknamed “Atacama” to collect a sample. When the rover retracted the robotic arm after drilling, the entire rock unexpectedly lifted off the Martian surface—all 28.6 pounds of it. While other Curiosity drilling operations have caused cracks or breaks in the upper layers of Martian rocks during the rover’s nearly 14-year mission, this is the first time one has remained stuck to the sleeve that surrounds the drill’s rotating tip.

As the space agency recounts, the black-and-white obstacle-detection cameras mounted on the front of the rover’s chassis captured this peculiar “accident” in a sequence of images, allowing engineers to get to work immediately on freeing the rock by moving the robotic arm and operating the drill repeatedly over several days.

Engineers initially tried to remove the rock by vibrating the drill, to no avail. On April 29, they adjusted the position of the robotic arm and tried vibration again, but only managed to knock some sand off the rock. On May 1, the team gave it another try by tilting the drill more, rotating and vibrating it, and spinning the drill bit. The team expected to have to repeat these operations several times, but instead the rock broke loose on the first attempt, shattering into a multitude of pieces when it hit the Martian soil.

NASA’s Curiosity rover was developed by the Jet Propulsion Laboratory and landed on Mars in August 2012 with the purpose of looking for evidence that the Red Planet might once have had conditions that could support microbial life. In 2020, it conducted an experiment in the Glen Torridon region within Gale Crater, an area rich in clay minerals that strongly indicate the past presence of water, analyzing samples it collected there with the onboard instrument suite known as Sample Analysis at Mars.

This story originally appeared in WIRED Italia and has been translated from Italian.




Data dive: Power grid data shows birth of AI in UK datacentres | Computer Weekly



Electricity grid “land grabs” to ensure capacity ahead of graphics processing unit (GPU) shipments. Additions to capacity as ever-larger datacentres switch on. And the arrival, deployment and “burning in” of Hopper and Blackwell GPUs. 

These are some of the things we can see in data from electricity grid provider UK Power Networks (UKPN), which provides electricity utilisation rates taken half hourly for 96 datacentre sites within its region. This stretches from the datacentre hotspots of west London and Docklands, south-eastwards to Kent, Surrey and Sussex, and includes all of Essex and East Anglia. 

Computer Weekly research conducted in March 2026 analysed Energy Performance Certificate data to identify datacentre locations and capacities, finding 80 datacentres in the UKPN region with a combined capacity of 798MW.

The UKPN dataset starts at the beginning of 2023 and runs to April 2026. Altogether, it comprises almost 5.4 million rows and covers datacentres categorised by the voltage they import from the grid: extra-high voltage (12 sites), high voltage (60), and low voltage (24).
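
A quick sanity check on the quoted row count, using only the figures above (96 sites reporting every half hour from the start of 2023 to April 2026), lands in the same ballpark; the small shortfall against “almost 5.4 million” would be explained by gaps in the telemetry or sites connecting partway through the period. A minimal sketch:

```python
# Rough row-count check: 96 sites x 48 half-hourly readings per day
# over roughly 3 years and 4 months (assumed span, start of 2023 to April 2026).
sites = 96
readings_per_day = 48
days = 3 * 365 + 120  # approximate number of days in the period

print(sites * readings_per_day * days)  # ~5.6 million, close to the quoted ~5.4 million
```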

Utilisation ratios are calculated by comparing actual electricity import – measured half-hourly by smart meter – against the maximum capacity booked by the customer. 
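
As a minimal sketch of that calculation, assuming each half-hourly smart meter reading is an energy figure in kWh and that the booked capacity is held in kW (the column names below are illustrative assumptions, not the actual UKPN field names):

```python
import pandas as pd

def add_utilisation(df: pd.DataFrame) -> pd.DataFrame:
    """Add a half-hourly utilisation column to a frame of meter readings.

    A kWh figure for a 30-minute window is doubled to give the average kW
    drawn over that window, then divided by the capacity the customer booked.
    Column names ('energy_kwh', 'booked_capacity_kw') are assumptions.
    """
    avg_kw = df["energy_kwh"] * 2.0
    return df.assign(utilisation=avg_kw / df["booked_capacity_kw"])
```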

Site voltage corresponds to the likely size of the datacentre. Low-voltage (LV) sites are typically 400V connections for smaller enterprises, edge datacentres and server rooms. High voltage (HV) in the UKPN data likely refers to 11kV or 33kV connections to colocation hubs and mid-range datacentres. Extra-high-voltage (EHV) connections are 33kV, 66kV, or 132kV, and cover hyperscale campuses and emerging artificial intelligence (AI) factories.

The average utilisation rate for all UKPN data is just over 20% of booked capacity. Extra-high-voltage sites use the least of their allotted supply (12%), while low-voltage sites use more (18%).

We have taken the data points and split them by the voltage levels that correspond to datacentre size. These are shown in a chart in which dips, plateaus, spikes and so on are visible, corresponding to real-world events that include the activation of new datacentre capacity, increases in booked capacity in advance of AI GPU deployments, the heatwave of July and August 2025, the actual deployment and “burning in” of AI datacentre infrastructure, and a rush to beat Ofgem’s “use it or lose it” directive in early 2026.
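
A sketch of how the per-voltage trend lines in such a chart could be produced, assuming a tidy frame with 'timestamp' (datetime), 'voltage_class' and 'utilisation' columns, for instance the output of the sketch above; again, the field names are illustrative rather than the real UKPN schema:

```python
import matplotlib.pyplot as plt
import pandas as pd

def plot_by_voltage(df: pd.DataFrame) -> None:
    """Plot daily mean utilisation, one line per voltage class (LV/HV/EHV)."""
    daily = (
        df.set_index("timestamp")                 # 'timestamp' must be a datetime column
          .groupby("voltage_class")["utilisation"]
          .resample("D")                          # daily mean smooths the half-hourly noise
          .mean()
          .unstack(level=0)                       # one column per voltage class
    )
    daily.plot(figsize=(10, 4), title="Daily mean utilisation by voltage class")
    plt.ylabel("Utilisation ratio")
    plt.show()
```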

Bear in mind that the chart shows utilisation rate, so while in some cases the cause of a spike might be obvious – such as increased power draw for cooling during a heatwave – other changes might not be so obvious, such as an increase in booked capacity that changes the ratio.  

Now let’s look at some of the key events that show up in the data. 

Small sites can’t deal with the heatwave: July and August 2025

One of the most pronounced spikes in the green (low-voltage site) data occurred in July and August 2025, when meteorological data shows the UK faced four distinct heatwaves between June and August. Temperatures reached 35.8°C in Kent on 1 July, while August saw sustained high night-time temperatures. 

The spikes in the chart show the electrical signature of smaller air-cooled datacentres and server rooms desperately trying to keep temperatures down. Unlike large hyperscale sites that use liquid cooling, smaller sites are the most vulnerable to climate stress.

They typically rely on legacy direct expansion air-conditioning. When ambient temperatures exceed 30°C, these units draw two or three times their normal power just to maintain the status quo. Electricity utilisation ratio spikes here because total facility power skyrockets while IT load remains static, resulting in a temporary collapse of power usage effectiveness (PUE).
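
A worked illustration with invented figures (not numbers taken from the UKPN data) shows why both PUE and the utilisation ratio jump when cooling power spikes while the IT load and booked capacity stay fixed:

```python
# Illustrative only: fixed IT load and booked capacity, cooling power varies.
it_load_kw = 100.0          # servers: unchanged by the weather
booked_capacity_kw = 400.0  # capacity reserved from the grid

for label, cooling_kw in [("normal day", 40.0), ("35°C heatwave", 120.0)]:
    total_kw = it_load_kw + cooling_kw
    pue = total_kw / it_load_kw                  # power usage effectiveness
    utilisation = total_kw / booked_capacity_kw  # what the UKPN ratio measures
    print(f"{label}: PUE={pue:.2f}, utilisation={utilisation:.0%}")
# normal day: PUE=1.40, utilisation=35%
# 35°C heatwave: PUE=2.20, utilisation=55%
```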

Capacity increases: Ratios decline – late 2023 into 2024

Much of the story revealed by the data is that of large-scale sites adding capacity, and booking more electricity supply in anticipation of deliveries and deployment of GPUs. This started to happen in late 2023 and into 2024. 

The sudden drop-off in September and October 2023 for the largest datacentres – the red line – is likely the result of capacity coming online. 

So, when a hyperscale site activates a new phase, its import capacity – the total power booked from the grid (the denominator) – jumps instantly. But because the IT load (the numerator) only populates as servers are physically racked and “burned in” over subsequent months, the utilisation percentage appears to crash.
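
A hypothetical before-and-after calculation (the megawatt figures are invented for illustration) makes the effect plain:

```python
# Invented figures: what happens to the ratio when a new phase is energised.
it_load_mw = 12.0        # servers already racked and drawing power

booked_before_mw = 40.0  # capacity booked before the new phase
booked_after_mw = 100.0  # new phase energised, nothing racked in it yet

print(f"before: {it_load_mw / booked_before_mw:.0%}")  # before: 30%
print(f"after: {it_load_mw / booked_after_mw:.0%}")    # after: 12%
```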

At the same time, we see gradual (blue line) and more pronounced (green line) declines in late 2023 and into 2024. That’s a likely indication that larger, newer, more efficient facilities are pulling general-purpose workloads away from the older small and mid-range facilities.

CBRE forecast that large-scale colocation take-up in London would hit 130MW in 2024, and as this new, efficient capacity came online, it “emptied” the less efficient sites.

The fact that the smallest datacentres (likely enterprise-owned or older retail colocation sites) took until May 2024 to settle from utilisation ratios of 0.2 to 0.15 indicates a longer migration period. Unlike the hyperscalers, which move workloads in massive, software-defined blocks, smaller organisations are more likely to be bound by physical hardware lifecycles. 

Sites that activated capacity in late 2023 included:

  • Iron Mountain’s LON-2 in Slough, with the first phase of its eventual 27MW of capacity, was confirmed as operational at the end of 2023. Its grid capacity was likely booked into the UKPN system in September 2023 as part of its pre-commissioning phase.
  • Equinix’s LD11x/LD13 expansions, meanwhile, were specifically designed to lure the big three hyperscalers, and moved from construction to “available capacity” in late 2023.

GPU supply constraint and the ‘West London land grab’

From late 2023, the lead times on Nvidia GPU clusters became very lengthy, with Omdia reporting 36 to 52 weeks for H100-based servers. At the same time, datacentre operators scrambled for grid supply, often booking way beyond what they would immediately use so they could be ready to deploy Hopper GPUs when they finally arrived in mid- to late-2024. That’s another reason grid utilisation appears to plummet in late 2023 in the chart. 

In July 2022, the Greater London Authority (GLA) sent a warning to developers, stating that major new planning applications in Hillingdon, Ealing and Hounslow would face delays of up to a decade, with some connection dates pushed back as far as 2035 or 2037.

That breaking point was triggered by the extreme concentration of datacentres along the M4 corridor. By mid-2023, datacentres accounted for 18% of total electricity demand in West London. Transmission-level and local distribution capacity was fully allocated because developers had legally “reserved” future power capacity, leaving zero headroom for new housing or industrial projects.

Financial reports from datacentre Real Estate Investment Trusts (REITs) like Equinix and Digital Realty back this up. In their 2023 annual reports, these firms noted record “backlog” levels, where capacity was signed and committed but not yet billing.

In the data, a high backlog means the distribution network operator (UKPN) has allocated the power, but the servers aren’t spinning, and this matches the 2023-2024 trough where utilisation ratios settled at a lower baseline than before the AI land grab.

The reason the ratio didn’t bounce back immediately is that AI density is more efficient than legacy density. A rack of H100s might draw 40kW, but it replaces dozens of legacy racks drawing 5kW each. As Hopper GPUs finally arrived in mid- to late-2024, they filled that phantom capacity, but because grid capacity had been aggressively over-booked in 2023, utilisation ratios remained low. The industry effectively built a buffer that it is still filling today.

GPU deployment, AI datacentre burn-in: 2024 and 2025

The peaks of mid-2024 fit with the likely deployment of Nvidia Hopper (H100/H200) GPUs. The Hopper generation was the first GPU to hit a 700W Thermal Design Power (TDP) – ie, the wattage for which its cooling had to be designed. An HGX H100 node of eight GPUs draws roughly 10.2kW. The spikes in the data from late-2024 likely represent the initiation of large-scale training runs where thousands of these units synchronise their power draw.
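
A back-of-the-envelope sketch suggests the scale involved when thousands of such nodes step in unison; the node-overhead and cluster-size figures below are assumptions for illustration, not measured or vendor values:

```python
# Rough cluster power estimate built from the H100 node figure quoted above.
gpu_tdp_w = 700          # Hopper H100 SXM TDP
gpus_per_node = 8        # HGX H100 node
node_overhead_w = 4_600  # assumed CPUs, NICs, fans etc. to reach the ~10.2kW node figure
nodes = 2_000            # hypothetical training cluster size

node_kw = (gpu_tdp_w * gpus_per_node + node_overhead_w) / 1_000
cluster_mw = node_kw * nodes / 1_000
print(f"per node: {node_kw:.1f}kW, cluster at a synchronised step: {cluster_mw:.1f}MW")
# per node: 10.2kW, cluster at a synchronised step: 20.4MW
```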

These represented a shift in datacentre power dynamics, from the steady-state draws of the previous Ampere (A100) generation to highly volatile, high-density profiles.

Late 2025 marked a pivot from Hopper to Blackwell’s high-density liquid-cooled requirements. This transition is reflected in the UKPN telemetry as a distinct shift from steady-state power draw to the volatile, “peaky” plateaus of large-scale Blackwell training epochs. 

The “mountain range” in the chart beginning in November 2025 marks the power-on month for the UK’s first Nvidia Blackwell (B200) clusters. This is the signature of initial model training, which is an extremely intensive, non-stop process.

Each B200 GPU has a base 1,000W TDP, configurable up to 1,200W. The GB200 Grace-Blackwell Superchip, meanwhile – which shipped from late 2024 – mandated direct-to-chip liquid cooling to manage its extreme density. 

The smoking gun here is the launch of the Nebius AI Cloud cluster at Ark Data Centres’ Longcross Park in Surrey. This went live in November 2025 with several thousand Nvidia Blackwell GPUs and a 16MW signature. 

The EHV line remains elevated and jagged through March 2026, reflecting the high, sustained draw of “epochs” and “checkpoints” during frontier model pre-training.

One final spike: Use it so they don’t lose it?

The giant spike in late March 2026 coincides with the Ofgem Demand Connections Reform deadline of 13 March 2026. 

In the face of massive increases in electricity demand, not least by datacentre operators – and with the demand queue soaring to 125GW by June 2025 – Ofgem had proposed tougher financial tests and “use it or lose it” rules to clear the queue. Large-scale operators with parked capacity were incentivised to show power draw to prove their projects were “viable” and “strategically important” before the new rules could claw back their unutilised megawatts.


