
What businesses need to fix now to avoid expensive 6G lock-ins | Computer Weekly



5G has significantly improved sectors such as manufacturing, healthcare and logistics through higher speed, lower latency and the ability to simultaneously connect a vast number of devices. However, 5G roll-outs are still incomplete in many regions across the world, with core performance and infrastructure issues persisting.

Despite this, several enterprises are already preparing for 6G, the next generation of mobile connectivity, even though the technology is still in the applied research and development phase. Official standards are expected to be fully determined by around 2029, under 3GPP Release 21, according to the European Parliament. This has raised a number of important questions for organisations.

Are enterprises jumping the gun on 6G preparation when key challenges around 5G performance and infrastructure remain unsolved? And could early investment result in expensive architectural lock-ins down the line, once standards are fully finalised? 

“6G is not simply about streaming richer AR experiences. It is about transforming every sensor, robot and AI [artificial intelligence] system into an active node in a unified, adaptive digital nervous system,” says Khaled Elbehiery, professor at the Open Institute of Technology.

Why 6G is not just a mobile upgrade 

As a system-level shift, 6G is far more than an incremental mobile connectivity upgrade. Rather than treating AI-driven networking, edge computing, sensing and communications as add-on features, as with 5G, 6G is expected to fundamentally integrate them into its core architecture.

For enterprises, this could considerably change the role of connectivity, as data transmission signals will also be used for environment monitoring, motion detection and automation support, especially in smart traffic management, industrial robotics and emergency services.

6G will also introduce higher levels of network automation, with self-optimising networks handling real-time resource allocation, network configuration and inference management. Predictive maintenance is expected to help resolve traffic spikes and network failures before they happen as well, which will boost reliability.

As an “edge-native” architecture, 6G will have distributed intelligence and low latency, which can significantly advance remote surgery, augmented reality and on-device model training.

“The real value of 6G lies in device density, deterministic latency and integrated sensing, not headline gigabit rates. Despite early warnings about cyber security risks, many early discussions lack a strong emphasis on security by design,” says Elbehiery.

In short, given the widespread impact of 6G on core infrastructure, treating it merely as “faster 5G” could prove a costly mistake.

Where enterprises are already getting 6G wrong 

By looking at 6G only as incremental connectivity, several enterprises are already making investment and infrastructure planning decisions based on near-term feature expectations that may not hold once practical deployments arrive.

One of the most immediate mistakes is mis-timing commercialisation and investment strategies. By failing to devise early revenue-generating and monetisation strategies and still relying on 5G’s “build it and they will come” mindset, enterprises risk treating 6G as a frantic race. This can lead them to over-invest in premature technology before standards are fully defined, rather than developing practical deployments for the 2030s.

Organisations could also make architectural and infrastructure mistakes. By relying on single vendors and closed architectures, enterprises vastly increase the risk of expensive and complicated lock-ins down the line, especially since 6G standards are still in flux.

There are also inflated expectations that 6G will fix 5G shortcomings such as dead zones. In reality, bridge technologies such as 5G-Advanced (3GPP Release 18+) will likely continue to be needed. Similarly, 6G is unlikely to eliminate the need for Wi-Fi through universal coverage, due to persistent practical constraints such as high frequencies struggling to penetrate walls.

“Many organisations out there are deploying edge compute in a way that is optimal for current 5G use cases, without thinking about what those environments are going to need to do in terms of interoperability across several networks and locations in a 6G world,” says Tomas Novosad, consumer technology analyst and founder of Fibre in my Area. 

Organisations are also under-accounting for system complexity and governance gaps in 6G, especially the massive integration effort and the high energy and hardware demands the technology brings. Additionally, AI-native narratives have at times been overhyped, particularly around early 6G applications. While standards are expected to integrate AI heavily for beam management and energy efficiency, AI is still likely to remain optional for critical infrastructure.

With the rise of AI-driven network management, questions of data ethics, privacy and accountability will also become more complex and will need to be addressed.

The real danger: lock-in risk 

Currently, the biggest risk of early 6G decisions is lock-in, which can come in many forms. The most common is supplier lock-in, when enterprises continue to rely on established infrastructure providers to reduce integration failures. However, this can often result in high-cost, multi-year maintenance contracts and dependence on single ecosystems, which can be slow and complicated to reverse once deployed.

Cloud infrastructure highlights a similar problem, where data, networking and orchestration layers become less portable over time, embedded in single environments. When this happens, migrating to another provider becomes a full architectural rebuild rather than a quick technical transition.

Despite increased policy support, enterprises are still hesitant to fully embrace Open RAN, mainly due to commercial pressures. This often means that supplier diversification remains largely theoretical at scale.

Another risk is committing too early to pre-standard 6G technologies (pre-3GPP Release 21). This can lead to infrastructure being built on assumptions that the final standards do not bear out, necessitating expensive replacement or redesign once those standards are determined.

Early architecture and spectrum decisions may also cause long-term rigidity at the physical layer. 6G is likely to need much more mid-band spectrum than 5G, and committing infrastructure to current assumptions could cause incompatibility with potentially higher-frequency, denser deployments years later.

Given the accelerating shift towards a multi-network future, which will include 6G, Wi-Fi, satellite systems and private networks, overinvestment in any single network model could trap organisations into structures that no longer align with how connectivity is delivered. 

“The most expensive 6G mistake will not be buying the wrong radio. It will be building a network that cannot evolve to support innovation and the explosion in service demand without a procurement crisis,” warns Leid Zejnilovic, co-academic director of the digital data design institute at Nova SBE.

What enterprises need to fix for 6G right now 

To minimise the chances of expensive 6G lock-ins and long-term rigidity, organisations need to take some concrete steps right now. “First, audit where current vendors control data gravity and operational workflows,” says Zejnilovic. “Second, insist on open interfaces for telemetry, policy and automation in new contracts. Third, put governance around AI-driven network changes in place now, before those tools become too embedded to challenge.”

A key step is building architectural resilience through network-agnostic, hybrid connectivity design, prioritising cost control over optimisation-driven investments. This requires modular, AI-aware architectures that separate connectivity from application logic and use portable interfaces to enable flexibility across both local and global environments.

Ideally, systems should be able to work across 5G, 6G, Wi-Fi and private networks, rather than focusing too narrowly on only one. This is because no single connectivity layer is expected to remain dominant or stable long enough to back long-term infrastructure decisions.

Private networks and edge deployments should be treated as useful, targeted tools rather than default architecture, as overusing them can create more fragmentation and operational complexity than it resolves. This is particularly true when they are deployed without clear workload-dependent criteria.

Organisations must prepare for automated and AI-driven networks as well. Observability, governance and accountability will all become much more critical as network operations become more autonomous. This is likely to shift risk from connectivity failure to hard-to-audit automation layers. 

“The question is no longer just whether the network can optimise itself,” Zejnilovic adds. “It is whether the enterprise can explain, constrain and reverse those optimisations when they affect performance, resilience or compliance. Auditability and human override will become core control points.”

Similarly, enterprises should avoid chasing incremental headline speed gains, or making investments on the back of supplier-driven “6G-native” narratives, to minimise the chance of expensive rebuilds once standards are fully formed.

From 5G to 6G: an uneven transition

Instead of a clean shift, the path from 5G to 6G will likely be messy, uneven and highly fragmented, impacted by geopolitical, commercial and technical limitations.

One of the main reasons is that 5G continues to underperform in several areas and has not been fully monetised, despite significant infrastructure investments. This makes funding for yet another massive infrastructure upgrade harder to obtain. As a result, 6G is being viewed more conservatively as an incremental extension rather than a reset.

Increasing focus on 5G-Advanced as a bridge to 6G has further complicated the transition. Instead of a clear shift, networks are expected to evolve in overlapping phases, which could extend a hybrid environment and slow large-scale adoption. 

Significant technical challenges – such as building much denser infrastructure and increasing energy capacity – remain. Both deployment complexity and costs are likely to be higher than previous generations.

Geopolitical fragmentation is shaping 6G development too. Digital sovereignty ambitions and competing standards across the US, Europe and China could hinder interoperability, creating a multi-speed global roll-out instead.

Most importantly, 6G still doesn’t have a clear commercial driver, unlike previous generations. There is no “killer app” or immediate need that supports widespread adoption, making demand uncertain, despite expectations of advanced sensing and holographic communication.

These factors highlight a transition which will be defined by coexistence, not replacement. As such, organisations need to be prepared to operate across overlapping connectivity generations, rather than anticipating a single, clear transition to 6G.

What 6G readiness looks like

6G readiness isn’t about early adoption, but about avoiding potentially regrettable and expensive decisions today. Disciplined, architecture-first enterprises that focus on hybrid connectivity and network-agnostic systems will be best placed to adapt as standards become defined.

However, organisations waiting for a clean reset or swayed by overhyped “6G-native” narratives risk sinking money into premature investments that will be difficult to reverse down the line.

“6G is more than just another generation of wireless technology; it is a redefinition of how digital systems connect, compute and coordinate across the planet and beyond,” Elbehiery concludes. “Organisations that recognise this early will design for flexibility, openness and intelligence. Those that do not risk locking themselves into architectures that cannot evolve with the future.”

The real challenge is not preparing for 6G but learning to adapt to a messy, overlapping transition.





Papa Johns Is Getting Into Drone Delivery—but Not for Pizza



Starting today, eager customers of the US pizza restaurant chain Papa Johns living in one corner of southern North Carolina will have the opportunity to receive their food from the sky, thanks to a new collaboration with Alphabet’s drone company, Wing. But Papa Johns’ signature pizzas won’t be on offer. Instead, drone-loving North Carolinians will have to choose between three kinds of sandwiches, a newer product for the fast-food chain: Philly cheesesteak, chicken bacon ranch, or steak and mushroom varieties.

Drone deliveries are popping up in more communities across the US and the world. Questions about the long-term economics and regulatory picture around unmanned aerial vehicles persist, but Wing boasts partnerships with Walmart, Panera, and DoorDash and is delivering through the sky to customers in four metro areas: Atlanta, Charlotte, Dallas-Fort Worth, and Houston. (In 2019, Wing received the US Federal Aviation Administration’s first certificate allowing a drone delivery company to operate in the country.) Competing drone companies, including Zipline, Amazon Prime Air, and Flytrex, fly packages, medical supplies, and Chipotle burritos in select communities across countries like Ghana, Japan, and the US.

But until very recently, drone operators have struggled to fly full-size pizzas. For companies hoping to break into the food delivery space, this is unfortunate: 11 percent of the US population eats a slice on any given day, according to the US Department of Agriculture. In a fast-diversifying restaurant industry, getting pizzas to customers is still big business. But the realities of physics, engineering, and the restaurant business conspire to make pizzas a challenge for drones.

Flying Pizzas

Traditionally, pizza is the experimental tech delivery of choice. The familiar and cheap cheese-sauce-bread combo has been loaded onto self-driving cars and autonomous sidewalk delivery vehicles and has been assembled by robots. It’s a fast and satisfying option, especially for busy families tight on time. And, theoretically, it’s a great fit for automated drones, among the faster delivery options, since people love fresh, piping-hot pizza.

But transporting one by drone requires some extra work, says Wing CEO Adam Woodworth. “Pizza comes in a very different box, with a big, flat surface area,” he says. They’re not naturally aerodynamic. Also, “you don’t want a pizza tilted.”

Wing’s relatively lightweight drones are engineered to carry three specific package sizes; right now, pizza boxes aren’t one of them. Woodworth says a new design is on the horizon. “I want to see pizzas coming at me from the sky,” he says.

Flytrex, an Israel-based drone delivery company, announced late last month that it had finally solved the problem. In collaboration with rival pizza chain Little Caesars, the company began delivering via drone up to two large pizzas (16 inches each), plus sodas and bread, in Wylie, Texas, a suburb of Dallas. The leap comes courtesy of a much bigger new drone, capable of carrying up to 8.8 pounds for four miles.





Chevron Wants a School District Tax Break for a Data Center Power Plant in Texas



A major oil company is seeking a state tax break in Texas worth hundreds of millions of dollars to build a massive power plant. The energy won’t be going to residential customers, though. Instead, the gas plant will be used to power a data center whose eventual tenant could be Microsoft.

Chevron subsidiary Energy Forge One has filed an application with the State Comptroller’s board to obtain a tax abatement for a power plant it’s building in West Texas. In late January, the comptroller’s office made a recommendation to support the application’s approval—the first such approval under the program for a power plant intended solely for data center use.

In March, following news reports that Microsoft was looking into purchasing power from the Energy Forge project, Chevron said that it had entered into an “exclusivity agreement” with Microsoft and Engine 1, an investment fund involved in the project. In January, Microsoft pledged to be a “good neighbor” in communities where it is building data centers, including promising to pay a “full and fair share of local property taxes.”

The potential tax abatement for the project comes as big tech companies are battling rising public fury about data centers and electricity costs. It also comes as lawmakers start to cast a more critical eye on ballooning incentives for data centers, some of which have cost some states—including Texas—$1 billion or more each year.

Chevron spokesperson Paula Beasley told WIRED in an email that all tax incentives under consideration for the Energy Forge project “apply solely to the power generation facility” to “support new energy infrastructure, and do not extend to any future data center facilities that may be served.” Beasley also said that there is currently “no definitive agreement” with Microsoft for this power plant.

“Microsoft is in discussions with Chevron,” Rima Alaily, Microsoft’s corporate vice president and general counsel for infrastructure, said in a statement to WIRED. “No commercial terms have been finalized, and there is no definitive agreement at this time.”

Chevron is applying for a tax abatement for the project under Texas’ Jobs, Energy, Technology, and Innovation (JETI) Act. Passed in 2023, the program is intended to incentivize businesses to build large infrastructure projects in the state in exchange for guarantees to bring jobs and revenue. Accepted projects get a cap set on the amount of taxable property they can be charged through local school district taxes.

The Pecos-Barstow-Toyah school board approved the project’s application at a meeting in February. The state pays for the tax abatement, so the school district itself does not lose out on any money.

According to documents from the state, the Chevron project could net more than $227 million in savings for the company over a 10-year period, depending on the eventual size of the project and investment. The application says the plant will provide “over 25 permanent, full-time jobs,” though there’s no requirement to do so because it’s considered an electricity generation facility.

The planned gas plant won’t connect to the grid, instead providing “electricity for direct consumption by a data center,” according to its application. So-called behind-the-meter gas plants have become increasingly popular for data center developers facing yearslong waits to connect to the grid. According to data from nonprofit Global Energy Monitor, the US at the start of the year had nearly 100 gigawatts of gas-fired power in the development pipeline solely to power data centers, with several more massive gas projects announced since the data was published.

A WIRED analysis of fewer than a dozen power plants being constructed to explicitly serve data centers, including the Chevron project, found that these power plants are permitted to emit more greenhouse gases than many small- to medium-size countries. The Energy Forge plant alone could emit more than 11.5 million tons of CO2 equivalent annually—more than the country of Jamaica emitted in 2024. Beasley told WIRED that the plant “is being designed to comply with applicable environmental regulations, including all applicable federal and state air quality standards.”




CUDA Proves Nvidia Is a Software Company



Forgive me for starting with a cliché, a piece of finance jargon that has recently slipped into the tech lexicon, but I’m afraid I must talk about “moats.” Popularized decades ago by Warren Buffett to refer to a company’s competitive advantage, the word found its way into Silicon Valley pitch decks when a memo purportedly leaked from Google, titled “We Have No Moat, and Neither Does OpenAI,” fretted that open-source AI would pillage Big Tech’s castle.

A few years on, the castle walls remain safe. Apart from a brief bout of panic when DeepSeek first appeared, open-source AI models have not vastly outperformed proprietary models. Still, none of the frontier labs—OpenAI, Anthropic, Google—has a moat to speak of.

The company that does have a moat is Nvidia. CEO Jensen Huang has called it his most precious “treasure.” It is not, as you might assume for a chip company, a piece of hardware. It’s something called CUDA. What sounds like a chemical compound banned by the FDA may be the one true moat in AI.

CUDA technically stands for Compute Unified Device Architecture, but much like laser or scuba, no one bothers to expand the acronym; we just say “KOO-duh.” So what is this all-important treasure good for? If forced to give a one-word answer: parallelization.

Here’s a simple example. Let’s say we task a machine with filling out a 9×9 multiplication table. Using a computer with a single core, all 81 operations are executed dutifully one by one. But a GPU with nine cores can assign tasks so that each core takes a different column—one from 1×1 to 1×9, another from 2×1 to 2×9, and so on—for a ninefold speed gain. Modern GPUs can be even cleverer. For example, if programmed to recognize commutativity—7×9 = 9×7—they can avoid duplicate work, reducing 81 operations to 45, nearly halving the workload. When a single training run costs a hundred million dollars, every optimization counts.
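To make that column-splitting idea concrete, here is a minimal CUDA sketch, not taken from Nvidia or from the article, that fills the 9×9 table with one thread per column; the kernel name, launch configuration and host boilerplate are illustrative assumptions.

```cuda
// Minimal sketch (assumption, not Nvidia's code): the 9x9 multiplication-table
// example as a CUDA kernel. Each thread owns one column, mirroring the
// "each core takes a different column" description above.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fill_table(int *table, int n)
{
    int col = threadIdx.x;                  // this thread's column (0..n-1)
    if (col < n) {
        for (int row = 0; row < n; ++row)
            table[row * n + col] = (row + 1) * (col + 1);
    }
}

int main()
{
    const int n = 9;
    int *d_table;
    cudaMalloc((void **)&d_table, n * n * sizeof(int));

    // One block of n threads: the nine columns are computed concurrently.
    fill_table<<<1, n>>>(d_table, n);

    int h_table[n * n];
    cudaMemcpy(h_table, d_table, sizeof(h_table), cudaMemcpyDeviceToHost);
    cudaFree(d_table);

    for (int row = 0; row < n; ++row) {
        for (int col = 0; col < n; ++col)
            printf("%4d", h_table[row * n + col]);
        printf("\n");
    }
    return 0;
}
```

The commutativity refinement mentioned above, skipping 7×9 once 9×7 is known, would sit on top of this naive version by having each thread compute only the cells on or below the diagonal and mirror the rest.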

Nvidia’s GPUs were originally built to render graphics for video games. In the early 2000s, a Stanford PhD student named Ian Buck, who first got into GPUs as a gamer, realized their architecture could be repurposed for general high-performance computing. He created a programming language called Brook, was hired by Nvidia, and, with John Nickolls, led the development of CUDA. If AI ushers in the age of a permanent white-collar underclass and autonomous weapons, just know that it would all be because someone somewhere playing Doom thought a demon’s scrotum should jiggle at 60 frames per second.

CUDA is not a programming language in itself but a “platform.” I use that weasel word because, not unlike how The New York Times is a newspaper that’s also a gaming company, CUDA has, over the years, become a nested bundle of software libraries for AI. Each function shaves nanoseconds off single mathematical operations—added up, they make GPUs, in industry parlance, go brrr.

A modern graphics card is not just a circuit board crammed with chips and memory and fans. It’s an elaborate confection of cache hierarchies and specialized units called “tensor cores” and “streaming multiprocessors.” In that sense, what chip companies sell is like a professional kitchen, and more cores are akin to more grilling stations. But even a kitchen with 30 grilling stations won’t run any faster without a capable head chef deftly assigning tasks—as CUDA does for GPU cores.

To extend the metaphor, hand-tuned CUDA libraries optimized for one matrix operation are the equivalent of kitchen tools designed for a single job and nothing more—a cherry pitter, a shrimp deveiner—which are indulgences for home cooks but not if you have 10,000 shrimp guts to yank out. Which brings us back to DeepSeek. Its engineers went below this already deep layer of abstraction to work directly in PTX, a kind of assembly language for Nvidia GPUs. Let’s say the task is peeling garlic. An unoptimized GPU would go: “Peel the skin with your fingernails.” CUDA can instruct: “Smash the clove with the flat of a knife.” PTX lets you dictate every sub-instruction: “Lift the blade 2.35 inches above the cutting board, make it parallel to the clove’s equator, and strike downward with your palm at a force of 36.2 newtons.”
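For a flavour of what working a level below CUDA looks like, here is a hedged sketch using Nvidia’s documented inline-PTX syntax to issue a single mul.lo.s32 instruction by hand inside a kernel; the function and kernel names are illustrative assumptions, and this is emphatically not DeepSeek’s actual code.

```cuda
// Illustrative only: hand-writing one PTX instruction inside a CUDA kernel.
// mul.lo.s32 multiplies two 32-bit integers and keeps the low 32 bits of the
// result -- the kind of per-instruction control PTX exposes.
__device__ int mul_ptx(int a, int b)
{
    int c;
    asm("mul.lo.s32 %0, %1, %2;" : "=r"(c) : "r"(a), "r"(b));
    return c;
}

__global__ void times_table_ptx(int *table, int n)
{
    int col = threadIdx.x;                  // one thread per column, as before
    if (col < n)
        for (int row = 0; row < n; ++row)
            table[row * n + col] = mul_ptx(row + 1, col + 1);
}
```

Dropped into the multiplication-table example above, this produces the same result; the point is simply that PTX exposes the individual instructions that CUDA’s libraries normally choose for you.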


