Edge AI: Business cost, risk and control | Computer Weekly

Over the past few years, edge artificial intelligence (AI) has quickly transformed from a niche technology to a vital and strategic necessity. This is mainly because it helps resolve or minimise some of the key bottlenecks of traditional cloud-based AI. These include data volume, latency, privacy and cost, among others, while allowing companies to make instant decisions to keep up with modern and increasingly automated operations. 

As a result, the deployment of edge AI is no longer only a technical architecture choice, but one that is actively reshaping risk, cost, compliance and responsibility for enterprises. Businesses are increasingly choosing to store sensitive information mainly on local networks, instead of relying on cloud providers, which has further driven the growth of edge AI.

Rather than asking whether or not to adopt edge AI, the crucial question for most companies is how to do so without creating new security, cost and governance issues. Because the technology is still relatively new, many companies risk implementing edge AI simply to jump on the AI bandwagon, without being fully aware of which situations can most benefit from it. 

“Edge AI attracts a lot of enthusiasm because it enables real-time, autonomous decisions. However, the real danger is a false sense of technological maturity,” notes Michaël Bikard, professor of strategy at the Insead business school. “Edge AI can work well locally while producing fragile outcomes at the system level. Historically, that’s when failures occur. Not because the technology fails, but because it is trusted too early, before institutions, organisations and governance are ready.”

As such, understanding the consequences of edge AI deployment is paramount to deciding long-term strategy. 

Why businesses are moving from cloud-first to hybrid

Businesses are increasingly choosing a more hybrid AI approach over a cloud-first strategy, driven mainly by larger and more complex AI workloads. Many firms have also been disappointed by the savings achieved through a full public cloud strategy, instead facing sharply rising operational costs. 

These costs, exacerbated by data-heavy applications, mainly arose from moving large datasets to and from the cloud and between providers. Surprise fees and unpredictable bills have further strained IT budgets and complicated forecasting.


With edge AI, on the other hand, companies can run stable and predictable workloads on-premise far more cheaply than in the cloud. 

Latency is another overarching concern. Edge AI can often outperform the cloud at minimising latency for applications that need real-time, high-speed processing, such as operational control systems and local analytics. 

In highly regulated industries such as finance and healthcare, some data may only be stored within certain jurisdictions, which has further driven the shift to edge AI or on-premise solutions.

Relying on a single major cloud provider can also lead to supplier lock-in, while multicloud environments are increasingly complicated to manage, both of which are pushing firms towards hybrid approaches.

A hybrid strategy lets companies use public cloud to train and update applications which need to scale fast, while keeping high-volume, sensitive or stable data on-premise. This allows organisations to balance agility, cost efficiency and operational resilience, especially in a global context where real-time intelligence is increasingly valuable. 

Edge AI business drivers: What’s real and what’s noise 

At present, most businesses using edge AI have adopted the technology to meet practical operational needs. Successful deployments have focused on solving specific limitations of cloud-only approaches, rather than trying to overhaul entire company tech infrastructures.

The need for real-time decision-making has been the primary driver of edge AI adoption, especially in sectors such as infrastructure, logistics, manufacturing and transport. This is particularly so because latency can have far-reaching operational and financial consequences, which the technology can significantly reduce. 

Applying edge AI in these sectors lets companies process data closer to where it is generated, enabling them to keep reacting quickly even when central connectivity is lost.

The technology also helps organisations dealing with sensitive data stay legally and financially compliant in jurisdictions with especially strict data storage laws. 

For companies running critical operations, edge AI can greatly improve operational resilience by ensuring that data and intelligence are distributed across multiple locations. This reduces dependence on centralised systems, which in turn lessens the impact of outages.

However, some business drivers are vastly overestimated when it comes to justifying edge AI adoption. The biggest of these is short-term cost savings. Edge AI can certainly cut down on transfer and cloud data consumption costs in the long run.

However, it initially needs significant capital expenditure, mainly in the form of hardware device upgrades. There are also ongoing maintenance, monitoring and software update costs following implementation. In some cases, integration with legacy systems may be slower than expected and businesses may have to hire specialised labour as well. Edge AI systems also use considerable amounts of power, leading to higher energy bills.

These factors can all cause costs to be higher in the first few months, requiring businesses to have a long-term view when it comes to seeing strategic benefits from edge AI.

Another notion that is often overestimated is that edge AI can deliver anything like “super-intelligence” by running huge, complicated models the way datacentre graphics processing units do. Given the computing and power restrictions that apply in most cases, however, this scenario is highly unlikely at the moment.

Similarly, expectations of businesses being able to switch entirely to edge AI, instead of a hybrid approach, are also unrealistic, mainly because of practical deployment, integration and maintenance limitations across various locations. 

How edge AI is changing security, governance and ownership

As edge AI becomes more embedded in hybrid business tech strategies, risk management, enterprise security and governance are also changing, moving away from centralised IT control. These areas are now being shaped by local operational teams taking increasingly autonomous decisions, factoring in the real-time conditions of critical physical infrastructure.

Rising edge AI usage could heighten security concerns as well, as it widens organisational attack surfaces through multiple distributed devices and infrastructure. These then need to be protected, monitored and updated equally, following a set of standard guidelines, despite each of them presenting their own unique limitations. 


“AI systems can perform exceptionally well under conditions similar to their training data, yet fail abruptly under rare, extreme, or novel scenarios – precisely the situations that matter most in critical infrastructure,” remarks Florian Stahl, chair of quantitative marketing and consumer analytics at Mannheim Business School.

Patch management can pose more issues with edge AI as well, with thousands of endpoints and vulnerabilities causing potential delays and discrepancies in maintenance. 

With edge AI being all about local deployments, more questions around version control, oversight and audit issues can arise. This means that companies may need to maintain more in-depth and regular records about data inputs, decision-making processes and operational factors. Highly regulated industries may especially demand evidence trails and seek greater accountability, which can impact company reputations and licences. 

“Real-time AI systems, particularly those based on machine learning, often operate as ‘black boxes’, making it difficult to explain or audit decisions when failures occur. This lack of transparency is problematic in infrastructures where accountability and post-incident analysis are essential,” Stahl adds. 

As autonomous decisions taken locally can have very real financial, safety and compliance consequences, businesses may be compelled to take accountability far more seriously if they choose to use edge AI. 

Senior leadership may also need to adapt centralised organisational and governance models to a more distributed intelligence strategy, all while keeping costs low. 

These factors have led to edge AI becoming a structural change just as much as a technical one, impacting how and where decisions are taken, how risk is evaluated and overall accountability.

What leaders should consider before implementing edge AI 

Given the considerable initial investment required by most edge AI models, leaders should prioritise long-term strategic impact rather than the hype of the latest technology. This means that, when evaluating company readiness, the potential scope of the intended edge AI deployment matters as much as its timing.

The biggest factor to consider is which processes or systems are most likely to benefit from using edge AI first and which can wait for a few more months. Ideally, businesses should prioritise any processes where latency, operational risk and data locality are most critical. By doing this, organisations can spread out costs while testing new deployments in a relatively lower-risk manner. 

“Importantly, organisations should evaluate AI deployments not only through efficiency metrics, but also through risk-adjusted performance indicators, recognising that marginal efficiency gains are rarely justified if they introduce disproportionate systemic or ethical risks,” Stahl advises.

The next question is: to scale or not to scale? In several cases, a pilot edge AI deployment is either enough for the short-term, does not deliver the expected results, or highlights many hidden costs and operational issues. 

In these cases, decision-makers need to evaluate whether it is worth taking the risk to scale, which will need more investment, specialised skills and manpower.

However, knowing when not to use edge AI, and when it could cause more harm than good, is equally important for businesses. This is primarily in cases where data volumes are still low, latency is not crucial, or the company does not have the means to appropriately handle several distributed endpoints.

“Edge AI should not be deployed in sectors where use cases are broad, stakes are high, and the consequences of errors are poorly understood,” Insead’s Bikard states. “That combination usually signals a timing problem rather than a technological one. In open, highly interconnected environments, even small mistakes can cascade before organisations have time to respond.”

In such cases, exercising strategic restraint is far more conducive to long-term value. 

From tech choice to organisational shift

Ultimately, implementing edge AI models should be primarily focused on delivering long-term, strategic value, rather than a trend-based decision. This is especially true if latency and real-time data analysis pose real risks. Businesses need to consider that edge AI use is likely to reshape everything from cost structures and decision-making to autonomy and risk, and prepare accordingly.

“There are real potential gains from using AI for predictive maintenance, but those gains rarely come from the technology alone. For AI to pay off, the surrounding organisation – its incentives, culture, structures and skills – must also adapt. Predictions only create value if people are empowered to act on them,” Bikard concludes. 

Enterprises that treat edge AI as a wholesale operational shift, rather than an independent feature tacked onto legacy systems, will be better placed to take advantage of it in the long run.




AI-Designed Drugs by a DeepMind Spinoff Are Headed to Human Trials



Google DeepMind’s AlphaFold has already revolutionized scientists’ understanding of proteins. Now, the ability of the platform to design safe and effective drugs is about to be put to the test.

Isomorphic Labs, the UK-based biotech spinoff of Google DeepMind, will soon begin human trials of drugs designed by its Nobel Prize–winning AI technology. “We’re gearing up to go into the clinic,” Isomorphic Labs president Max Jaderberg said on April 16 at WIRED Health in London. “It’s going to be a very exciting moment as we go into clinical trials and start seeing the efficacy of these molecules.”

Jaderberg did not elaborate on the timeline, but it’s later than the company had planned to initiate human studies. Last year, CEO Demis Hassabis said it would have AI-designed drugs in clinical trials by the end of 2025.

Isomorphic Labs was founded in 2021 as a spinoff from Alphabet’s AI research subsidiary, Google DeepMind. The company uses DeepMind’s AlphaFold, a groundbreaking AI platform that predicts protein structures, for drug discovery.

Built from 20 different amino acids, proteins are essential for all living organisms. Long strings of amino acids link together and fold up to make a protein’s three-dimensional structure, which dictates the protein’s function. Researchers had tried to predict protein structures since the 1970s, but this was a painstaking process given the astronomically high number of possible shapes a protein chain can take.

That changed in 2020, when DeepMind’s Hassabis and John Jumper presented stunning results from AlphaFold 2, which uses deep-learning techniques. A year later, the company released an open-source version of AlphaFold available to anyone.

In 2024, DeepMind and Isomorphic Labs released AlphaFold 3, which advanced scientists’ understanding of proteins even further. It moved beyond modeling proteins in isolation to predicting other important molecules, such as DNA and RNA, and their interactions with proteins.

“This is exactly what you need for drug discovery: You need to see how a small molecule is going to bind to a drug, how strongly, and also what else it might bind to,” Hassabis told WIRED at the time.

Since its release, the AlphaFold platform has been able to predict the structure of virtually all the 200 million proteins known to researchers and has been used by more than 2 million people from 190 countries. The breakthrough earned Hassabis and Jumper the Nobel Prize for chemistry in 2024, with the Nobel committee noting that AlphaFold has enabled a number of scientific applications, including a better understanding of antibiotic resistance and the creation of images of enzymes that can decompose plastic.

Earlier this year, Isomorphic Labs announced an even more powerful tool: IsoDDE, its proprietary drug-design engine. In a technical paper, the company touts that the platform more than doubles the accuracy of AlphaFold 3.

The startup has formed partnerships with Eli Lilly and Novartis to work together on AI drug discovery and is also advancing its own “broad and exciting pipeline of new medicines” in oncology and immunology, Jaderberg said.

“The exciting thing about the molecules that we’re designing is because we have so much more of an understanding about how these molecules work, we’ve engineered them to be very, very potent,” Jaderberg told the audience at WIRED Health. “You can take them at a much lower dose, and they’ll have lower side effects, off-target effects.”

Last year, Isomorphic appointed a chief medical officer and announced it had raised $600 million in its first funding round to gear up for clinical trials. Meanwhile, the company has been building a clinical development team. Its mission is to “solve all disease.”

“It’s a crazy mission,” Jaderberg said. “But we really mean it. We say it with a straight face, because we believe this should be possible.”




London Marathon runners get AI to go the extra mile | Computer Weekly



With huge crowds set to descend on London for the city’s iconic marathon this weekend, IT services provider Tata Consultancy Services (TCS), in partnership with Neurun, has launched a map-based tool powered by artificial intelligence (AI) to help participants and spectators navigate the event.

TCS RunConcierge is said to act as a “digital brain” for the London Marathon, bringing together official guidance, route support and course information in real time – a useful tool for this mass participation event, which saw more than 56,000 runners cross the finish line in 2025 and hundreds of thousands of spectators lining the 26.2-mile route.

Powered by Google Gemini, the platform is designed to deliver instant and reliable guidance for users, whether that be runners seeking information about start line logistics or the location of drinks stops – which will be very much needed with wall-to-wall sunshine forecast on the day – or supporters wishing to locate the best spot from which to cheer on participants or travel as quickly as possible between viewing points.

Users can see their current location on the map, ask for directions to key event destinations and access pre-loaded routes with direct links to Google Maps navigation. The tool also suggests personalised follow-up questions and features voice activation to enable hands-free use on the move. And with 60 languages supported, visitors from all over the world will be able to benefit from the event guidance.

For runners specifically, the immersive 3D map includes an elevation tracker, which could help them plan their strategy.

The partnership between TCS and Neurun is said to be built on a foundation of continuous innovation. New back-end capabilities include a self-serve admin portal that allows event organisers to manage RunConcierge independently, as well as an internal AI agent that tests the platform to help maintain content quality and identify improvements.

Vinay Singhvi, head of UK and Ireland at Tata Consultancy Services, described the London Marathon as a monumental event, adding that the company’s goal is to use technology to make the experience as seamless and enjoyable as possible.

“Our partnership with Neurun allows us to innovate at pace, and the enhanced TCS RunConcierge is a prime example of how we are using AI to solve complex logistical challenges, providing runners and spectators with a trusted companion for the moments that matter most,” he said.

Neurun founder Cade Netscher said its partnership with TCS had been instrumental in developing the RunConcierge tool for the world’s most prestigious marathons, with previous successful deployments at the Sydney and New York City events.

“For London, we’ve integrated the latest AI advancements to create our most powerful and user-friendly version yet. We are excited to see how it helps thousands of people enjoy a more connected and stress-free marathon weekend,” he said.

Separately, in a demonstration of digital healthcare technology in action, TCS has created a digital twin of a para-athlete’s heart, which uses sensors and AI to monitor her heart during training sessions.

The para-athlete, Milly Pickles, is aiming to complete the London Marathon in under four-and-a-half hours next year, and is harnessing digital healthtech to reach her goal.




Why Do I Like Dyson’s PencilVac So Much?



The vacuum connects to Dyson’s app, where you’ll find resources such as how to empty the dustbin and wash the filter, but not much else. It can tell you how long your last vacuuming session was, but no other details, so it’s not as interesting or as informative as the data you’d get from a robot vacuum.

Fluffy Face


This vacuum’s full name is the Dyson PencilVac Fluffycones, aptly named for the four fluffy cones inside the vacuum head. Dyson’s recent stick vacuums all have the Fluffy Optic cleaner head for vacuuming hard floors. While both have a fluffy roller bar, the Fluffycones have a conical shape that Dyson says will detangle and remove hair rather than letting the hair get stuck all around it. It did detangle hair for me, but when I vacuumed up larger portions of hair from my bathroom floor (a place where many a stray hair comes to die at the hands of my hairbrush, comb, and towel), it actually bunched up the hair into a ball and spat it back out a few times before finally sucking it up into the dustbin.


While the hair results weren’t great, I did love this vacuum for sucking up the cat litter that constantly plagues my home. It did a great job with flour on my hard floors and a solid job with dry oats, but it occasionally just bumped the oats around instead of immediately sucking them up. I was even able to quickly run it over the top of my carpet, but rolling back and forth on the carpet a bunch did stop the cones.

The head is designed to move in just about any direction. The cones make it easy to swivel around, and the green illuminating lights on the front and back help you spot any debris you might otherwise miss. With its compact size that fits in tricky corners, the PencilVac finally lets me vacuum up all the litter around the base of my toilet and pedestal sink. It’s part of what makes me reach for this vacuum over and over, even after my robot vacuum cleaned the day before.

Forward Momentum


Do I think this vacuum replaces Dyson’s existing cordless options? No. But Dyson has other new vacuums planned that could do that. This vacuum has a specific design for a specific use: smaller homes with entirely hard floors. There’s an accessibility opportunity here, too. This lightweight vacuum can be much easier to use for folks with mobility and strength restrictions. The magnetic charging base also makes it easy to store and access for a variety of people, whether they struggle with fine motor skills or can’t bend over and grab the vacuum.


