Over the past few years, edge artificial intelligence (AI) has quickly transformed from a niche technology into a strategic necessity. This is mainly because it helps resolve or minimise some of the key bottlenecks of traditional cloud-based AI – data volume, latency, privacy and cost among them – while allowing companies to make the instant decisions that modern, increasingly automated operations demand.
As a result, the deployment of edge AI is no longer only a technical architecture choice, but one that is actively reshaping risk, cost, compliance and responsibility for enterprises. Businesses are increasingly choosing to store sensitive information on local networks instead of relying on cloud providers, which has further driven the growth of edge AI.
Rather than asking whether or not to adopt edge AI, the crucial question for most companies is how to do so without creating new security, cost and governance issues. Because the technology is still relatively new, several companies risk implementing edge AI simply to jump on the AI bandwagon, without fully understanding which situations stand to benefit most from it.
“Edge AI attracts a lot of enthusiasm because it enables real-time, autonomous decisions. However, the real danger is a false sense of technological maturity,” notes Michaël Bikard, professor of strategy at the Insead business school. “Edge AI can work well locally while producing fragile outcomes at the system level. Historically, that’s when failures occur. Not because the technology fails, but because it is trusted too early, before institutions, organisations and governance are ready.”
As such, understanding the consequences of edge AI deployment is paramount to deciding long-term strategy.
Why businesses are moving from cloud-first to hybrid
Businesses are increasingly choosing a hybrid AI approach over a cloud-first strategy, driven mainly by larger and more complex AI workloads. Many firms have also been disappointed by the savings achieved from a full public cloud strategy, finding themselves faced instead with sharply rising operational costs.
These costs, exacerbated by data-heavy applications, mainly arose from moving large datasets to and from the cloud and between providers. Surprise fees and unpredictable bills have further strained IT budgets and complicated budgeting and forecasting.
On the other hand, with edge AI, companies can run stable, predictable workloads on-premise far more cheaply than in the cloud.
Latency is another overarching concern. Edge AI can often beat the cloud at minimising latency for applications that need real-time, high-speed processing, such as operational control systems and local analytics.
In highly regulated industries such as finance and healthcare, some data may only be stored within certain jurisdictions, which has further driven the shift to edge AI or on-premise solutions.
Relying on a single major cloud provider can also mean supplier lock-in, while multicloud environments are increasingly complicated to manage – both of which push organisations towards hybrid approaches.
A hybrid strategy lets companies use public cloud to train and update applications which need to scale fast, while keeping high-volume, sensitive or stable data on-premise. This allows organisations to balance agility, cost efficiency and operational resilience, especially in a global context where real-time intelligence is increasingly valuable.
Edge AI business drivers: What’s real and what’s noise
At present, most businesses using edge AI have adopted the technology due to practical operational needs. Successful deployments have focused on solving specific, cloud-only limitations, rather than trying to overhaul entire company tech infrastructures.
The need for real-time decision-making has primarily driven edge AI adoption, especially in sectors such as infrastructure, logistics, manufacturing and transport. This is especially true where latency can have far-reaching operational and financial consequences, which the technology can significantly reduce.
Applying edge AI in these sectors lets companies process data closer to where it is generated, enabling them to react faster and to keep operating when central connectivity is lost.
The technology also helps organisations dealing with sensitive data stay legally and financially compliant in jurisdictions with especially strict data storage laws.
For companies working on critical operations, edge AI can greatly improve operational resilience by making sure that data and intelligence are distributed throughout a number of locations. This helps reduce dependence on centralised systems, which in turn decreases the impact of outages.
However, some business drivers are vastly overestimated when it comes to justifying edge AI. The biggest of these is short-term cost savings. Edge AI can certainly cut transfer and cloud data consumption costs in the long run.
However, it initially needs significant capital expenditure, mainly in the form of hardware device upgrades. There are also ongoing maintenance, monitoring and software update costs following implementation. In some cases, integration with legacy systems may be slower than expected and businesses may have to hire specialised labour as well. Edge AI systems also use considerable amounts of power, leading to higher energy bills.
These factors can all cause costs to be higher in the first few months, requiring businesses to have a long-term view when it comes to seeing strategic benefits from edge AI.
Another often-overestimated notion is that edge AI can deliver anything like “super-intelligence” by running the huge, complex models that datacentre graphics processing units handle. Given current computing and power restrictions, this scenario is highly unlikely in most cases at the moment.
Similarly, expectations of businesses being able to switch entirely to edge AI, instead of a hybrid approach, are also unrealistic, mainly because of practical deployment, integration and maintenance limitations across various locations.
How edge AI is changing security, governance and ownership
As edge AI becomes more embedded in hybrid business tech strategies, risk management, enterprise security and governance are also changing, moving away from centralised IT control. These areas are now being shaped by local operational teams taking increasingly autonomous decisions, factoring in the real-time conditions of critical physical infrastructure.
Rising edge AI usage could also heighten security concerns, as it widens organisational attack surfaces through multiple distributed devices and infrastructure. These then need to be protected, monitored and updated consistently, following a standard set of guidelines, even though each presents its own unique limitations.
“AI systems can perform exceptionally well under conditions similar to their training data, yet fail abruptly under rare, extreme, or novel scenarios – precisely the situations that matter most in critical infrastructure,” remarks Florian Stahl, chair of quantitative marketing and consumer analytics at Mannheim Business School.
Patch management can pose more issues with edge AI as well, with thousands of endpoints and vulnerabilities causing potential delays and discrepancies in maintenance.
Because edge AI centres on local deployments, more questions can arise around version control, oversight and auditing. This means companies may need to maintain more in-depth and regular records of data inputs, decision-making processes and operational factors. Highly regulated industries may especially demand evidence trails and seek greater accountability, which can impact company reputations and licences.
“Real-time AI systems, particularly those based on machine learning, often operate as ‘black boxes’, making it difficult to explain or audit decisions when failures occur. This lack of transparency is problematic in infrastructures where accountability and post-incident analysis are essential,” Stahl adds.
As autonomous decisions taken locally can have very real financial, safety and compliance consequences, businesses may be compelled to take accountability far more seriously if they choose to use edge AI.
Senior leadership may also need to adapt centralised organisational and governance models to a more distributed intelligence strategy, all while keeping costs low.
These factors have made edge AI as much a structural change as a technical one, impacting how and where decisions are taken, how risk is evaluated and where accountability lies.
What leaders should consider before implementing edge AI
Given the considerable initial investment required by most edge AI models, leaders should prioritise long-term strategic impact over the hype of the latest technology. This means that when evaluating company readiness, the potential scope of the intended edge AI model matters as much as the timing.
The biggest factor to consider is which processes or systems are most likely to benefit from using edge AI first and which can wait for a few more months. Ideally, businesses should prioritise any processes where latency, operational risk and data locality are most critical. By doing this, organisations can spread out costs while testing new deployments in a relatively lower-risk manner.
“Importantly, organisations should evaluate AI deployments not only through efficiency metrics, but also through risk-adjusted performance indicators, recognising that marginal efficiency gains are rarely justified if they introduce disproportionate systemic or ethical risks,” Stahl advises.
The next question is: to scale or not to scale? In several cases, a pilot edge AI deployment is either enough for the short-term, does not deliver the expected results, or highlights many hidden costs and operational issues.
In these cases, decision-makers need to evaluate whether it is worth taking the risk to scale, which will need more investment, specialised skills and manpower.
However, knowing when not to use edge AI, and when it could cause more harm than good, is equally important for businesses. This is primarily in cases where data volumes are still low, latency is not crucial, or the company does not have the means to appropriately handle several distributed endpoints.
“Edge AI should not be deployed in sectors where use cases are broad, stakes are high, and the consequences of errors are poorly understood,” Insead’s Bikard states. “That combination usually signals a timing problem rather than a technological one. In open, highly interconnected environments, even small mistakes can cascade before organisations have time to respond.”
In such cases, exercising strategic restraint contributes far more to long-term value.
From tech choice to organisational shift
Ultimately, implementing edge AI models should be a decision focused on delivering long-term, strategic value, rather than one driven by trends. This is especially true where latency and real-time data analysis pose real risks. Businesses need to recognise that edge AI is likely to reshape everything from cost structures and decision-making to autonomy and risk, and prepare accordingly.
“There are real potential gains from using AI for predictive maintenance, but those gains rarely come from the technology alone. For AI to pay off, the surrounding organisation – its incentives, culture, structures and skills – must also adapt. Predictions only create value if people are empowered to act on them,” Bikard concludes.
Enterprises that treat edge AI as a wholesale operational shift, rather than an independent feature tacked onto legacy systems, will be better placed to take advantage of it in the long run.
Google DeepMind’s AlphaFold has already revolutionized scientists’ understanding of proteins. Now, the ability of the platform to design safe and effective drugs is about to be put to the test.
Isomorphic Labs, the UK-based biotech spinoff of Google DeepMind, will soon begin human trials of drugs designed by its Nobel Prize–winning AI technology. “We’re gearing up to go into the clinic,” Isomorphic Labs president Max Jaderberg said on April 16 at WIRED Health in London. “It’s going to be a very exciting moment as we go into clinical trials and start seeing the efficacy of these molecules.”
Jaderberg did not elaborate on the timeline, but it’s later than the company had planned to initiate human studies. Last year, CEO Demis Hassabis said it would have AI-designed drugs in clinical trials by the end of 2025.
Isomorphic Labs was founded in 2021 as a spinoff from Alphabet’s AI research subsidiary, Google DeepMind. The company uses DeepMind’s AlphaFold, a groundbreaking AI platform that predicts protein structures, for drug discovery.
Built from 20 different amino acids, proteins are essential for all living organisms. Long strings of amino acids link together and fold up to make a protein’s three-dimensional structure, which dictates the protein’s function. Researchers had tried to predict protein structures since the 1970s, but this was a painstaking process given the astronomically high number of possible shapes a protein chain can take.
That changed in 2020, when DeepMind’s Hassabis and John Jumper presented stunning results from AlphaFold 2, which uses deep-learning techniques. A year later, the company released an open-source version of AlphaFold available to anyone.
In 2024, DeepMind and Isomorphic Labs released AlphaFold 3, which advanced scientists’ understanding of proteins even further. It moved beyond modeling proteins in isolation to predicting other important molecules, such as DNA and RNA, and their interactions with proteins.
“This is exactly what you need for drug discovery: You need to see how a small molecule is going to bind to a drug, how strongly, and also what else it might bind to,” Hassabis told WIRED at the time.
Since its release, the AlphaFold platform has been able to predict the structure of virtually all the 200 million proteins known to researchers and has been used by more than 2 million people from 190 countries. The breakthrough earned Hassabis and Jumper the Nobel Prize for chemistry in 2024, with the Nobel committee noting that AlphaFold has enabled a number of scientific applications, including a better understanding of antibiotic resistance and the creation of images of enzymes that can decompose plastic.
Earlier this year, Isomorphic Labs announced an even more powerful tool: IsoDDE, its proprietary drug-design engine. In a technical paper, the company claims the platform more than doubles the accuracy of AlphaFold 3.
The startup has formed partnerships with Eli Lilly and Novartis to work together on AI drug discovery and is also advancing its own “broad and exciting pipeline of new medicines” in oncology and immunology, Jaderberg said.
“The exciting thing about the molecules that we’re designing is because we have so much more of an understanding about how these molecules work, we’ve engineered them to be very, very potent,” Jaderberg told the audience at WIRED Health. “You can take them at a much lower dose, and they’ll have lower side effects, off-target effects.”
Last year, Isomorphic appointed a chief medical officer and announced it had raised $600 million in its first funding round to gear up for clinical trials. Meanwhile, the company has been building a clinical development team. Its mission is to “solve all disease.”
“It’s a crazy mission,” Jaderberg said. “But we really mean it. We say it with a straight face, because we believe this should be possible.”
Security leaders should be turning offensive AI cyber tools on their own systems before threat actors do, exploiting the innate defenders’ advantage to attain the high ground and increase their chances of withstanding a cyber attack.
So says Yinon Costica, co-founder of Google-owned Wiz, who, speaking at Google Cloud Next in Las Vegas, argued that defenders can win against attackers by using AI to exploit an advantage that may not appear obvious at first glance, that of context.
“The same AI model can obviously produce very different results based on the context that we feed into it,” said Costica. “Now, attackers hopefully have much less context about us while as defenders we do have a lot of context about our environments that we can share with the model.
“If, as defenders, we take the first movers’ advantage and we use the AI against ourselves, with the context we have, we actually stand a chance to win…. But we need to act fast,” he said.
“We need to start using AI against ourselves as much as possible, whether it’s to scan attack surfaces, scan code, scan anything, in order to be the first one to see the results and not to wait for the bad guys to do it before us.”
As speed becomes ever more of the essence in cyber security, Costica conceded that this would be a challenge for defenders – but noted that the tools to do this are rapidly becoming available. To try to help, Wiz unveiled three new AI agents at Google Cloud Next – red, green and blue – which are named for the human cyber teams they are designed to help.
“What agents allow us to do is really to get to the next level of acceleration [and] automation of security work,” said Costica.
The red agent is designed to assist red team penetration testing by probing deep into its owner’s IT estate, identifying potential exposures – such as application programming interfaces (APIs), end-of-life edge networking kit or operational technology (OT) assets – and running penetration tests on them. The green agent follows on by automating the triage process, something that can take human teams a long time. Finally, the blue agent acts as a detective, doing the investigative work that can also be a lengthy process for human teams.
“These three agents together form a layer that is autonomous and automated. It’s not revolutionary in that it aligns closely to how security teams have been working for many years, but now it allows each team to automate their workflows,” said Costica.
“It’s like living in the future in the eyes of security teams because it means that from the moment they find a risk, they can automate the process to find who owns it and deliver the code fix to complete and redeploy to production.”
A little over a month on from the closure of the $32bn acquisition of Wiz – Google’s largest purchase to date – the two organisations reaffirmed their commitment to providing a unified security platform, retaining Wiz’s brand, that will enhance the speed with which customers detect, prevent and respond to threats, especially emerging ones created using AI.
The duo also claim their combined capability will accelerate adoption of multicloud security and spur more confidence in innovation around cloud and AI. Wiz’s products will continue to be made available across other platforms, including Amazon Web Services (AWS), Microsoft Azure and Oracle Cloud. Wiz also announced support for Databricks and for agent studios such as AWS Agentcore, Microsoft Azure Copilot Studio and Salesforce Agentforce, as well as, of course, the Gemini Enterprise Agent Platform. It continues to support security ecosystems with integrations at the outer layer of the cloud, including Google Cloud Apigee, Cloudflare AI Security for Apps and the Vercel platform.
Behind the scenes, Wiz has also updated how it integrates security detections from Wiz Defend with Google Security Operations and Mandiant Threat Defence to make life easier for human analysts.
And it announced new capabilities to secure the AI-native deployment cycle. These include scanning vibe-coded applications for issues; AI-generated code scanning and vulnerability remediation; agent-based remediation, allowing teams to automate remediation workflows; and an AI bill of materials (AI-BOM) to keep on top of the use of shadow AI for coding.