The key question regarding edge artificial intelligence (AI) is no longer about its vast business potential, but about where it can be most efficient and deliver faster, measurable results. Early uses across the manufacturing, retail and infrastructure sectors have to date focused on issues such as predictive machine maintenance, tailored, localised analytics in retail stores, and grid monitoring.
However, cost constraints, latency and data residency continue to require careful consideration by organisations looking to scale edge AI strategies.
“Early deployments should focus on narrow beachhead use cases where ethical, legal and security risks are limited – or clearly outweighed by the benefits,” observes Michaël Bikard, professor of strategy at the Insead business school. “That’s how new technologies have historically entered safety-sensitive domains.”
Edge AI is being used practically right now
Many global businesses have adopted edge AI in some capacity. However, most deployments remain relatively small and highly specialised, prioritising speed, reliability and energy efficiency over huge, datacentre-like models. They also depend significantly on human oversight and intervention.
Current deployments are designed to work within edge AI's limitations rather than to replicate ultra-sophisticated datacentre models. Most remain hybrid, with humans handling most of the training and performance evaluation, while the model handles local inference.
Edge AI systems are optimised to recommend the best course of action, rather than make fully independent decisions. In highly regulated or safety-critical businesses, humans still have the final say.
Successful deployments highlight that edge AI is less about more intelligent technology itself and more about ensuring that reliable decisions can be taken closer to where the data is generated.
What’s working: Predictive machine maintenance in manufacturing
Schneider Electric believes it has significantly advanced the industrial internet of things (IIoT) by using edge AI for real-time predictive maintenance on the factory floor, through local controllers, servers and devices. This is designed to improve operational efficiency while strengthening data security and decreasing latency as well.
The company uses edge AI systems to analyse factors such as real-time temperature, vibration and performance to predict machine issues before they occur, which helps decrease production stoppages.
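The local inference step can be pictured as a lightweight anomaly detector running on the controller itself. The following is a minimal sketch of that idea using a rolling z-score on vibration readings; the window size and threshold are illustrative assumptions, not Schneider Electric's actual implementation.

```python
from collections import deque
import statistics

class EdgeAnomalyDetector:
    """Rolling z-score detector: a minimal sketch of the kind of
    local inference an edge controller might run on sensor data.
    Window size and threshold are illustrative assumptions."""

    def __init__(self, window=50, z_threshold=3.0):
        self.readings = deque(maxlen=window)
        self.z_threshold = z_threshold

    def update(self, value):
        """Return True if the new reading deviates sharply from recent history."""
        if len(self.readings) >= 10:
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings)
            if stdev > 0 and abs(value - mean) / stdev > self.z_threshold:
                self.readings.append(value)
                return True  # flag for maintenance review
        self.readings.append(value)
        return False
```

In practice the flag would feed a human-reviewed maintenance queue rather than trigger an automatic shutdown, in line with the recommend-not-decide pattern described above.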
It also employs edge AI for automated inspection and image-based barcode reading, which improves product quality. The Cognex AI-based technology can detect objects and shapes, allowing conveyor cameras to automatically reject flawed products.
Schneider Electric also focuses on enhanced autonomous machine control through its EcoStruxure Automation Expert virtualised controller system. This connects shop floor IoT devices to edge controllers. The company also uses edge AI to increase yield by analysing variables in real time, reducing waste.
Automotive giant Renault has also deployed edge AI tools for predictive manufacturing maintenance. This is mainly achieved by supervising welding robots so that welding defects and anticipated failures are flagged in real time, minimising downtime.
Renault’s Industrial Metaverse uses edge AI heavily to analyse real-time data from 12,000 connected machines, which strengthens production lines. This is said to have helped Renault Group save €270m in 2023. Similarly, Renault’s autonomous control systems conduct visual inspections through edge AI, further freeing up operator time.
“Predictive maintenance has emerged as one of the most commercially successful AI use cases; however, technology alone is insufficient. Stalled or underperforming deployments may cite poor data integration, fragmented ownership, or constraints from legacy systems as root causes,” says Himanshu Rai, director at IIM Indore. “Predictive maintenance succeeds when AI is embedded into operational workflows and decision processes, not deployed as a standalone analytics layer.”
Real-time inventory tracking and decreasing food waste in retail
Fast fashion retailer H&M has partnered with Avassa to use edge AI to modernise in-store facilities, streamline operational efficiency and improve customer experience. Another focus is making sure applications keep working even when connectivity is down.
One of the biggest uses of edge AI is through RFID-enabled tracking, a highly accurate system allowing inventory to be tracked with real-time data. This helps staff find in-store items immediately, significantly cutting down on customer wait times.
Other in-store edge AI deployments include smart mirrors in fitting rooms, which connect to local networks and deliver product recommendations. They let buyers see which items are in stock in real time and ask for other sizes if required, without having to leave the fitting room, which considerably enhances the customer experience.
Customers can also search for items using photos through the TensorFlow Lite edge AI system in the H&M app, with on-device inference keeping searches responsive.
H&M is partnering with Honeywell to use edge AI to optimise lighting, heating and air-conditioning across 90 European stores as well. By gathering data from smart meters and sensors, the system improves real-time energy usage, decreasing costs and carbon footprint at the same time.
Similarly, grocery giant Tesco has leaned heavily into edge AI with a recent three-year partnership with Mistral AI to optimise its supply chain and reduce food wastage. One of the models employs dynamic expiry pricing. The system evaluates expiry dates and how fresh produce is, and automatically reduces prices for items expiring soon.
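The dynamic expiry pricing logic can be sketched very simply: mark prices down in bands as an item approaches its expiry date. The discount bands below are illustrative assumptions, not Tesco's actual pricing rules.

```python
from datetime import date

def expiry_discount(expiry: date, today: date, base_price: float) -> float:
    """Return a marked-down price as an item nears expiry.
    The discount bands are illustrative assumptions only."""
    days_left = (expiry - today).days
    if days_left <= 0:
        return 0.0                           # remove from sale
    if days_left == 1:
        return round(base_price * 0.50, 2)   # 50% off on the last day
    if days_left <= 3:
        return round(base_price * 0.75, 2)   # 25% off when close
    return base_price
```

Running this at the edge means each store can reprice shelf labels from local stock data without a round trip to a central system.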
This has helped bring Tesco a step closer to its goal of reducing food waste, with wastage levels across UK operations down by 45% in 2025, compared with 2016/2017 levels. Another major deployment is real-time logistics and shipments tracking across more than 3,000 locations through solar-powered sensors. Tesco also saves 100,000 miles per week by using AI to search for the most efficient delivery routes.
Edge AI is used for product demand prediction as well, improving fresh produce shelf life, which decreases the risk of overstocking. This reduces the need for manual checking and improves inventory management across the board.
Self-checkout processes have been upgraded with edge AI too, with stores now including smart systems with cameras that use AI and computer vision to monitor real-time packaging behaviour and flag incorrectly scanned items.
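Stripped of the computer vision layer, the flagging step reduces to reconciling what the camera believes passed through the bagging area with what was actually scanned. The sketch below assumes clean item lists as input; real systems work on video streams and confidence scores, so this is a deliberate simplification.

```python
from collections import Counter

def flag_mis_scans(detected: list[str], scanned: list[str]) -> list[str]:
    """Return items the vision model saw that have no matching scan.
    Assumes clean item lists; a simplification of real systems, which
    reason over video frames and detection confidence scores."""
    missing = Counter(detected) - Counter(scanned)
    return sorted(missing.elements())
```

A non-empty result would typically prompt a staff check rather than block the transaction outright.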
Grid monitoring and maintenance in energy and infrastructure
Siemens Energy is transforming legacy grid infrastructure into active, intelligent networks through edge AI, enabling it to automatically handle rising demand and fluctuating renewable energy levels.
The process embeds AI into equipment such as substations, transformers and sensors, enabling predictive grid maintenance and real-time decision-making. Online sensor devices, such as the Sensformer advanced unit, keep tabs on high-voltage equipment and transformers.
Edge AI flags irregularities in temperature, vibration and torque through local data analysis. Operators can then maintain machines as per their current condition, rather than routine checks, avoiding expensive surprise downtimes.
Some sensors are virtual: physics-informed neural networks (PINNs) predict hotspots on components such as transformer bushings without any physical instrumentation.
Another edge AI deployment, dynamic line rating (DLR), analyses line data factors such as wind speed and temperature in real time and recalculates transmission line capacity accordingly. Compared with conservative static ratings, this unlocks 10% to 15% of additional capacity more than 90% of the time.
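The intuition behind DLR is a heat balance: cooler air and stronger wind carry more heat away from the conductor, so it can safely carry more current than its worst-case static rating assumes. The toy model below captures that shape only; real DLR implementations follow standards such as IEEE 738, and every coefficient here is an illustrative assumption.

```python
import math

def dynamic_line_rating(static_rating_amps: float,
                        ambient_c: float,
                        wind_speed_ms: float,
                        static_ambient_c: float = 40.0,
                        conductor_limit_c: float = 75.0) -> float:
    """Toy dynamic line rating: scale a static rating by the extra
    thermal headroom under actual conditions. Real DLR models (e.g.
    IEEE 738) balance convective, radiative and solar heat terms;
    the coefficients here are illustrative assumptions only."""
    # Cooler air widens the gap between ambient and the conductor limit.
    headroom = (conductor_limit_c - ambient_c) / (conductor_limit_c - static_ambient_c)
    # Stronger wind improves convective cooling, with diminishing returns.
    wind_factor = 1.0 + 0.1 * math.log1p(wind_speed_ms)
    # Resistive losses scale with I^2, so capacity scales with the
    # square root of the available heat headroom.
    return static_rating_amps * math.sqrt(max(headroom, 0.0) * wind_factor)
```

Under the static design conditions (40°C, no wind) the function returns the static rating unchanged; cooler, windier conditions yield a higher allowable current.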
Siemens also implemented intelligent substations for a hybrid approach, which processes data locally and shares only relevant information with the cloud, reducing bandwidth use.
“New technologies gain traction not by being universally superior, but by outperforming the status quo in narrow contexts. In infrastructure, that often means environments requiring continuous, real-time monitoring at a scale or speed that humans or centralised systems cannot sustain,” Bikard observes.
Similarly, Ørsted uses edge AI systems for wind farm optimisation, by analysing data from thousands of turbine sensors, which optimises predictive maintenance too. It also monitors and analyses localised weather patterns like cloud cover and sun intensity, using the technology to better boost battery storage utilisation and solar energy production.
Edge AI failures
Despite several successful edge AI deployments in the past few years, there are some models which have failed – often very publicly. McDonald’s AI-driven voice ordering trial, deployed across about 100 drive-throughs, was one such case. The fast-food chain launched a three-year partnership with IBM for this project in 2021, which ended in 2024 following several bad reviews.
Viral, embarrassing social media videos posted by customers highlighted the system misunderstanding orders, sometimes resulting in hundreds of dollars’ worth of food being included. Mistakes such as adding bacon to ice cream were also common.
Other problems included issues with background noise, different human accents and dialects, and unusual local requests.
What drove success – and where models broke down
Successful edge AI deployments across Schneider Electric, Tesco and Siemens Energy, among others, had one common trait: they all focused on extremely narrow processes, within broader organisational structures. Launched in very controlled environments, they only scaled incrementally, after rigorous testing and iterations.
“Each stage generates learning, not just about performance, but about failure modes, governance and acceptable risk. Those lessons make it possible to move from tightly controlled settings to more complex environments,” Bikard points out.
These models also have a very clear ownership and accountability structure, with specific people being responsible for outcomes or issues. These include operators, supervisors, production line managers, shop managers or similar.
Constant human supervision meant that any issues or downtime with the models could be immediately addressed with minimal repercussions. A hybrid approach between cloud and edge AI was consistently prioritised as well.
Successful deployments did not involve any absolutely critical processes either. Even in cases of predictive maintenance, both on factory floors and grids, their purpose was mainly to speed up and optimise the process, rather than take over completely.
On the other hand, one of the biggest pitfalls of the McDonald’s model was taking human oversight almost completely out of the loop and giving the system more autonomy than it was designed to handle as a pilot project. This allowed serious mistakes, such as hundreds of dollars of extra food being added to orders, to go nearly unchecked, with customers having little recourse.
Another mistake was launching the initial trial across around 100 locations, instead of a few well-monitored ones, exposing the system to far too much input variability at once from diverse human accents.
The model in question was also ill-suited to handling open-ended inputs, the kind to be expected in a fast-food restaurant, which sees a high volume of personalised requests.
Finally, McDonald’s being a well-recognised global brand also meant the company had a very small margin for error when launching new features, with any public failure likely to draw criticism from customers, thus requiring far more testing before launch.
“One key lesson is that data quality and domain expertise are more critical than algorithmic sophistication,” observes Florian Stahl, chair of quantitative marketing and consumer analytics at Mannheim Business School. Many early failures can be traced to poorly labelled data, sensor drift, or insufficient understanding of underlying physical processes.
What’s next?
As successful edge AI use cases increase, businesses are likely to move away from isolated experiments to more widespread deployments, through cameras, sensors, robots and other machines.
This may decrease cloud reliance while speeding up decision-making at the edge. However, the fundamental principle driving successful deployments will remain the same.
The most successful edge AI models will still be those that address highly specific tasks and scale incrementally, while having clear oversight, ownership and accountability structures, even if the number of endpoints grows.
“Framing adoption as a human-versus-AI contest misses where the real opportunities lie. What matters instead is identifying situations where existing solutions are clearly insufficient,” Bikard concludes.
The base of the lamp has two slider buttons. One toggle adjusts the warmth, from cold white light all the way to red. One adjusts the intensity, from ultra-bright down to a glareless glow. Hard taps on each button skip ahead, while holding the toggle down on one side or another adjusts the light settings quite slowly—slowly enough I at first sometimes question whether it’s happening.
The maximum brightness is 1,000 lumens—the approximate intensity of a 75-watt incandescent bulb. At this brightness, the battery lasts about five hours. At a lower intensity, this can extend to as long as a dozen hours.
Red light therapy is, of course, the province of TikTok as much as science—a field where wild exaggerations live alongside legitimate uses and benefits. For every sleep study showing that red light is superior to blue light when it comes to melatonin levels, there’s another showing that red light is associated with “negative emotions” before bed.
So I can only offer my own experience, which is that Edge Light Go’s red reading light offers me a pleasant liminal space between awake time and sleepy time, one not offered by a basic nightstand lamp. It allows me to sort of bask in a darkroom space that still lets me see and read, and drift off a little easier.
If I fall asleep, the light has an automatic 25-minute shut-off, which means I won’t do what I far too often do, which is drift off while reading and then wake up, alarmed, to a room filled with bright light in the middle of the night.
Caveats and Quirks
This said, for all the virtues of portability, the Edge Light Go does not boast a base that’s heavy enough to stop the lamp from tipping over if I bend it forward from its lowest hinge. This can be an annoyance when trying to use the lamp as a reading light from a bedside table or the arm of a couch.
For decades, engineering teams treated code like a vintage Ferrari – expensive to build, painstakingly maintained and too precious to ever throw away. Every line represented a significant investment of human capital and time, which led to a culture where code was cherished and its longevity was a marker of success.
But at the AWS Summit in London this week, Ryan Cormack, principal engineer at online used car marketplace Motorway, consigned that philosophy to the scrapyard. In the age of agentic artificial intelligence (AI)-driven software development, he says, engineering teams can become more productive and are able to build, revise and maintain code at speeds previously unthinkable.
In this article, we look at Motorway’s radical shift from manual coding to an AI-first development pipeline powered by AWS Kiro. Cormack talks about how the company achieved a 4x increase in engineering output, the challenges that come with the ability to produce more code, why the future of software development lies in treating code as disposable, and the core benefits of codifying organisational culture into AI steering files.
The mindset shift: Disposability vs polish
The most profound change at Motorway is not just speed of delivery but also a psychological break from the past. Historically, writing code was a “time-expensive process”, Cormack says, adding: “We wanted to have code that was so good that we could cherish it for years to come, because we had invested so much time into making it.”
But since starting to use Kiro – AWS’s agentic AI-capable IDE – that mindset became a bottleneck. “We shifted away from, ‘We need the most well-polished code for every line we write, all the time’, because we can rewrite it again tomorrow at a speed that’s never been possible before,” says Cormack.
This has led to a strategy of “evaluation over production”. Motorway now generates vast amounts of code – a million lines a month – much of which may never reach a customer, says Cormack. Instead, it is used to test and evaluate multiple different ways to solve a problem before committing to it.
The lesson for other organisations is clear. Don’t aim for a perfect first pass. Use AI to cycle through iterations, then use human expertise to refine exactly what you want from the options the AI helps provide.
Managing the ‘volume crisis’: Rigour over speed
While a 4x increase in output sounds like an engineering dream, it creates a real “review bottleneck”. If you write 400% more code but maintain 100% manual review processes, the system collapses. To combat this, Motorway hollowed out the “manual middle” of the development process and moved human energy to the ends of the process – namely, the spec and the review.
“We find ourselves spending more time planning code and the whole process up front, and a little bit more time reviewing what comes out,” Cormack says. “But we lose all this time in the middle where we previously had to manually write all the code.”
To ensure AI doesn’t just produce any code but “Motorway code”, the team utilises “steering files”. These files augment the AI’s system prompts with the company’s specific DNA. They are specific to Kiro and are markdown documents that contain instructions, standards and preferences to guide the AI behaviour and coding style.
They include, for example, naming conventions that standardise how application programming interfaces (APIs) are labelled across Motorway’s 7,500-dealer network, and design patterns that enforce specific software architectures.
By injecting these rules via the AI, generated code looks and feels like it was written by a veteran Motorway engineer.
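Since steering files are plain markdown documents, a hypothetical example gives a feel for the format. The conventions below are invented for illustration – Motorway's actual standards are not public:

```markdown
# API conventions (hypothetical steering file)

- Name REST endpoints with plural nouns: `/dealers/{id}/listings`, never `/getListing`.
- Every new service exposes a health check at `/healthz`.
- Use the repository pattern for data access; no raw SQL in handler code.
- Error responses follow the RFC 7807 problem+json format.
```

Because the file is injected into the model's context on every task, conventions only need to be written down once to apply to all generated code.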
And AI isn’t just used for the build; it’s used for the full lifecycle. “We need to use AI to help us debug, analyse, understand, and evaluate systems as they run,” Cormack adds, noting that agents now monitor logs and metrics to help humans manage a massive fleet of services.
The ‘Kiro’ engine and model agnosticism
A critical component of Motorway’s success is that Kiro acts as an agentic loop rather than just a simple “autocomplete” tool.
“Kiro knows how our CI pipelines work,” says Cormack. “It knows how our infrastructure is code-driven and it knows how our internal applications work together. It’s able to help guide us every step of the way.
“We’re using Kiro across our full software development lifecycle. Our product and UX teams can ship real prototypes into our customers’ hands quicker than we’ve ever been able to before. What would take weeks now takes hours.”
His team can leverage its model agnosticism too. Cormack explained they aren’t locked into a single LLM: “We use Kiro with Claude’s latest Opus 4.7 model, we use it with some of the open weight models, things like Meta’s Llama models … we’re able to selectively pick the LLM that we know is going to be able to best perform the specific task.”
This flexibility helps to mitigate the risk of hallucinations. Motorway relies on a spec-driven approach where the AI must think through the problem and generate a technical design before writing a single line.
“It will help us write automated tests that are able to prove that each of these points has been accurately done,” Cormack says. This means the AI provides its own proof of work before a human ever touches it.
Legacy transition from Heroku to AWS
Motorway wasn’t always this agile. The company was “born in the cloud”, on Heroku, which Cormack acknowledges was “great for scaling and getting going”. But as the company grew, it hit friction points.
The transition to AWS was driven by a need for “flexibility, adaptability, and scalability”, says Cormack, who views their Kiro-enabled AI-first pipeline as the ultimate tool for such transitions.
If he were to do things all over again, Cormack says he would “adopt this model of thinking much earlier on”. The ability to use AI to map migration logic and service dependencies would have saved months of manual effort during the move off their legacy platform, he believes.
Lessons for the boardroom
For organisations that want to replicate Motorway’s 250% increase in deployment frequency, Cormack warns against automating the grind of coding without also automating the rigour of testing.
“If you try to build just by writing code faster, it doesn’t solve the problems,” he says. “I don’t think our customers necessarily want code; they want features and functionality.”
The winners of the AI era won’t be the ones who write the most code, but the ones who build the most rigorous frameworks to manage its disposability.
As Cormack says: “Kiro’s now writing over a million lines of code for us every single month. So, before we start any new piece of work, our engineering team chooses Kiro to help understand exactly what it is that we want to build.
“The rigour at the start of this process helps enable the precision we want in our engineering at the end. So, every piece of work that we do starts with a spec, understanding the intent of what it is that we’re building and why.”
The 5G mobile market is moving beyond its initial land-grab phase and into a period shaped more by network quality, architectural maturity and service differentiation, according to a study from the Global mobile Suppliers Association (GSA).
The State of the market report – from the industry association representing companies in the global mobile ecosystem engaged in the supply of infrastructure, semiconductors, test equipment, devices, applications and support services – was based on market data taken up until the end of March 2026.
Among the key findings of the research was that global 5G expansion is still advancing, but the story is no longer just about adding more launches to the map; the more meaningful story is how the market is broadening.
It reported that 392 operators have now launched 5G networks, up 14% from March 2025 and representing 44% of all LTE and 5G networks. Spectrum remains the essential enabler of the next phase of 5G growth and, beyond that, 6G.
Indeed, the study showed that over the past year, 11 5G auctions have been completed across the world, for an average price of $663.4m. And as of the end of March 2026, there were 4,256 announced 5G devices in the market, up 24% from last year. In comparison, total LTE devices totalled 29,024.
5G Standalone was becoming the clearest marker of market maturity. Some 95 operators had launched a 5G Standalone service, a 42% increase since the first quarter of 2025. Development of 5G Advanced networks was still at an early stage, but the GSA stressed that its growth rate makes it one of the clearest signals of where the market is heading next. In total, 35 operators are investing in 5G Advanced, an increase of 71% since 2025. Of these operators and providers, 11 have launched a service.
Looking at one of the key use cases of 5G networks, one the industry has long expected to drive future growth, the study found that private mobile networks continue to demonstrate that 5G’s opportunity extends well beyond public consumer services. The manufacturing vertical is a strong adopter of mobile private networks, with 374 identified customer deployments, followed by the education and academic research sector, with 169 customer deployments.
Yet despite the promise of private 5G, the GSA’s report identified Fixed Wireless Access (FWA) as one of 5G’s strongest and most visible commercial success stories. The study found that 394 operators have launched a 5G fixed wireless service, with another 29 investing in the technology, an increase of 59% since June 2025.
The report also tracked the rapid growth of satellite-enabled mobile connectivity, which it said is moving from experiment to early commercial reality. Some 97 operators are investing in satellite-to-cell phone connectivity, and eight available chipsets are compatible with the technology.
Commenting on the study’s findings, Joe Barrett, president of the GSA, said: “The global 5G market is entering a more selective and strategic phase of development … This shift is most clearly visible in 5G Standalone, which now underpins much of the industry’s next wave of innovation, including 5G RedCap, network slicing and more advanced enterprise offers … These trends all point to a market that is no longer defined simply by how many 5G networks exist, but by what those networks are becoming.
“5G in 2026 will be shaped by standalone adoption, ecosystem readiness and the ability of operators to translate technical capability into commercial value.”