There’s a reason the first successful smart specs look like they’re from the 1950s. That extra thickness isn’t just retro flair, it’s hiding a processor and a battery. But that technical constraint creates a creative opportunity: in the right frame, smart tech can disappear. And transforming functional, medical-grade eyewear—like prescription glasses—into stylish, mass-market fashion accessories is exactly what EssilorLuxottica does best.
Still, blending smart tech with high fashion isn’t without risk. Do those two worlds really want to share a nose bridge? “Meta and EssilorLuxottica hope this collaboration will be one of the first successful attempts to integrate high-tech applications, like AI, into luxury products,” says Quillin. “While the Ray-Ban partnership appears successful so far, it’s unclear whether consumers will embrace tech features built into high-end products like Prada, Chanel, or Versace eyewear.”
Meta, for its part, is betting on convergence. The company sees a future where fashion and tech are inseparable. In a note titled “Personal Superintelligence,” Zuckerberg imagined a future where “personal devices like glasses that understand our context—because they can see what we see, hear what we hear and interact with us throughout the day—will become our primary computing devices.” That vision of AI-integrated eyewear shows just how deeply Meta believes the future will be both wearable and always on.
We might see the first glimpse of Zuckerberg’s wearable future as soon as September. Bloomberg reports that this is when Meta will unveil its latest smart glasses, complete with a heads-up display, which will supposedly ship later this fall at a starting price of around $800.
The Competitive Firewall
Still, while Meta may have taken the first credible swing at consumer-grade smart specs, it’s hardly alone. Google has quietly rebooted its wearable ambitions after the much-memed demise of Google Glass, acquiring smart-glasses startup North in 2020 and reportedly working with manufacturers like Samsung and Qualcomm to develop an XR (extended reality) ecosystem.
Then, in July, Google doubled down with a $100 million investment in Gentle Monster, the South Korean eyewear brand known for its fashion-forward, tech-ready designs. Together, they’re developing a next-gen pair of smart glasses that will supposedly fuse AI capabilities with high-end aesthetics—less cyborg, more catwalk.
Apple, true to form, is trying to play the long game. The Vision Pro was never meant to live on your face full-time, but it’s a stepping stone. In choosing to tackle the far trickier problem of fully immersive VR first, Apple effectively bet on the wrong horse—pouring effort into a technology that’s dazzled reviewers but hasn’t won over the average consumer.
Meta, by contrast, staked out a beachhead with simpler AR glasses that looked like something people might actually wear in public. Now, reports from Bloomberg and The Information suggest Apple is working on lighter, more wearable AR glasses, though they may be years from release. When they do arrive, Apple will have the advantage of polished software and its own global retail footprint, while Meta is racing to secure the same distribution advantage via EssilorLuxottica.
Snap CEO Evan Spiegel, meanwhile, has long bet on AR. Snap has invested more than $3 billion over the past 11 years to build its own AR platform. Meta, by contrast, spends more than that every quarter through its Reality Labs division, which is focused on both AR and VR—but still, Snap’s persistence underscores just how long the runway is for this market.
HOKA’s max-stacked Rocket X Trail combines road race shoe energy with boosted grip from a 3-mm lugged outsole. If you’re looking for a fast shoe to go on the attack, this is it. It’s also fantastic for all-round comfort. In testing, I laced up the Rocket X Trail and ran for 3 hours (just short of 19 miles) fresh out of the box, across roads, forest gravel trails, some grass, and through some serious water. It delivered efficiency and energy whether I was moving at marathon pace or with heavier, tired, ragged footfalls in the latter miles.
The rockered, supercritical midsole uses HOKA’s liveliest foam, similar to what you’ll find in its race-ready road shoes, along with a carbon plate. The combination makes for a really fun ride that’s smooth, springy, fast, and remarkably consistent. It’s also highly cushioned, so you sacrifice a lot of ground feel for that big-stack springy softness, and it’s less stable over very lumpy terrain. But on open, flat, runnable mixed terrain, it’s excellent.
The lightweight uppers have a race-shoe-ready feel, and after running through ankle-deep flooded sections, they shed water really quickly. This is a pricey road-to-trail shoe, but it’s versatile, and there’s plenty of winter road potential, too.
It’s always pleasing to see an array of physical buttons, and you get sizable ones here. You’re not going to miss these wide, flat buttons even when picking the pace up. The silicone strap has a nice stretch to it, and while the button clasp is a bit awkward to get into place, this watch does not budge.
Suunto has jumped on the flashlight trend, with an LED light strip sitting on the front of the case. You can adjust brightness levels, and there are SOS and alert modes that emit a very noticeable pulsating light pattern. It’s a light I found useful for rooting around indoors as well as on nighttime outings.
The biggest change is the introduction of a 1.5-inch, 466 x 466 AMOLED display. This replaces the dull, albeit very visible, memory-in-pixel (MIP) display. Suunto also ditched the solar charging, which required spending a significant amount of time outside to reap its battery benefits.
Adding AMOLED screens to outdoor watches has been contentious, since the older MIP displays are simply more power-efficient. Sure enough, the Vertical 2 is rated at about 10 days less than the original Vertical for what Suunto calls daily use.
Still, even if you’re putting its tracking and mapping features to use, you’re not going to be reaching for the charger every few days. After two hours of tracking in optimal GPS mode, the battery only dropped by 2 to 3 percent. The battery drop outside of tracking is also small and the standby performance is excellent as well.
Software Updates
A more streamlined set of smartwatch features helps reserve battery for when it really matters. Unfortunately, some of my battery life likely came from a limitation: paired to an iPhone rather than an Android phone, you don’t get phone notifications or responses. There’s also no onboard music player, but you do get a pretty slick set of music playback controls that are accessible during tracking.
Over the past few years, edge artificial intelligence (AI) has quickly transformed from a niche technology into a strategic necessity. This is mainly because it resolves or minimises some of the key bottlenecks of traditional cloud-based AI, including data volume, latency, privacy and cost, while allowing companies to make the instant decisions that modern, increasingly automated operations demand.
As a result, the deployment of edge AI is no longer only a technical architecture choice, but one that is actively reshaping risk, cost, compliance and responsibility for enterprises. Businesses are increasingly choosing to store sensitive information mainly on local networks, instead of relying on cloud providers, which has further driven the growth of edge AI.
Rather than asking whether or not to adopt edge AI, the crucial question for most companies is how to do so without creating new security, cost and governance issues. Because the technology is still relatively new, several companies risk implementing edge AI simply to jump on the AI bandwagon, without being fully aware of which situations benefit most from it.
“Edge AI attracts a lot of enthusiasm because it enables real-time, autonomous decisions. However, the real danger is a false sense of technological maturity,” notes Michaël Bikard, professor of strategy at the Insead business school. “Edge AI can work well locally while producing fragile outcomes at the system level. Historically, that’s when failures occur. Not because the technology fails, but because it is trusted too early, before institutions, organisations and governance are ready.”
As such, understanding the consequences of edge AI deployment is paramount to deciding long-term strategy.
Why businesses are moving from cloud-first to hybrid
Businesses are increasingly choosing a hybrid AI approach over a cloud-first strategy, driven mainly by larger and more complex AI workloads. Many firms have also been disappointed by the savings delivered by a full public cloud strategy, instead facing sharply rising operational costs.
These costs, exacerbated by data-heavy applications, mainly arose from moving large datasets to and from the cloud and between providers. Surprise fees and unpredictable bills have further strained IT budgets and complicated planning and forecasting.
With edge AI, on the other hand, companies can run stable, predictable workloads on-premise far more cheaply than in the cloud.
Latency is another overarching concern. Edge AI is often better placed than the cloud to minimise latency for applications that need real-time, high-speed processing, such as operational control systems and local analytics.
In highly regulated industries such as finance and healthcare, some data may only be stored within certain jurisdictions, which has further driven the shift to edge AI or on-premise solutions.
Relying on a single major cloud provider can also mean supplier lock-in, while multicloud environments are increasingly complicated to manage; both factors are pushing organisations toward hybrid approaches.
A hybrid strategy lets companies use public cloud to train and update applications which need to scale fast, while keeping high-volume, sensitive or stable data on-premise. This allows organisations to balance agility, cost efficiency and operational resilience, especially in a global context where real-time intelligence is increasingly valuable.
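The placement logic a hybrid strategy implies can be sketched in a few lines of code. The workload fields and thresholds below are illustrative assumptions, not a real framework; the point is simply that the cloud-versus-edge decision described above reduces to a small set of explicit rules about training needs, data sensitivity and latency deadlines.

```python
# Hypothetical sketch of a hybrid edge/cloud placement rule.
# Fields and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: int   # tightest response deadline the task tolerates
    data_sensitive: bool  # subject to data-residency or privacy rules
    needs_training: bool  # involves model training or retraining

def place(w: Workload) -> str:
    """Return 'edge' or 'cloud', mirroring the split described above:
    train and scale in the cloud; keep latency-critical or sensitive
    inference on-premise."""
    if w.needs_training:
        return "cloud"    # training benefits from elastic cloud compute
    if w.data_sensitive or w.max_latency_ms < 50:
        return "edge"     # residency rules or real-time deadlines
    return "cloud"        # everything else can stay centralised

# Example: a production-line vision check vs. a weekly demand forecast
print(place(Workload("defect-detection", 20, True, False)))    # edge
print(place(Workload("demand-forecast", 60000, False, True)))  # cloud
```

In practice such rules would sit in an orchestration layer rather than application code, but even this toy version shows why governance questions follow the placement decision: whoever owns the rule owns the risk.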
Edge AI business drivers: What’s real and what’s noise
At present, most businesses using edge AI have adopted the technology due to practical operational needs. Successful deployments have focused on solving specific limitations of cloud-only setups, rather than trying to overhaul entire company tech infrastructures.
The need for real-time decision-making has been the primary driver of edge AI adoption, especially in sectors like infrastructure, logistics, manufacturing and transport, where latency can have far-reaching operational and financial consequences that the technology helps to cut down.
Applying edge AI in these sectors lets companies process data closer to where it is generated, enabling them to keep reacting quickly even when central connectivity is lost.
The technology also helps organisations dealing with sensitive data stay legally and financially compliant in jurisdictions with especially strict data storage laws.
For companies working on critical operations, edge AI can greatly improve operational resilience by ensuring that data and intelligence are distributed across multiple locations. This reduces dependence on centralised systems, which in turn decreases the impact of outages.
However, some business drivers are vastly overestimated when it comes to influencing the need to implement edge AI. The biggest of these is short-term cost savings. Edge AI can certainly cut down on transfer and cloud data consumption costs in the long run.
However, it initially needs significant capital expenditure, mainly in the form of hardware device upgrades. There are also ongoing maintenance, monitoring and software update costs following implementation. In some cases, integration with legacy systems may be slower than expected and businesses may have to hire specialised labour as well. Edge AI systems also use considerable amounts of power, leading to higher energy bills.
These factors can all cause costs to be higher in the first few months, requiring businesses to have a long-term view when it comes to seeing strategic benefits from edge AI.
Another often-overestimated notion is that edge AI can deliver anything like “super-intelligence” by running the huge, complicated models that normally demand datacentre graphics processing units. Given current computing and power restrictions, this scenario is highly unlikely in most cases.
Similarly, expectations of businesses being able to switch entirely to edge AI, instead of a hybrid approach, are also unrealistic, mainly because of practical deployment, integration and maintenance limitations across various locations.
How edge AI is changing security, governance and ownership
As edge AI becomes more embedded in hybrid business tech strategies, risk management, enterprise security and governance are also changing, moving away from centralised IT control. These areas are now being shaped by local operational teams taking increasingly autonomous decisions, factoring in the real-time conditions of critical physical infrastructure.
Rising edge AI usage could also heighten security concerns, as it widens organisational attack surfaces across multiple distributed devices and pieces of infrastructure. These all need to be protected, monitored and updated to the same standard, following common guidelines, even though each presents its own limitations.
“AI systems can perform exceptionally well under conditions similar to their training data, yet fail abruptly under rare, extreme, or novel scenarios – precisely the situations that matter most in critical infrastructure,” remarks Florian Stahl, chair of quantitative marketing and consumer analytics at Mannheim Business School.
Patch management can pose more issues with edge AI as well, with thousands of endpoints and vulnerabilities causing potential delays and discrepancies in maintenance.
Because edge AI centres on local deployments, more questions can arise around version control, oversight and auditing. Companies may need to maintain more in-depth and regular records of data inputs, decision-making processes and operational factors. Highly regulated industries in particular may demand evidence trails and greater accountability, which can affect company reputations and licences.
“Real-time AI systems, particularly those based on machine learning, often operate as ‘black boxes’, making it difficult to explain or audit decisions when failures occur. This lack of transparency is problematic in infrastructures where accountability and post-incident analysis are essential,” Stahl adds.
As autonomous decisions taken locally can have very real financial, safety and compliance consequences, businesses may be compelled to take accountability far more seriously if they choose to use edge AI.
Senior leadership may also need to adapt centralised organisational and governance models to a more distributed intelligence strategy, all while keeping costs low.
These factors have made edge AI as much a structural change as a technical one, affecting how and where decisions are taken, how risk is evaluated, and where accountability sits.
What leaders should consider before implementing edge AI
Given the considerable initial investment required by most edge AI deployments, leaders should prioritise long-term strategic impact over the hype of the latest technology. When evaluating company readiness, the potential scope of the intended edge AI deployment matters as much as its timing.
The biggest factor to consider is which processes or systems are most likely to benefit from using edge AI first and which can wait for a few more months. Ideally, businesses should prioritise any processes where latency, operational risk and data locality are most critical. By doing this, organisations can spread out costs while testing new deployments in a relatively lower-risk manner.
“Importantly, organisations should evaluate AI deployments not only through efficiency metrics, but also through risk-adjusted performance indicators, recognising that marginal efficiency gains are rarely justified if they introduce disproportionate systemic or ethical risks,” Stahl advises.
The next question is: to scale or not to scale? In several cases, a pilot edge AI deployment is either enough for the short-term, does not deliver the expected results, or highlights many hidden costs and operational issues.
In these cases, decision-makers need to evaluate whether it is worth taking the risk to scale, which will need more investment, specialised skills and manpower.
However, knowing when not to use edge AI, and when it could cause more harm than good, is equally important for businesses. This is primarily in cases where data volumes are still low, latency is not crucial, or the company does not have the means to appropriately handle several distributed endpoints.
“Edge AI should not be deployed in sectors where use cases are broad, stakes are high, and the consequences of errors are poorly understood,” Insead’s Bikard states. “That combination usually signals a timing problem rather than a technological one. In open, highly interconnected environments, even small mistakes can cascade before organisations have time to respond.”
In such cases, exercising strategic restraint is far more conducive to long-term value.
From tech choice to organisational shift
Ultimately, implementing edge AI should be a long-term strategic decision rather than a trend-based one, especially where latency and real-time data analysis pose real risks. Businesses should also expect edge AI to reshape everything from cost structures and decision-making to autonomy and risk, and prepare accordingly.
“There are real potential gains from using AI for predictive maintenance, but those gains rarely come from the technology alone. For AI to pay off, the surrounding organisation – its incentives, culture, structures and skills – must also adapt. Predictions only create value if people are empowered to act on them,” Bikard concludes.
Enterprises that treat edge AI as a broader operational shift, rather than an independent feature tacked onto legacy systems, will be best placed to take advantage of it in the long run.