The global fight for control of internet infrastructure has heated up, driven by intensifying international competition, increasing cyber attacks and instances of economic espionage. Following the Russia-Ukraine war and escalating US-China tensions, countries are rushing to protect data flows and vulnerable critical infrastructure for the years ahead. Concerns about dependence on foreign-controlled hosting, IP address space and peering are also rising.
Furthermore, the rising cost of internet connectivity, driven by the near depletion of IPv4 addresses, and the growing frequency of routing attacks such as Border Gateway Protocol (BGP) hijacks have heightened the need for European Union (EU) countries to focus on digital sovereignty. Yet after years of outsourcing and bureaucracy, many remain stuck at the draft-strategy stage.
However, through a series of coordinated and innovative strategies – including IPv6 deployment, local control of IP space, private sector alignment and peering networks – Lithuania has been taking a highly proactive approach to future-proofing its internet infrastructure, improving digital sovereignty and national resilience.
How Lithuania is building internet infrastructure resilience
Lithuania’s post-Soviet past has played a significant role in shaping its bid for digital autonomy, which relies on viewing internet infrastructure as a state asset. A tech-first governance model combines public-private partnerships, infrastructure policy and national security.
Back in the early 2000s, the country was already investing significantly in nationwide digital identity, e-government services and secure infrastructure for public data. Now it is doubling down on IPv6 deployment at scale as part of a strategy to future-proof its internet infrastructure. And the country is actively encouraging full IPv6 adoption at a time when take-up across Europe remains relatively slow.
This shift should reduce dependence on the almost-depleted pool of IPv4 addresses, while securing long-term address availability. IPv6 networks are also more efficiently structured, with better redundancy and shorter routing paths, strengthening resistance to disruptions and failures.
“With globally unique addresses, IPv6 restores end-to-end connectivity, enabling more transparent communication and better performance. This eliminates the need for current complex workarounds like NAT due to IPv4 address limitations,” says Martin Butler, professor of digital transformation at Vlerick Business School. “This gives nations more control over their network infrastructure and supports the scale needed for future digital services.”
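The scale gap Butler describes is easy to see with Python's standard `ipaddress` module. This is a minimal illustration of the address arithmetic only, not tied to any particular deployment; the documentation prefix `2001:db8::` is used as an example:

```python
import ipaddress

# Total IPv4 space: 2^32 addresses, effectively exhausted.
ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses

# A single IPv6 /64 -- the conventional size of ONE subnet --
# already holds 2^64 addresses, dwarfing the whole of IPv4.
one_ipv6_subnet = ipaddress.ip_network("2001:db8::/64").num_addresses

print(f"All of IPv4:         {ipv4_total:,}")
print(f"One IPv6 /64 subnet: {one_ipv6_subnet:,}")
print(f"Ratio: {one_ipv6_subnet // ipv4_total:,}x")
```

One consequence is that every device can hold a globally unique address, which is why NAT, devised purely to stretch scarce IPv4 space, becomes unnecessary.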
Leasing out dormant IP addresses
Lithuania is taking strategic control of its IP address space by leasing out dormant IP addresses through private sector companies like IPXO. The company claims to have the world’s largest IPv4 leasing market, with more than 300 million leasable IPs across all regional internet registries (RIRs).
IPXO’s co-founder, Vincentas Grinius, believes that out of 4.3 billion IP addresses, 25% are not visible on the internet at all, with a considerable portion of the remainder being badly managed.
“It’s not about the shortage, it’s about how efficiently that resource is utilised. A lot of enterprises have a legacy space that some of them forgot about. Some of them have legacy networks where they have a different system and they are locked within those,” he says.
“Our aim is to step into a deeper understanding of how we can defragment their networks and give them that single source of truth. It’s to help enterprises optimise their networks and remove the hurdle of multiple tools,” adds Grinius.
Butler emphasises that as countries strive to achieve greater digital sovereignty, controlling data flows and IP address space has become vital.
“Local routing policies enable governments and ISPs [internet service providers] to align their network operations with domestic laws, enhance visibility in critical sectors, and reduce dependency on foreign infrastructure. These actions strengthen resilience and help mitigate security risks such as route hijacking,” he says.
Not only can this generate additional revenue, but it could also reduce the need to lease address space from foreign companies, while curbing black market leasing and IP hijacking.
Another step is building up routing and peering infrastructure by enhancing BGP route filtering, growing internet exchange points and supporting domestic peering. This helps decrease latency, keep traffic local and control the risks of foreign routing dependency, which is vital for both national security and performance.
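The route filtering mentioned above can be sketched in miniature. Real networks enforce this on routers with RPKI validators and IRR-generated policy, but the core check, that an announced prefix falls inside an authorised block and is not more specific than permitted, looks roughly like this (the prefix and maximum length below are made-up examples):

```python
import ipaddress

# Hypothetical allowlist: prefixes a peer is authorised to announce,
# each with a maximum prefix length (as in an RPKI ROA).
AUTHORISED = [
    (ipaddress.ip_network("193.219.0.0/16"), 24),
]

def accept_announcement(prefix: str) -> bool:
    """Accept a BGP announcement only if it sits inside an authorised
    prefix and is no more specific than allowed -- the basic check
    that blocks many hijacks and route leaks."""
    net = ipaddress.ip_network(prefix)
    for allowed, max_len in AUTHORISED:
        if net.version == allowed.version and net.subnet_of(allowed):
            return net.prefixlen <= max_len
    return False

print(accept_announcement("193.219.32.0/19"))  # covered, not too specific
print(accept_announcement("193.219.32.0/25"))  # rejected: more specific than /24
print(accept_announcement("8.8.8.0/24"))       # rejected: not authorised
```

Hijacks often work by announcing a more-specific prefix, which routers otherwise prefer; capping the prefix length is what defeats that trick.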
Simultaneously, Lithuania is developing top-tier response infrastructure through sector-specific cyber protocols and its National Computer Emergency Response Team (CERT-LT), in partnership with NRD Cyber Security. This allows the country to export CERT design, cyber security frameworks and routing strategies to other countries, further strengthening its cyber resilience leadership.
Apart from IPXO and NRD Cyber Security, the Lithuanian government consistently funds, supports and partners with several other private sector firms and business incubators, such as Hostinger, Tesonet, Telesoftas and Kaunas Tech Park.
By designing and operating domestic core stack services, these companies can significantly decrease the need for global hyperscalers, while being aligned with sovereign goals.
According to Eiviltas Paraščiakas, head of communications at Hostinger, one of the company’s main advantages over hyperscalers such as Amazon Web Services (AWS) and Google Cloud is speed. He said this unlocks many options, such as adapting quickly to technology trends, delivering minimum viable products and experimenting.
He believes competitors would struggle to launch a product in a few weeks, as Hostinger did with its Horizons AI app platform, which simplifies web application development.
Kaunas Tech Park plays a key role in seeding and supporting Lithuania’s early-stage tech startups and scaleups. These work across cyber security, cloud-native and hosting technologies, the internet of things (IoT) and edge networking, among other areas. Through this collaborative system, Lithuania can scale up its digital infrastructure much faster than many of its EU peers.
What Europe could learn from Lithuania
One of the key takeaways from Lithuania’s internet infrastructure approach is that true sovereign digital resilience comes from first mastering control of the invisible but essential building blocks. Lithuania treats routing infrastructure, IP space, Domain Name System (DNS) and hosting as national and strategic assets, not just technical private sector tools. As such, the long-term resilience of these assets can be baked into the national digital agenda and routinely monitored and encouraged by the Ministry of the Economy and Innovation.
In contrast, several EU countries still outsource core infrastructure to foreign telecoms operators or hyperscalers. While their digital agendas are full of intention, they lag in implementation. Another lesson is to utilise dormant IP assets for leasing revenue, which can then be used for public infrastructure, research and development, and scientific ventures. This effectively reduces digital waste and decreases the internet’s carbon footprint.
Lithuania also demonstrates the benefits of fostering public-private tech partnerships with companies like IPXO, Tesonet, Hostinger and NRD Cyber Security. These firms highlight the multifold benefits of policy support, building products that strengthen national autonomy, like a global IP leasing marketplace, encrypted access and domestic hosting. By doing the same, the UK and EU could significantly reduce reliance on Chinese or US firms and enhance domestic internet infrastructure capabilities.
Lithuania’s strategy of exporting cyber resilience through sovereign infrastructure models could help other EU members and the UK position themselves as global digital leaders too. The country demonstrates the benefits of agility in the early stages of implementing new internet infrastructure policies, through rapid deployment of IPv6 at scale, integration of national cyber architecture and changes to registry policy. This agility also leaves it far better equipped to deal with fast-evolving digital threats than the UK, which is still bogged down by fragmented policies and red tape.
The challenges ahead
Yet even though Lithuania is making significant strides in internet infrastructure resilience, some hurdles remain. Butler points out that local IP space control and sovereign routing policies have their drawbacks: “Excessive centralisation or opaque filtering can undermine the internet’s open, distributed nature. Mandating that traffic stay within borders may reduce efficiency, increase latency and risk fragmentation outcomes that weaken rather than strengthen digital infrastructure.”
Despite the impressive roll-out, however, Lithuanian IPv6 adoption across enterprise networks, consumer ISPs and regional government remains patchy, mainly because many services and devices still depend heavily on IPv4.
Awareness of the benefits of IP address leasing is also spreading slowly, with Grinius noting: “It took us a lot of effort to educate the market that leasing is good and safe, if you have a safe environment to do that. A lot of the companies or government institutions, non-governmental organisations, have that old thinking, where you can’t do anything with the IP addresses within the third-party networks. We tried to change that because technologies are advancing, things are introduced faster and faster.”
With the country mainly relying on a few major firms, such as IPXO and Hostinger, for internet infrastructure, there is also a systemic risk in case of strategy changes or consolidation. A lack of domestic hyperscalers also means that some critical workloads still depend on foreign infrastructure, which can slow full digital sovereignty.
Similarly, Lithuania’s talent pool is suffering significant brain drain to countries including the UK, Germany and the US, which often offer better salaries. This can have far-reaching impacts on sovereign infrastructure projects.
Over the past few years, edge artificial intelligence (AI) has quickly transformed from a niche technology to a vital and strategic necessity. This is mainly because it helps resolve or minimise some of the key bottlenecks of traditional cloud-based AI. These include data volume, latency, privacy and cost, among others, while allowing companies to make instant decisions to keep up with modern and increasingly automated operations.
As a result, the deployment of edge AI is no longer only a technical architecture choice, but one that is actively reshaping risk, cost, compliance and responsibility for enterprises. Businesses are increasingly choosing to store sensitive information mainly on local networks, instead of relying on cloud providers, which has further driven the growth of edge AI.
Rather than asking whether or not to adopt edge AI, the crucial question for most companies is how to do so without creating new security, cost and governance issues. Because the technology is still relatively new, many companies risk implementing edge AI simply to jump on the AI bandwagon, without being fully aware of which situations can benefit most from it.
“Edge AI attracts a lot of enthusiasm because it enables real-time, autonomous decisions. However, the real danger is a false sense of technological maturity,” notes Michaël Bikard, professor of strategy at the Insead business school. “Edge AI can work well locally while producing fragile outcomes at the system level. Historically, that’s when failures occur. Not because the technology fails, but because it is trusted too early, before institutions, organisations and governance are ready.”
As such, understanding the consequences of edge AI deployment is paramount to deciding long-term strategy.
Why businesses are moving from cloud-first to hybrid
Businesses are increasingly choosing a more hybrid AI approach over a cloud-first strategy, driven mainly by larger and more complex AI workloads. Many firms have also been disappointed by the savings achieved by adopting a full public cloud strategy, instead being faced with sharply surging operational costs.
These costs, exacerbated by data-heavy applications, mainly arose from moving large datasets to and from the cloud and between providers. Surprise fees and unpredictable bills have further strained IT budgets and complicated budgeting and forecasts.
On the other hand, with edge AI, companies can run stable and predictable workloads on-premise much cheaper than in the cloud.
Latency is another overarching concern. Edge AI is often better suited than the cloud to minimising latency for applications that need real-time, high-speed processing, such as operational control systems and local analytics.
In highly regulated industries such as finance and healthcare, some data may only be stored within certain jurisdictions, which has further driven the shift to edge AI or on-premise solutions.
Major, single cloud providers can also come with supplier lock-ins, while multicloud environments are increasingly complicated to manage, also leading to hybrid approaches.
A hybrid strategy lets companies use public cloud to train and update applications which need to scale fast, while keeping high-volume, sensitive or stable data on-premise. This allows organisations to balance agility, cost efficiency and operational resilience, especially in a global context where real-time intelligence is increasingly valuable.
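The hybrid placement logic described above can be sketched as a simple decision rule. The workload attributes, thresholds and the 80ms cloud round-trip figure below are all illustrative assumptions that a real organisation would replace with its own measurements:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_budget_ms: float  # maximum tolerable round-trip time
    data_sensitive: bool      # subject to data-residency rules
    demand_stable: bool       # predictable enough to size on-premise hardware for

# Illustrative assumption: typical round-trip to the nearest cloud region.
CLOUD_ROUND_TRIP_MS = 80.0

def place(w: Workload) -> str:
    """Toy hybrid placement rule: keep latency-critical, sensitive or
    stable workloads at the edge; send bursty work (e.g. training)
    to the public cloud, which scales fast."""
    if w.latency_budget_ms < CLOUD_ROUND_TRIP_MS:
        return "edge"
    if w.data_sensitive or w.demand_stable:
        return "edge"
    return "cloud"

print(place(Workload("robot-control", 10, False, True)))     # edge
print(place(Workload("model-training", 5000, False, False))) # cloud
print(place(Workload("patient-records", 200, True, True)))   # edge
```

The point of such a rule is not the thresholds themselves but that the placement decision is made explicit and auditable, rather than defaulting everything to one environment.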
Edge AI business drivers: What’s real and what’s noise
At present, most businesses using edge AI have adopted the technology due to practical operational needs. Successful deployments have focused on solving specific, cloud-only limitations, rather than trying to overhaul entire company tech infrastructures.
The need for real-time decision-making has been the primary driver of edge AI adoption, especially in sectors such as infrastructure, logistics, manufacturing and transport, where latency can have far-reaching operational and financial consequences that the technology helps to cut significantly.
Applying edge AI to these sectors helps companies process data closer to where it is generated, which enables them to react faster during times of lost central connectivity.
The technology also helps organisations dealing with sensitive data stay legally and financially compliant in jurisdictions with especially strict data storage laws.
For companies working on critical operations, edge AI can greatly improve operational resilience by making sure that data and intelligence are distributed throughout a number of locations. This helps reduce dependence on centralised systems, which in turn decreases the impact of outages.
However, some business drivers are vastly overestimated when it comes to justifying edge AI. The biggest of these is short-term cost savings. Edge AI can certainly cut transfer and cloud data consumption costs in the long run.
However, it initially needs significant capital expenditure, mainly in the form of hardware device upgrades. There are also ongoing maintenance, monitoring and software update costs following implementation. In some cases, integration with legacy systems may be slower than expected and businesses may have to hire specialised labour as well. Edge AI systems also use considerable amounts of power, leading to higher energy bills.
These factors can all cause costs to be higher in the first few months, requiring businesses to have a long-term view when it comes to seeing strategic benefits from edge AI.
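The long-term-view argument above is ultimately a break-even calculation. A minimal sketch, with entirely hypothetical figures for capital expenditure, the cloud bill being replaced, and ongoing edge running costs (power, maintenance, monitoring):

```python
def months_to_break_even(capex: float, monthly_cloud_cost: float,
                         monthly_edge_opex: float) -> float:
    """Months until upfront edge spend is repaid by avoided cloud bills.
    Returns infinity if edge running costs exceed the cloud bill,
    i.e. the deployment never pays for itself on cost alone."""
    monthly_saving = monthly_cloud_cost - monthly_edge_opex
    if monthly_saving <= 0:
        return float("inf")
    return capex / monthly_saving

# Hypothetical numbers: 120k upfront hardware, replacing a 15k/month
# cloud bill with 7k/month of edge power, maintenance and monitoring.
print(months_to_break_even(120_000, 15_000, 7_000))  # 15.0 months
```

Even this toy model shows why costs look worse in the first few months: the saving is real but accrues slowly against a large upfront outlay.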
Another overestimated notion is that edge AI can deliver anything like “super-intelligence” by running the huge, complex models that normally require datacentre graphics processing units (GPUs). Given the computing and power constraints of most edge deployments, this scenario is highly unlikely at present.
Similarly, expectations of businesses being able to switch entirely to edge AI, instead of a hybrid approach, are also unrealistic, mainly because of practical deployment, integration and maintenance limitations across various locations.
How edge AI is changing security, governance and ownership
As edge AI becomes more embedded in hybrid business tech strategies, risk management, enterprise security and governance are also changing, moving away from centralised IT control. These areas are now being shaped by local operational teams taking increasingly autonomous decisions, factoring in the real-time conditions of critical physical infrastructure.
Rising edge AI usage could heighten security concerns as well, as it widens organisational attack surfaces through multiple distributed devices and infrastructure. These then need to be protected, monitored and updated equally, following a set of standard guidelines, despite each of them presenting their own unique limitations.
“AI systems can perform exceptionally well under conditions similar to their training data, yet fail abruptly under rare, extreme, or novel scenarios – precisely the situations that matter most in critical infrastructure,” remarks Florian Stahl, chair of quantitative marketing and consumer analytics at Mannheim Business School.
Patch management can pose more issues with edge AI as well, with thousands of endpoints and vulnerabilities causing potential delays and discrepancies in maintenance.
Because edge AI is built on local deployments, more questions arise around version control, oversight and auditing. Companies may need to maintain more detailed and regular records of data inputs, decision-making processes and operational factors. Highly regulated industries in particular may demand evidence trails and greater accountability, which can affect company reputations and licences.
“Real-time AI systems, particularly those based on machine learning, often operate as ‘black boxes’, making it difficult to explain or audit decisions when failures occur. This lack of transparency is problematic in infrastructures where accountability and post-incident analysis are essential,” Stahl adds.
As autonomous decisions taken locally can have very real financial, safety and compliance consequences, businesses may be compelled to take accountability far more seriously if they choose to use edge AI.
Senior leadership may also need to adapt centralised organisational and governance models to a more distributed intelligence strategy, all while keeping costs low.
These factors have led to edge AI becoming a structural change just as much as a technical one, impacting how and where decisions are taken, how risk is evaluated and overall accountability.
What leaders should consider before implementing edge AI
Given the considerable initial investment required by most edge AI models, leaders should prioritise long-term strategic impact, rather than the hype of the latest technology. This means that when evaluating company readiness, the potential scope of the intended edge AI deployment matters as much as its timing.
The biggest factor to consider is which processes or systems are most likely to benefit from using edge AI first and which can wait for a few more months. Ideally, businesses should prioritise any processes where latency, operational risk and data locality are most critical. By doing this, organisations can spread out costs while testing new deployments in a relatively lower-risk manner.
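One way to make that prioritisation concrete is a simple scoring rubric. The criteria weights and the 0-5 scores below are illustrative assumptions, not a standard methodology; the point is to rank candidates by latency criticality, data locality and operational risk while penalising integration cost:

```python
def edge_priority(latency_critical: int, locality_required: int,
                  operational_risk: int, integration_cost: int) -> int:
    """Toy priority score (each input scored 0-5): favour workloads
    where latency, data locality and operational risk dominate, and
    penalise those that are expensive to integrate with legacy systems."""
    return (3 * latency_critical + 2 * locality_required
            + 2 * operational_risk - integration_cost)

# Hypothetical candidate workloads and scores.
candidates = {
    "assembly-line vision": edge_priority(5, 2, 4, 2),
    "quarterly reporting":  edge_priority(0, 1, 1, 1),
    "patient monitoring":   edge_priority(4, 5, 5, 3),
}
for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(name, score)
```

Workloads that score low, like the reporting example, are exactly the ones that can wait a few more months, letting the organisation spread costs and pilot in lower-risk areas first.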
“Importantly, organisations should evaluate AI deployments not only through efficiency metrics, but also through risk-adjusted performance indicators, recognising that marginal efficiency gains are rarely justified if they introduce disproportionate systemic or ethical risks,” Stahl advises.
The next question is: to scale or not to scale? In several cases, a pilot edge AI deployment is enough for the short term, does not deliver the expected results, or exposes hidden costs and operational issues.
In these cases, decision-makers need to evaluate whether it is worth taking the risk to scale, which will need more investment, specialised skills and manpower.
However, knowing when not to use edge AI, and when it could cause more harm than good, is equally important for businesses. This is primarily in cases where data volumes are still low, latency is not crucial, or the company does not have the means to appropriately handle several distributed endpoints.
“Edge AI should not be deployed in sectors where use cases are broad, stakes are high, and the consequences of errors are poorly understood,” Insead’s Bikard states. “That combination usually signals a timing problem rather than a technological one. In open, highly interconnected environments, even small mistakes can cascade before organisations have time to respond.”
In such cases, exercising strategic restraint is far more instrumental to long-term value.
From tech choice to organisational shift
Ultimately, implementing edge AI models should be primarily focused on delivering long-term, strategic value, rather than a trend-based decision. This is especially true if latency and real-time data analysis pose real risks. Businesses need to consider that edge AI use is likely to reshape everything from cost structures and decision-making to autonomy and risk, and prepare accordingly.
“There are real potential gains from using AI for predictive maintenance, but those gains rarely come from the technology alone. For AI to pay off, the surrounding organisation – its incentives, culture, structures and skills – must also adapt. Predictions only create value if people are empowered to act on them,” Bikard concludes.
Enterprises that treat edge AI as a full operational shift, rather than an independent feature tacked onto legacy systems, will be better placed to take advantage of it in the long run.