Having spent the best part of a year and a half working to unify its products and tools with those of its new owner, Cisco, Splunk is using its annual Splunk.conf event in Boston, Massachusetts, to showcase a number of future developments, beginning with the introduction of the new Cisco Data Fabric platform.
Speaking to reporters in advance of the show’s opening keynote on Monday 8 September, Splunk senior vice president and general manager for EMEA, Petra Jenner, reflected on a busy year and said there were a lot of positive aspects to the deal.
“While we still have our own identity we are working more closely together to achieve better customer experiences,” she said. “One of the key priorities for us is to ensure that customers are really supportive. They see that we are collaborating from a technical point of view.”
Jenner said that prior to Splunk’s acquisition by Cisco, while it had had a strong and growing presence in markets such as the UK, France and Germany, there had been a recognition that it needed to invest in growth.
Cisco’s money has been a catalyst for this investment, not only in the UK but also helping open up more business in countries such as Saudi Arabia and the United Arab Emirates (UAE), said Jenner.
“The impact the acquisition had for the Splunk EMEA team has been extremely good. We have joint customer engagements and there are core initiatives going on so that customers can really leverage the joint Splunk and Cisco, not only the product but also the overall convergence,” said Jenner.
“It also suits very well the technology trends [that are] happening,” she added. “In regard to AI the platform approach is getting more important.”
Jenner also reaffirmed Splunk’s commitment to its IT channel partners in both the security and observability fields, saying it has doubled the numbers on its books. She added that drawing on the strength of Cisco partners that may not previously have considered Splunk – with their myriad networking certifications – may help make the platform concept an easier sell to customers looking to do more.
Data Fabric turns machine data into actionable intel
Splunk.conf kicked off on Monday evening with the launch of Cisco Data Fabric, which promises to “transform machine data into AI-ready actionable intelligence”.
On the basis that AI has led to a surge in machine data, but that said data is still largely siloed, fragmented and rarely used, Splunk said Cisco Data Fabric is designed to enable customers to make better decisions, reduce their operational risk, and innovate around AI, for example by helping train custom models, powering agentic workflows, or correlating various streams of machine and business data.
Among some of Data Fabric’s features are the Time Series Foundation Model, which will power pattern analysis and temporal reasoning on time series data to enable anomaly detection, forecasting and root cause analysis, driving proactive operations and easing incident response.
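Splunk has not published the foundation model’s internals, but the kind of time-series anomaly detection it describes can be illustrated with a minimal sketch: a rolling z-score that flags points deviating sharply from recent history. All names, data and thresholds below are invented for illustration; this is not Splunk’s actual model.

```python
# Hypothetical illustration only: a simple rolling z-score flags values
# that sit far outside the statistics of the preceding window, the basic
# idea behind anomaly detection on metric streams.
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=10, threshold=3.0):
    """Return indices whose value lies more than `threshold` standard
    deviations from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady CPU readings with one spike injected at index 20
cpu = [50.0 + (i % 3) for i in range(20)] + [95.0] + [50.0 + (i % 3) for i in range(10)]
print(rolling_zscore_anomalies(cpu))  # only the spike is flagged
```

A production system would use a learned model rather than fixed statistics, but the input and output shape, a metric stream in, flagged timestamps out, is the same.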
Meanwhile, Cisco AI Canvas, also integrating with Splunk Cloud Platform, will provide an AI agent to orchestrate analysis workflows and workspaces for team collaboration. Splunk described this as a “virtual war room experience” that will let teams glean more in-depth insight, work together in real time, and make better decisions.
These capabilities will be coming on stream over the next few months, with a few slated for 2026.
Kamal Hathi, senior vice president and general manager of Splunk, said machine data was now the heartbeat of digital organisations and characterised Splunk as a “heart rate monitor”.
“Our goal is to give customers the fastest, most secure path from data to action,” said Hathi.
“By embedding AI across the platform and embracing open standards, we’re not just helping organisations analyze information faster – we’re enabling them to anticipate change, scale innovation without unnecessary complexity, and deliver digital services that are more resilient, adaptive, and responsive to the needs of their users.”
IDC senior research director of cloud data management, Archana Venkatraman, said Data Fabric addressed a critical pain point – the need to quickly and safely unify vast streams of machine data in the service of resilience.
“By enabling a federated approach that eliminates data movement, it provides a pragmatic solution for organisations operationalising AI at scale,” she said.
“Its focus on real-time search, coupled with a repository for AI-ready data, provides tangible value by reducing complexity and time to insights. This unified architecture is a strong step toward helping customers build more resilient and trustworthy AI systems.”
Searching for Snowflakes
Also on the docket is the launch of Splunk Federated Search for Snowflake, a new platform integration empowering users to connect, query and combine operational and business data across Splunk and Snowflake environments.
Some of its key capabilities include unlimited onboarding of Snowflake data in Splunk; federated queries whereby users can write SPL-like queries to search Snowflake data direct from Splunk; next-gen federation capabilities to combine datasets for more impactful context and insight; and more efficient querying, letting users leverage Snowflake analytics for partial queries before performing final data joins in Splunk.
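Splunk has not yet documented the exact query semantics, but the pattern described, pushing partial aggregation down to the warehouse and performing the final join locally, can be sketched generically. All function and field names below are invented for illustration; this is not Splunk’s or Snowflake’s actual API.

```python
# Hypothetical sketch of the federated-query pattern: aggregate in the
# remote warehouse first, so only a small result set crosses the wire,
# then join it with local operational data.

def remote_aggregate(orders):
    """Stand-in for a query pushed down to the warehouse:
    total revenue per customer, computed where the data lives."""
    totals = {}
    for row in orders:
        totals[row["customer"]] = totals.get(row["customer"], 0) + row["amount"]
    return totals

def local_join(events, totals):
    """Final join performed locally: enrich operational events
    with the pre-aggregated business data."""
    return [{**e, "revenue": totals.get(e["customer"], 0)} for e in events]

orders = [{"customer": "acme", "amount": 100}, {"customer": "acme", "amount": 50}]
events = [{"customer": "acme", "error_count": 3}]
print(local_join(events, remote_aggregate(orders)))
```

The design choice being advertised, doing the heavy analytics where the data lives and only joining the reduced results, is what avoids bulk data movement between the two platforms.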
These capabilities, and others, are slated for a July 2026 release.
Kornit Digital Ltd. (NASDAQ: KRNT) (“Kornit” or the “Company”), a global pioneer in sustainable, on-demand digital fashion and textile production technologies, today announced a major industry milestone: the commercial launch of its groundbreaking digital footwear solution for sports and athleisure markets.
After two years of intensive development and close collaboration with leading global brands and customers, the company is unveiling its complete footwear solution at ITMA Asia + CITME Singapore 2025, marking a turning point for digital production in footwear. For the first time, Kornit technology has crossed the milestone of more than one million pairs of sports shoes sold globally under leading brands, proving that digital footwear manufacturing has moved beyond concept and is now a fully scaled commercial reality.
A Massive Market Opportunity for Digital Transformation, and Kornit is Just Getting Started
The addressable market Kornit is targeting represents roughly one billion decorated shoe uppers each year across the global sports and athleisure footwear industry. This is a massive and fast-growing segment shaped by consumer demand for variety, innovation, and personalization. Kornit’s technology directly addresses the key challenges of this market, including design limitations, long development cycles, and overproduction, by replacing complex analog decoration with a single-step digital workflow that delivers durability, flexibility, and limitless design freedom.
Kornit’s patented technology enables high-quality, durable prints directly on technical fabrics used in footwear, combining precision, sustainability, and performance in one streamlined process. This innovation redefines how footwear is designed and produced, shifting from traditional mass-production methods to agile, efficient, and creative digital workflows that allow brands to create on demand.
Following successful deployments with two leading footwear manufacturers in China, Kornit is expanding globally with additional customers in Vietnam and in Germany, setting a new standard for agility, creativity, and sustainability across the world’s leading footwear hubs. Ronen Samuel, Chief Executive Officer at Kornit Digital said:
“The footwear industry is undergoing a profound transformation. Through close collaboration with visionary partners and relentless innovation, we have developed a fully digital solution that redefines how shoes are designed, produced, and delivered. What started as a concept is now being adopted at scale, with leading global brands. Kornit has always been about pushing boundaries, and this milestone marks a new era for digital manufacturing and sustainable growth.”
Customer feedback highlights that Kornit’s digital solution has dramatically accelerated footwear development and unlocked creative freedom. What took months now happens in days, enabling brands to respond faster to trends and deliver distinctive, high-performance products with consistency and efficiency.
Kornit’s footwear solution also sets new standards for sustainability. The process requires no water, uses minimal energy, and enables local, near-shore production—reducing waste, inventory, and carbon footprint while allowing brands to produce only what is sold.
Looking ahead, Kornit’s next-generation patented footwear technology will be introduced at Techtextil 2026 in Frankfurt, showcasing new specialized polymers and expanded material compatibility that will further enhance performance and scalability.
Visit Kornit at ITMA Asia + CITME Singapore 2025, Hall 6 Stand C204, to experience how creativity is replacing complexity and digital is replacing analog, empowering the footwear industry to move at the speed of imagination, with Kornit leading the way.
Note: The headline, insights, and image of this press release may have been refined by the Fibre2Fashion staff; the rest of the content remains unchanged.
One of the most notable cloud technology trends in 2025 was the (seemingly) overnight emergence of the neocloud category of cloud providers, which specialise in the provision of niche, sovereign cloud and artificial intelligence (AI) infrastructure services.
Neocloud providers, which include the likes of Nscale, CoreWeave and Carbon3.ai, are having a somewhat disruptive impact on the market by making huge commitments to build out hyperscale datacentres in support of the UK government’s AI growth agenda.
These providers are also taking up capacity in colocation datacentres that some of the hyperscale cloud giants previously committed to renting space in, before pulling out, as they seek to rapidly build their footprint in the UK, particularly.
As reported by Computer Weekly, real estate consultancy CBRE pinpointed lower hyperscaler demand for colocation capacity in the first nine months of 2025, even as total contracted future AI-ready datacentre capacity rose to 414MW, up from 133MW in the same period of 2024.
A chunk of that will be to neocloud providers offering purpose-built AI services, such as bare metal or graphics processing units (GPUs) as a service (GPUaaS) or inference with pay-as-you-go pricing. But should enterprises be betting on neocloud? With AI infrastructure investments underpinning a Gartner forecast that annual enterprise IT revenues will see a 10.8% surge from 2025 to reach $6.2tn (£4.5tn) by the end of 2026, few want to be left behind.
Mark Boost, CEO at cloud provider Civo, thinks some may have reasonable concerns about neoclouds, despite – or even because of – the vast investments in train. “The problem is there is too much hype right now. And with neocloud, you’re having companies that may be well capitalised but still have little experience in running cloud services.”
Such providers might tick multiple financial boxes and successfully procure datacentre space or GPUs, but that might be their limit: they may not be able to offer a mature, wide ecosystem of products and services. Whether that matters depends on what IT buyers need. Some may be building themselves up in this space, by going down an open source route, for example, but it is a risk for customers to consider.
“Your hyperscalers, your CoreWeaves and so on, do have a more mature ecosystem. But then, for sovereign infrastructure, beyond them, you’re really limited for choice,” says Boost. “Only a few have some form of software stack. Others are scrambling around to do it. Of course, if you do just want to buy a few GPUs and nothing else, they can hand you the keys and you’re on your own.”
Support needs for AI workloads
Many enterprises need far more than that in terms of support, however, especially with the rise of AIOps and MLOps. Most organisations looking to benefit from AI and machine learning (ML) need a partner that can supply the required level and cadence of support. “There’s a consultancy and professional services element to consider,” says Boost. “And sovereignty is becoming a bigger and bigger thing. People have been burned. They crave control.”
In summary, organisations need transparency around how data will be managed, stored and priced. They need to tread carefully when choosing cloud providers.
Enrico Signoretti, vice-president of product and partnerships at cloud storage firm Cubbit, adds that many neoclouds are just specialised clouds, operated or using a tech stack that’s largely based overseas. “[This means] they can raise the same sovereignty questions as traditional cloud,” he says. “Do you really control your data?”
For sovereign AI, you need “home-grown champions”. European countries need to scale and fund their own new AI factories. The viable path is architectures that keep data sovereignty next to the GPU through encryption and the right data orchestration and governance. Otherwise, an enterprise’s data, which is its most important asset, remains exposed to risks linked to extraterritorial laws, he says.
Thomas King, chief technology officer of internet exchange DE-CIX, says neocloud providers have competed so far by offering cheap GPUs for AI training. Rapid innovation in AI servers travels hand-in-hand with depreciation, which is estimated to be three to five times faster than for traditional hardware.
“Usually, they are a lot cheaper because they focus on AI workloads only. They are not general-purpose cloud providers,” he says.
The risk to the customer partly depends on the risk of provider lock-in that restricts long-term agility. That said, modern IT infrastructures usually have a lot of virtualisation in place. Moving from one provider to another is a lot easier than it was 10 years ago, says King.
Additionally, AI inference workloads are likely to prove more profitable than training. Training can be done cheaply wherever land and power are affordable and datacentres are easy to build, but serving inference to users demands quality connectivity.
“When it’s about using the AI models, neoclouds [sited] very close [to users] can provide inference with very low latency,” he says. “In this case, you are usually also in an environment where you’re not only going with one AI provider anyway. You need to find the right mix to serve your customers best.”
In addition, organisations do not usually go out of business overnight, and many neocloud firms are publicly traded, which means regular market announcements. Warning signs, such as a provider not keeping up with new GPU versions, give customers time to start migrating elsewhere.
“If you do your IT infrastructure right, and build in the risk that your neocloud provider might go out of business, it shouldn’t be too hard to move your infrastructure,” he says.
With the European Union’s proposed Cloud and AI Development Act, which is expected to be introduced this year, neocloud providers may be able to offer control of data processing locations and ensure jurisdiction-aware interconnection and data pathways, he adds.
Expansion tipped to continue
Estimates by Synergy Research suggest the doubling of the neocloud sector in the past year could be followed by further expansion at 69% per year through to 2030.
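A quick compound-growth calculation shows what that forecast implies. The 69% figure comes from the article; the base year of 2025 and the straight compounding are assumptions for illustration.

```python
# Illustrative arithmetic: steady 69% annual growth compounds into a
# large overall multiple over five years (2025 -> 2030 assumed).

def compound_multiple(annual_growth, years):
    """Overall size multiple after `years` of steady growth."""
    return (1 + annual_growth) ** years

multiple = compound_multiple(0.69, 5)
print(f"69%/year for 5 years means the sector grows ~{multiple:.0f}x")
```

In other words, sustained 69% annual expansion would leave the sector roughly an order of magnitude larger by 2030.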
“AI is a killer application for edge computing,” DE-CIX’s King says. “You have complex AI models. [Applications] need to be close to the user, because working on doing the calculations on the AI model already takes time. You can’t spend a lot of time on the transmission of the data back and forth.”
Traditional hyperscale providers are also moving in a similar direction as this new market develops, even if not as fast, with the return on investment (ROI) not being realised as quickly as many had hoped.
“There are a lot of pros, including high margins in the inference space,” says King. “Not everyone will survive. But, in the end, everybody is looking into how we can make use of AI, and we are still in the beginning.”
Suresh Vasudevan, CEO of AI platform provider Clockwork.io, notes that datacentre lifecycles run to 10 or 15 years, while GPU technology depreciates in four to six years. However, long-term contracts with foundation model builders or hyperscalers may reduce any risk.
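The mismatch Vasudevan describes can be made concrete with simple straight-line depreciation, using the lifespans quoted above. The straight-line method itself is an assumption for illustration, not how any particular operator accounts for its assets.

```python
# Illustrative arithmetic: lifespans from the article, straight-line
# depreciation assumed. A GPU's value erodes far faster than that of
# the facility housing it.

def annual_depreciation_rate(useful_life_years):
    """Straight-line: the asset loses 1/life of its value per year."""
    return 1.0 / useful_life_years

datacentre = annual_depreciation_rate(15)  # facility life of 10-15 years
gpu = annual_depreciation_rate(5)          # GPU life of 4-6 years

print(f"GPU value erodes ~{gpu / datacentre:.0f}x faster than the facility")
```

Taking the shorter ends of both ranges (10-year facility, 4-year GPU) gives a smaller gap of 2.5x, which is why long-term offtake contracts matter so much to neocloud economics.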
In many cases, neoclouds can offer lower GPU pricing, more predictable access to high-end capacity in a supply-constrained market, and sometimes bare metal environments where enterprises can bring and tune their own software stack for higher utilisation. When GPU supply is tight, guaranteed access to capacity and cost control can outweigh ecosystem convenience – although integration friction and enterprise readiness requirements cannot be underestimated.
“Ultimately, the choice comes down to workload profile and economics,” adds Vasudevan.
Consider independent benchmarks
Every neocloud will describe itself as enterprise-grade, so look for measurable operating data on infrastructure reliability, utilisation, power, cooling and the like. Consider independent benchmarks like ClusterMAX from SemiAnalysis for useful comparative transparency, Vasudevan urges. “Enterprises should press for hard numbers,” he says. “What is your measured cluster-level availability? How often do interruptions occur at 1,000-GPU scale? What does your SLA truly guarantee?”
Four or five nines availability is expected in traditional central processing unit (CPU) environments, he points out. However, large GPU clusters can experience multiple disruptive interruptions per day. Failures are part of operating at scale, but must be consistently and efficiently managed. “The second differentiator is diagnostics. When jobs slow down or fail, does the provider offer deep, actionable telemetry to isolate the problem quickly? Without strong observability, GPU hours are lost and ROI erodes,” says Vasudevan.
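The “nines” shorthand translates directly into a permitted downtime budget. This is the standard availability calculation, not a provider-specific figure.

```python
# Standard availability arithmetic: allowed downtime per year at a
# given number of "nines". Four nines permits under an hour a year,
# which a daily-interruption GPU cluster cannot meet without fast,
# automated failure handling.

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(nines):
    """Minutes of downtime per year allowed at `nines` nines of availability."""
    availability = 1 - 10 ** (-nines)
    return MINUTES_PER_YEAR * (1 - availability)

for n in (3, 4, 5):
    print(f"{n} nines -> {downtime_minutes(n):.1f} min/year downtime")
```

At four nines, the budget is roughly 53 minutes a year; at five nines, just over five. That gap between CPU-era expectations and GPU-cluster reality is exactly the SLA question Vasudevan says buyers should press on.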
Hyperscale won’t be going away. For organisations with multi-year cloud commitments and significant data gravity, there are financial and practical incentives to continue building within that environment. “Hyperscalers bring breadth. They offer a deeply integrated ecosystem of microservices – identity, databases, security, networking and observability – that already sits alongside an enterprise’s existing data estate,” adds Vasudevan.
CBRE’s dataset also recorded notable activity in the Nordics, where there are lower-cost renewable energy options. Power requirements may have more influence on the lease structures than square footage, and CBRE has also noted that neoclouds are attracting more interest where there are fewer hyperscale availability zones.
Kevin Restivo, director and head of datacentre research for Europe at CBRE, says that generally, colocation providers may be offering space to neoclouds under different terms than those offered to hyperscalers. “The deals we see in the market between neoclouds and datacentre providers are typically shorter in length,” he says. “And contract terms change depending upon the amount of capacity contracted.”
Meanwhile, rent increases of late have sometimes run well above inflation. So it can be worth paying a premium for shorter-term deals, the pay-off being greater flexibility and ability to migrate, as well as speed to market. “Neoclouds are trying to build out their infrastructure,” he says. “They need their kit in datacentres, and they need to do it quickly. Capacity is, as I like to say, an increasingly precious commodity in Europe and worldwide.”
Through 2026, the supply bottleneck for compute-intensive workloads looks set to persist, given the perceived demand for access to GPUs, he adds.
Of course, if that demand does not eventuate, there may be a need for fewer providers down the track. For now, neocloud will continue to play a key role in the datacentre landscape, by virtue of the capacity in train – that is, under construction for cloud purposes. “The real question is what enterprises make of AI services,” says Restivo. “Because there is great anticipation about investment in AI services on the part of European enterprises.”
For most enterprises to move forward and begin employing AI at scale, the markets are going to need to see more early adopters succeed, demonstrating benefits and productivity.
Do you like having a second screen with your computer setup? What if your laptop could carry a second screen for you? That’s the idea behind Lenovo’s latest proof of concept, the ThinkBook Modular AI PC, announced at Mobile World Congress in Barcelona.
At MWC 2026, Lenovo trotted out three concepts. While it’s unclear whether any of them will become real, purchasable products, there’s some unique utility here, and a peek at how computing experiences could change in the future.
A Laptop With a Built-In Portable Screen
The ThinkBook Modular AI PC has a second screen hanging magnetically off the back of the laptop, and it can show content to people sitting in front of you.
Photograph: Julian Chokkattu
This is with the second screen removed from the back and placed in front of the main display. The keyboard is removable and works via Bluetooth.
Photograph: Julian Chokkattu
As someone with a multi-screen setup at home and a fondness for portable monitors, the ThinkBook Modular AI PC appeals to me the most. At first glance, it looks like a normal laptop. Take a look behind, and you’ll notice there’s a second screen magnetically hanging off the back of the laptop, like a koala carrying a baby on its back.
The screen is connected to the laptop using pogo-pin connectors, so you can use it in this state to display content to people in front of you, say, if you were making a presentation during a meeting. Alternatively, you can pop this second screen off, remove a hidden kickstand resting under the laptop, and magnetically attach it to the 14-inch screen so that you have a traditional portable monitor experience. (You’ll need to connect this to the laptop via a USB-C cable in this orientation.)
If you don’t have the desk space for that orientation, you can always remove the keyboard from the base and pop the second screen there—it’ll auto-connect to the laptop via the pogo pins, and you’ll be able to use the Bluetooth keyboard to type on a dual-screen setup that resembles the Asus ZenBook Duo. The whole system is a fantastically portable method of improving productivity on the go, and the laptop isn’t too thick or cumbersome.