Coats Digital is delighted to announce that VTL Group, one of the largest vertically integrated textile manufacturers in the Mediterranean region, has adopted Coats Digital’s GSDCost solution to standardise production methods, increase productivity, and improve pricing accuracy across its Tunisian operations. The initiative is already showing a significant impact, with VTL reducing standard minute values (SMVs) by 15–20% and increasing line output by 10% across its three key sewing facilities.
With over 5,000 employees and 3,000 sewing machines across 90 sewing lines, VTL Group specialises in jersey knits and denim, producing up to 20 million garments per year for world-renowned brands such as Lacoste, Adidas, G-Star, Hugo Boss, Replay and Paul & Shark. The company operates six garment production units, along with dedicated facilities for screen printing, knitting, dyeing and textile finishing. This extensive vertical integration gives VTL complete control over quality, lead-times and cost-efficiency, which is vital for meeting the stringent demands of its global customer base.
VTL Group has adopted Coats Digital’s GSDCost to standardise production, boost productivity, and improve pricing accuracy across its Tunisian operations.
The solution cut SMVs by 15–20 per cent, raised line output by 10 per cent, and enhanced planning, cost accuracy, and customer confidence, enabling competitive pricing, lean operations, and stronger relationships with global fashion brands.
Prior to implementing GSDCost, VTL calculated capacity and product pricing using data from internal time catalogues stored in Excel. This approach led to inconsistent and inaccurate cost estimations, causing both lost contracts due to inflated production times and reduced margins from underestimations. In some cases, delays caused by misaligned time predictions resulted in increased transportation costs and operational inefficiencies that impacted customer satisfaction.
Hichem Kordoghli, Plant Manager, VTL Group, said: “Before GSDCost, we struggled with inconsistent operating times that directly impacted our competitiveness. We lost orders when our timings were too high and missed profits when they were too low. GSDCost has transformed the way we approach planning, enabling us to quote confidently with accurate, reliable data. We’ve already seen up to 20% reductions in SMVs, a 10% rise in output, and improved customer confidence. It’s a game-changer for our sales and production teams.”
Since adopting GSDCost across 50 sewing lines, VTL Group has been able to establish a reliable baseline for production planning and line efficiency monitoring. This has led to a more streamlined approach to managing load plans and forecasting. Importantly, GSDCost has given the business the flexibility to align pricing more effectively with actual production realities, contributing to greater customer satisfaction and improved profit margins.
Although it’s too early to determine the exact financial impact, VTL Group has already realised improvements in pricing flexibility and competitiveness thanks to shorter product times and better planning. These gains are seen as instrumental in enabling the company to pursue more strategic orders, reduce wasted effort and overtime, and meet the high expectations of leading global fashion brands.
Hichem Kordoghli, Plant Manager, VTL Group, added: “GSDCost has empowered our teams with reliable data that has translated directly into real operational benefits. We are seeing more consistent line performance, enhanced planning precision, and greater confidence across departments. These improvements are helping us build stronger relationships with our brand partners, while setting the foundation for sustainable productivity gains in the future.”
The company now plans to expand usage across an additional 30 lines in 2025, supported by a second phase of GSD Practitioner Bootcamp training to strengthen in-house expertise and embed best practices throughout the production environment. A further 10 lines are expected to follow in 2026 as part of VTL’s phased rollout strategy.
Liz Bamford, Customer Success Manager, Coats Digital, commented: “We are proud to support VTL Group in their digital transformation journey. The impressive improvements in planning accuracy, quoting precision, and cross-functional alignment are a testament to their commitment to innovation and excellence. GSDCost is helping VTL set a new benchmark for operational transparency and performance in the region, empowering their teams with the tools needed for long-term success.”
GSDCost, Coats Digital’s method analysis and pre-determined times solution, is widely acknowledged as the de facto international standard across the sewn products industry. It supports a more collaborative, transparent, and sustainable supply chain in which brands and manufacturers establish and optimise ‘International Standard Time Benchmarks’ using standard motion codes and predetermined times. This shared framework supports accurate cost prediction, fact-based negotiation, and a more efficient garment manufacturing process, while concurrently delivering on CSR commitments.
Key Benefits and ROI for VTL Group
15–20% reduction in SMVs across 50 production lines
10% productivity increase across key sewing facilities
More competitive pricing for strategic sales opportunities
Improved cost accuracy and quotation flexibility
Standardised time benchmarks for future factory expansion
Enhanced planning accuracy and load plan management
Greater alignment with lean and sustainable manufacturing goals
Increased brand confidence and satisfaction among premium customers
HOKA’s max-stacked Rocket X Trail combines road race shoe energy with boosted grip from a 3-mm lugged outsole. If you’re looking for a fast shoe to go on the attack, this is it. It’s also fantastic for all-round comfort. In testing, I laced up the Rocket X Trail and ran 3 hours (just short of 19 miles) fresh out of the box, across roads, forest gravel trails, some grass and through some serious water. It delivered efficiency and energy whether I was moving at marathon pace or with heavier, tired, ragged footfalls in the latter miles.
The rockered, supercritical midsole uses HOKA’s liveliest foam, similar to what you find in its race-ready road shoes, along with a carbon plate. The combination delivers a fun ride that’s smooth, springy, fast and remarkably consistent. It’s also highly cushioned, so you sacrifice a lot of ground feel for that big-stack springy softness, and it’s less stable over very lumpy terrain. But on open, flat, runnable mixed terrain, it’s excellent.
The lightweight uppers have a race-shoe-ready feel, and after running through ankle-deep flooded sections, they shed water really quickly. This is a pricey road-to-trail shoe, but it’s a versatile one, with plenty of winter road potential, too.
It’s always pleasing to see an array of physical buttons, and you get sizable ones too. You’re not going to miss these wide flat ones even when picking the pace up. The silicone strap has a nice stretch to it and while the button clasp is a bit awkward to get into place, this watch does not budge.
Suunto has jumped on the flashlight trend, with an LED light strip on the front of the case. You can adjust brightness levels, and there are SOS and alert modes that emit a very noticeable pulsating light pattern. I found the light useful when rooting around indoors as well as on nighttime outings.
The biggest change is the introduction of a 1.5-inch, 466 x 466 AMOLED display. This replaces the dull, albeit very visible, memory-in-pixel (MIP) display. Suunto also ditched the solar charging, which required spending a significant amount of time outside to reap any battery benefits.
Adding AMOLED screens to outdoor watches has been contentious. The older MIP displays are just more power-efficient. The Vertical 2 is down by about 10 days from the older Vertical for what Suunto calls daily use.
Still, even if you’re putting its tracking and mapping features to use, you’re not going to be reaching for the charger every few days. After two hours of tracking in optimal GPS mode, the battery only dropped by 2 to 3 percent. The battery drop outside of tracking is also small and the standby performance is excellent as well.
A more streamlined set of smartwatch features helps reserve battery for when it really matters. Unfortunately, some of my battery life gains probably came from the fact that you don’t get phone notifications or responses when the watch is paired to an iPhone rather than an Android phone. There’s also no onboard music player, but you do get a pretty slick set of music playback controls that are accessible during tracking.
Over the past few years, edge artificial intelligence (AI) has quickly transformed from a niche technology into a strategic necessity. This is mainly because it helps resolve or minimise some of the key bottlenecks of traditional cloud-based AI, including data volume, latency, privacy and cost, while allowing companies to make the instant decisions that modern, increasingly automated operations demand.
As a result, the deployment of edge AI is no longer only a technical architecture choice, but one that is actively reshaping risk, cost, compliance and responsibility for enterprises. Businesses are increasingly choosing to store sensitive information mainly on local networks, instead of relying on cloud providers, which has further driven the growth of edge AI.
Rather than asking whether or not to adopt edge AI, the crucial question for most companies is how to do so without creating new security, cost and governance issues. Because the technology is still relatively new, several companies risk implementing edge AI simply to jump on the AI bandwagon, without fully understanding which situations can benefit most from it.
“Edge AI attracts a lot of enthusiasm because it enables real-time, autonomous decisions. However, the real danger is a false sense of technological maturity,” notes Michaël Bikard, professor of strategy at the Insead business school. “Edge AI can work well locally while producing fragile outcomes at the system level. Historically, that’s when failures occur. Not because the technology fails, but because it is trusted too early, before institutions, organisations and governance are ready.”
As such, understanding the consequences of edge AI deployment is paramount to deciding long-term strategy.
Why businesses are moving from cloud-first to hybrid
Businesses are increasingly choosing a more hybrid AI approach over a cloud-first strategy, driven mainly by larger and more complex AI workloads. Many firms have also been disappointed by the savings achieved by adopting a full public cloud strategy, instead being faced with sharply surging operational costs.
These costs, exacerbated by data-heavy applications, mainly arose from moving large datasets to and from the cloud and between providers. Surprise fees and unpredictable bills have further strained IT budgets and complicated forecasting.
Edge AI, on the other hand, lets companies run stable, predictable workloads on-premise far more cheaply than in the cloud.
Latency is another overarching concern. Edge AI is often better than the cloud at minimising latency for applications that need real-time, high-speed processing, such as operational control systems and local analytics.
In highly regulated industries such as finance and healthcare, some data may only be stored within certain jurisdictions, which has further driven the shift to edge AI or on-premise solutions.
Relying on a single major cloud provider can also mean supplier lock-in, while multicloud environments are increasingly complicated to manage, which also pushes companies towards hybrid approaches.
A hybrid strategy lets companies use public cloud to train and update applications which need to scale fast, while keeping high-volume, sensitive or stable data on-premise. This allows organisations to balance agility, cost efficiency and operational resilience, especially in a global context where real-time intelligence is increasingly valuable.
Edge AI business drivers: What’s real and what’s noise
At present, most businesses using edge AI have adopted the technology due to practical operational needs. Successful deployments have focused on solving specific, cloud-only limitations, rather than trying to overhaul entire company tech infrastructures.
The need for real-time decision-making has been the primary driver of edge AI adoption, particularly in sectors like infrastructure, logistics, manufacturing and transport, where latency can have far-reaching operational and financial consequences that the technology can significantly reduce.
Applying edge AI in these sectors lets companies process data closer to where it is generated, enabling them to react faster even when central connectivity is lost.
The technology also helps organisations dealing with sensitive data stay legally and financially compliant in jurisdictions with especially strict data storage laws.
For companies working on critical operations, edge AI can greatly improve operational resilience by ensuring that data and intelligence are distributed across multiple locations. This reduces dependence on centralised systems, which in turn decreases the impact of outages.
However, some business drivers for implementing edge AI are vastly overestimated. The biggest of these is short-term cost savings. Edge AI can certainly cut transfer and cloud data consumption costs in the long run.
However, it initially needs significant capital expenditure, mainly in the form of hardware device upgrades. There are also ongoing maintenance, monitoring and software update costs following implementation. In some cases, integration with legacy systems may be slower than expected and businesses may have to hire specialised labour as well. Edge AI systems also use considerable amounts of power, leading to higher energy bills.
These factors can all cause costs to be higher in the first few months, requiring businesses to have a long-term view when it comes to seeing strategic benefits from edge AI.
Another often-overestimated notion is that edge AI can deliver anything like “super-intelligence” by running huge, complicated models the way datacentre graphics processing units do. Given current computing and power restrictions, however, this scenario is highly unlikely in most cases at the moment.
Similarly, expectations of businesses being able to switch entirely to edge AI, instead of a hybrid approach, are also unrealistic, mainly because of practical deployment, integration and maintenance limitations across various locations.
How edge AI is changing security, governance and ownership
As edge AI becomes more embedded in hybrid business tech strategies, risk management, enterprise security and governance are also changing, moving away from centralised IT control. These areas are now being shaped by local operational teams taking increasingly autonomous decisions, factoring in the real-time conditions of critical physical infrastructure.
Rising edge AI usage could heighten security concerns as well, as it widens organisational attack surfaces through multiple distributed devices and infrastructure. These then need to be protected, monitored and updated equally, following a set of standard guidelines, despite each of them presenting their own unique limitations.
“AI systems can perform exceptionally well under conditions similar to their training data, yet fail abruptly under rare, extreme, or novel scenarios – precisely the situations that matter most in critical infrastructure,” remarks Florian Stahl, chair of quantitative marketing and consumer analytics at Mannheim Business School.
Patch management can pose more issues with edge AI as well, with thousands of endpoints and vulnerabilities causing potential delays and discrepancies in maintenance.
Because edge AI is all about local deployments, more questions can arise around version control, oversight and auditing. This means that companies may need to maintain more in-depth and regular records about data inputs, decision-making processes and operational factors. Highly regulated industries may especially demand evidence trails and seek greater accountability, which can impact company reputations and licences.
“Real-time AI systems, particularly those based on machine learning, often operate as ‘black boxes’, making it difficult to explain or audit decisions when failures occur. This lack of transparency is problematic in infrastructures where accountability and post-incident analysis are essential,” Stahl adds.
As autonomous decisions taken locally can have very real financial, safety and compliance consequences, businesses may be compelled to take accountability far more seriously if they choose to use edge AI.
Senior leadership may also need to adapt centralised organisational and governance models to a more distributed intelligence strategy, all while keeping costs low.
These factors have made edge AI as much a structural change as a technical one, affecting how and where decisions are taken, how risk is evaluated, and where accountability lies.
What leaders should consider before implementing edge AI
Given the considerable initial investment most edge AI deployments require, leaders should prioritise long-term strategic impact over the hype of the latest technology. This means that when evaluating company readiness, the potential scope of the intended edge AI model is as important as timing.
The biggest factor to consider is which processes or systems are most likely to benefit from edge AI first, and which can wait a few more months. Ideally, businesses should prioritise processes where latency, operational risk and data locality are most critical. By doing this, organisations can spread out costs while testing new deployments in a relatively low-risk way.
“Importantly, organisations should evaluate AI deployments not only through efficiency metrics, but also through risk-adjusted performance indicators, recognising that marginal efficiency gains are rarely justified if they introduce disproportionate systemic or ethical risks,” Stahl advises.
The next question is: to scale or not to scale? In many cases, a pilot edge AI deployment either suffices for the short term, fails to deliver the expected results, or exposes hidden costs and operational issues.
In these cases, decision-makers need to evaluate whether it is worth taking the risk to scale, which will need more investment, specialised skills and manpower.
However, knowing when not to use edge AI, and when it could cause more harm than good, is equally important for businesses. This is primarily in cases where data volumes are still low, latency is not crucial, or the company does not have the means to appropriately handle several distributed endpoints.
“Edge AI should not be deployed in sectors where use cases are broad, stakes are high, and the consequences of errors are poorly understood,” Insead’s Bikard states. “That combination usually signals a timing problem rather than a technological one. In open, highly interconnected environments, even small mistakes can cascade before organisations have time to respond.”
In such cases, exercising strategic restraint contributes far more to long-term value.
From tech choice to organisational shift
Ultimately, implementing edge AI models should be primarily focused on delivering long-term, strategic value, rather than a trend-based decision. This is especially true if latency and real-time data analysis pose real risks. Businesses need to consider that edge AI use is likely to reshape everything from cost structures and decision-making to autonomy and risk, and prepare accordingly.
“There are real potential gains from using AI for predictive maintenance, but those gains rarely come from the technology alone. For AI to pay off, the surrounding organisation – its incentives, culture, structures and skills – must also adapt. Predictions only create value if people are empowered to act on them,” Bikard concludes.
Enterprises that treat edge AI as a full operational shift, rather than an independent feature tacked onto legacy systems, will be best placed to take advantage of it in the long run.