What legal challenges does the fashion industry face in the age of generative AI?
Published December 10, 2025
From safeguarding intellectual property to securing its own use of artificial intelligence, the fashion industry is still finding its feet with AI. Unsurprisingly, the topic took centre stage at the Assises Juridiques de la Mode, du Luxe et du Design, held in Paris on December 9 and organised by Lexposia.
“In 2024, we submitted 2.5 million reports of counterfeit content to platforms,” explained Nicolas Lambert, LVMH’s director of online brand protection. “That’s nothing new, but AI has made it increasingly easy to generate infringing content. At the moment, for example, we’re seeing a proliferation of online ads for counterfeit Advent calendars from Sephora, Dior and other group brands.”
Alexandre Menais, general counsel for the L'Oréal group, also testified to this acceleration. In his view, the growing presence of this new technology calls for fresh thinking about interactions between the company and the machine, and in particular about how those interactions are used.
“With an intelligent agent, the question arises of who owns that interaction,” stressed the legal expert. “One of the risks I see is that the rules companies set, which mandate the use of closed AI, will be widely flouted. Many employees will be tempted to test AI outside the established framework.”
Christiane Féral-Schuhl, a lawyer specialising in this field, identified this risk as well. For the former bar chair and former president of the Conseil National des Barreaux, it is urgent to raise employees’ awareness of the differences between a closed AI, trained on creations and data for which rights‑holders have given their consent, and an open AI system. The latter dispenses with rights‑holders’ consent by relying on the “text and data mining” (TDM) exception.
“These AIs are ogres that swallow up all this ‘training data’, and to counter this you can build your own AI system, using protected data within a controlled framework. If an employee prefers to use an open system, they feed the machine and, in effect, share their work and creations with others — including their competitors — who may exploit it to produce infringing works.”
Féral-Schuhl also emphasised the questions to be asked of AI tool suppliers. Some stipulate in their terms that a customer’s work may be used to improve the service for all customers — which, in a creative context, should obviously be prohibited.
Frédéric Rose runs IMKI, which designs bespoke generative AI for brands such as The Kooples and G-Star. The specialist notes that AI is becoming more sophisticated. "It will soon be able to draft patterns and technical execution files," he estimates. "It's already getting more and more precise, and is becoming capable of specifying materials, fabric weights or stitching types."
This level of detail now makes it possible to spot counterfeits — for rights‑holders and consumers alike.
“Some AIs have safeguards and refuse to respond, but others give you suggestions on where to find the best dupes,” said Lambert. “Between the AI and the customer, it’s a private channel that I can’t investigate. But maybe tomorrow AI will be able to identify suspicious behaviour. Perhaps we need to imagine, as with YouTube, a DMCA‑style mechanism (a rights‑holder takedown mechanism, editor’s note) preventing an AI from pointing users to a counterfeit product.”
“And if AI is exploited for creative purposes, we also need to define red lists of iconic elements, specific signatures, which could lead a creation to resemble that of an established brand,” said Féral-Schuhl.
She also points to the emergence of watermarking of the data used to train AI, which could in time benefit from copyright protection and prevent its use in AI agents' creative processes. This comes on top of "information tagging", which records the date and place of AI-generated creations.
Hugo Weber, vice-president of French unicorn Mirakl, which develops marketplaces for major retailers, spoke for his part about the contribution AI could make to already highly efficient algorithms.
"Amazon Prime is not a logistics issue: if you're delivered the next day, it's because in 95% of cases your purchase was already in transit, because the algorithm is very efficient," summarised the specialist.
He also cautioned against turning the Shein case into a trial of marketplaces, pointing out that European, American and Chinese players all have different notions of responsibility.
The Shein case was also raised by Benoît Loutrel, chair of the online platforms working group at ARCOM (Autorité de Régulation de la Communication Audiovisuelle et Numérique).
“We’re moving from preventive action by regulators to enforcement action by the courts. I think that the next stage will involve civil law, particularly in the case of artificial intelligence,” said the specialist.
Faced with the rise of ARCOM equivalents in other European countries, he hopes to see French digital sovereignty anchored within the broader European Union framework now taking shape.
Copyright © 2025 FashionNetwork.com All rights reserved.