UK copyright unfit for protecting creative workers from AI | Computer Weekly
Widespread concern about the use of creative works to train artificial intelligence (AI) systems has prompted the UK government to begin exploring how the country’s copyright rules can be changed to satisfy the complex, often conflicting demands of both the creative and tech sectors.

As it stands, the government is due to publish on 18 March 2026 a report and impact assessment of each of the four options set out in a previous consultation, which ran from December 2024 to February 2025.

The options being assessed include keeping copyright and related laws as they are; strengthening copyright to require licences in all cases; implementing a broad data mining exemption for AI companies; or creating a more limited data mining exemption that allows copyright holders to reserve their rights, underpinned by measures to promote and support greater transparency from developers.

However, given structural imbalances within existing copyright markets – which favour giant corporations over individual creators – it is unclear to what extent the AI-related reforms to the UK intellectual property rules being considered will help creative workers themselves.

Creators vs AI developers

Questions around the use of creative works to train AI systems have become one of the most intense areas of debate since the advent of generative AI (GenAI) and large language models (LLMs) with the release of OpenAI’s ChatGPT in November 2022.

In particular, the debate has focused on what it means for existing copyright protections and the livelihoods of creators, who have expressed concern over the unauthorised use of their works to train AI models.

Aside from a lack of transparency from AI companies about the data included in their training corpuses, creatives have variously complained about the absence of enforceable mechanisms to protect their copyrighted works within the context of scraping at scale, as well as the impacts of AI on creative job markets and competition.

For AI companies, on the other hand, access to vast amounts of high-quality data is of paramount importance, particularly when it comes to the development of LLMs such as Claude, ChatGPT or Gemini.

A submission to the US Copyright Office, made on 30 October 2023 by Amazon- and Google-backed LLM developer Anthropic, is indicative of how these firms view their use of copyrighted material, and how integral they consider it to be for creating generative AI models.

“To the extent copyrighted works are used in training data, it is for analysis (of statistical relationships between words and concepts) that is unrelated to any expressive purpose of the work,” it said. “This sort of transformative use has been recognised as lawful in the past and should continue to be considered lawful in this case.”

It added that using copyrighted works to train its Claude model would count as “fair use”, because “it does not prevent the sale of the original works, and, even where commercial, is still sufficiently transformative”.

As part of a separate legal case brought against Anthropic by major music publishers in November 2023, the firm took the argument further, claiming “it would not be possible to amass sufficient content to train an LLM like Claude in arm’s-length licensing transactions, at any price”.

It added that Anthropic is not alone in using data “broadly assembled from the publicly available internet”, and that “in practice, there is no other way to amass a training corpus with the scale and diversity necessary to train a complex LLM with a broad understanding of human language and the world in general”. 


“Any inclusion of plaintiffs’ song lyrics – or other content reflected in those datasets – would simply be a byproduct of the only viable approach to solving that technical challenge,” it said.

It further claimed that the scale of the datasets required to train LLMs is simply too large for an effective licensing regime to operate: “One could not enter licensing transactions with enough rights owners to cover the billions of texts necessary to yield the trillions of tokens that general-purpose LLMs require for proper training. If licences were required to train LLMs on copyrighted content, today’s general-purpose AI tools simply could not exist.”

While the submission and the court case are specific to the US context, the application of “fair use”-style exemptions to copyright is not dissimilar in the UK. Under current UK copyright laws, original works are automatically protected upon their creation, giving creators exclusive rights to copy, distribute, perform or adapt their work.

There are, however, limited exemptions that allow the “fair dealing” of copyrighted material for the purposes of, for example, research, criticism, review and reporting. A further exemption was added in 2014, allowing text and data mining for purely non-commercial research purposes.

As it stands, unless one of these exemptions applies, AI companies would therefore need to obtain permission from copyright holders to use such works in their models’ training data.

UK government consultation backlash

According to a previous UK government consultation on the matter, which closed in February 2025, “the application of UK copyright law to the training of AI models is disputed”.

It said that while rights holders are finding it difficult to control the use of their works in training AI models, and are seeking to be remunerated for their use, AI developers are similarly finding it difficult to navigate copyright law in the UK. It noted that “this legal uncertainty is undermining investment in and adoption of AI technology”.

In an attempt to solve the dispute, the UK government proposed a new policy in late 2024 that would allow AI companies to train their models on copyrighted works unless rights holders explicitly opted out. This means that, rather than requiring AI companies to seek permission from rights holders for the use of their work, the burden would be placed on the creators themselves to actively object.

The opt-out proposal provoked significant backlash from creatives, who viewed it as too conciliatory to the narrow interests of tech companies. Of the more than 10,000 people who responded to the government’s consultation on these measures, just 3% backed its opt-out proposal, while 95% called for copyright to be strengthened, for licensing to be required in all cases, or for no change to current copyright law.

Others cited issues around the practicality of such proposals, noting that in the context of the current digital landscape – where copyrighted content is scraped at scale and included in training datasets, often without attribution – it may be impossible for someone to know when their work has been used, let alone opt out.

In the wake of this widespread opposition, the UK government has since committed to exploring a licence-first system that would require AI companies to seek explicit permission from creatives and provide them with compensation.

Balancing interests?

A year later, in December 2025, technology secretary Liz Kendall told Parliament there was “no clear consensus” on the AI-copyright issue, saying that the government would “take the time to get this right” while promising to make policy proposals by 18 March 2026.

“Our approach to copyright and AI must support prosperity for all UK citizens, and drive innovation and growth for sectors across the economy, including the creative industries,” she said. “This means keeping the UK at the cutting edge of science and technology so UK citizens can benefit from major breakthroughs, transformative innovation and greater prosperity. It also means continuing to support our creative industries, which make a huge economic contribution, shape our national identity and give us a unique position on the world stage.”

While government rhetoric on AI and copyright has revolved around the need to support both the UK’s creative and tech sectors, there is a sense that – so far at least – it is prioritising the latter in its ambition to make the country a tech superpower.

Beeban Kidron, a crossbench peer and former film director, for example, has previously described the use of copyrighted material by AI companies as “state-sanctioned theft”, claiming ministers would be “knowingly throwing UK designers, artists, authors, musicians, media and nascent AI companies under the bus” if they don’t move to protect their output from being harvested by AI firms.

Owen Meredith, chief executive of the News Media Association, has also previously urged the UK government to rule out any new copyright exception. “This will send a clear message to AI developers that they must enter into licensing agreements with the UK’s media and creative copyright owners, unlocking investment and strengthening the market for the high-quality content that is the most valuable ingredient in producing safe, trustworthy AI models,” he said.

Ed Newton-Rex, a prominent commentator on AI and intellectual property, has also criticised the balance of the UK government’s approach, noting that while the government described its consultation proposals at the time as a “win-win”, “this is very far from the truth. It would be a huge coup for AI companies, and the most damaging legislation for the creative industries in decades”.

He added that a broad copyright exception allowing unlicensed training on copyrighted works “would hand the life’s work of the UK’s creators to AI companies, letting them use it to build highly scalable competitors to those creators with impunity”.


AI companies, of course, disagree. In its October 2023 submission to the US Copyright Office, Anthropic argued that requiring licences would be inappropriate, as it would lock up access to the vast majority of works and benefit “only the most highly resourced entities” that are able to pay their way into compliance.

“Requiring a licence for non-expressive use of copyrighted works to train LLMs effectively means impeding use of ideas, facts and other non-copyrightable material,” it said. “Even assuming that aspects of the dataset may provide greater ‘weight’ to a particular output than others, the model is more than the sum of its parts. Thus, it will be difficult to set a royalty rate that is meaningful to individual creators without making it uneconomical to develop generative AI models in the first place.”

Others from the tech sector have also argued that diverging from other jurisdictions too greatly – for example, by implementing a UK-specific licensing arrangement preferred by the creative sector, or requiring firms to disclose detailed data inputs – would simply mean AI companies avoid deploying in the UK.

Trade association TechUK, for example, argued – in the context of AI and copyright amendments to the government’s Data (Use and Access) Bill, which would have forced developers to publish their training corpuses but were ultimately not included in the final Act of Parliament – that departing too much from existing UK and international frameworks would risk companies being “discouraged from operating, training and deploying AI products and models in the UK”.

This was also recognised by the government in its consultation, which noted that requiring licences in all cases “is highly likely to make the UK significantly less competitive compared to other jurisdictions – such as the EU and US – which do not have such restrictive laws. This would make the UK a less attractive location for AI development, reducing investment in the sector. In doing so, it may not actually increase the level of licensing undertaken by AI firms.”

It added that models trained in other jurisdictions that do not meet any new UK standards may be difficult to restrict from the UK market, risking some of the most capable AI models not being made available in the UK: “This would significantly limit innovation, consumer choice and wider benefits of AI adoption across the UK economy.”

The technical caveats of copyright law

Under UK copyright law, it should be noted that creating “transient copies” of works is allowed if certain conditions are met. These include that the copy is not permanent and serves a brief, ancillary purpose; that it is a necessary step in a technological process; that its only goal is enabling lawful use or network transmission; and that the copy itself doesn’t hold separate commercial value.

Looking at AI model training processes – which often, but not always, retain only a very small portion of each training item – suggests it would be technically wrong to assert that a copyright infringement has taken place, as Anthropic has argued in the US context.

However, this doesn’t mean that a model would never infringe copyright, as it is also technically possible for most models to “memorise” copyrighted works, turning a transitory copy into a permanent, infringing one.  

Although the specifics of whether a particular model or AI-generated output infringes the current copyright regime will be hashed out in individual court cases, some have argued that looking to copyright to solve the tension between creatives and AI companies is a non-starter.

Copyright unfit, even without AI

While there is a clear consensus among UK creatives for a new licensing regime to protect their works from being stolen by AI companies, it would need to avoid repeating the dynamics of the current intellectual property law, which itself receives criticism for creating monopolies, stifling creativity, and disproportionately benefitting large corporations over individual creators and the wider public. 

In their book Chokepoint capitalism: How big tech and big content captured creative labor markets and how we’ll win them back, for example, authors Cory Doctorow and Rebecca Giblin argue that while the past 40 years have been spent elaborating international copyright rules, the financial benefits have largely accrued to big business rather than creators themselves, whose share of growing entertainment industry profits has declined in that time.

In essence, their argument is that expanding copyright is very unlikely to protect the jobs or incomes of already underpaid creatives, who have themselves been exploited by entertainment behemoths wielding copyright laws against them for decades.

In their 2024 book Who owns this sentence? A history of copyrights and wrongs, authors Alexandre Montagu and David Bellos similarly argue that copyright protections – which were originally intended to protect the livelihoods of individual creators – have since been transferred to giant corporations, which use them to extract a form of “rent” from consumers globally, while locking the employees who helped create the IP out of ownership and its consequent benefits.

It follows, then, that there is little reason to believe these same companies will treat their creative workers any more fairly if they receive compensation from AI companies as copyright holders.

To alter this dynamic, Doctorow and others argue, would require changing the very structure of creative labour markets so that the benefits accrue to creatives, rather than to the large corporations that essentially run “tollbooths” to facilitate and control access to creatives’ work, which in turn allows them to extract disproportionately high profits for themselves.

Writing for the Electronic Frontier Foundation (EFF) in February 2025, Tori Noble argued that “expanding copyright will not mitigate” the harm to creative workers, and that “what neither Big Tech nor Big Media will say is that stronger antitrust rules and enforcement would be a much better solution”.

She added that looking beyond copyright can future-proof protections, including stronger environmental protections, comprehensive privacy laws, worker protections and media literacy, adding: “[This will] create an ecosystem where we will have defences against any new technology that might cause harm in those areas, not just generative AI. Expanding copyright, on the other hand, threatens socially beneficial uses of AI – for example, to conduct scientific research and generate new creative expression – without meaningfully addressing the harms.”

Collective copyright and labour law

As it currently stands, the UK government looks to be on course to introduce a new licensing regime for AI companies’ use of copyrighted materials. Observers have said this would need to include mechanisms that allow creators to identify when and how their works are used, as well as to object or seek compensation as they see fit.

However, given the clear tensions that already exist between individual and corporate copyright holders, even a licensing regime could still disproportionately benefit the latter. It could also disproportionately benefit large AI developers, as the pool of actors with the ability to pay for enough copyright licences to effectively train a model is vanishingly small.

The use of AI in creative endeavours throws up further issues around labour and competition: even if creators received compensation for the use of their copyrighted material, AI’s entire development is underpinned by a neoliberal logic of austerity. This means that, in the current political-economic context, those with the decision-making power to deploy AI largely do so because it allows them to cut labour costs – the biggest overhead for any capitalist enterprise.


In November 2024, data published by the Harvard Business Review showed the impact that generative AI models were already having on labour markets, highlighting how creatives will essentially end up competing with the very models that ingest their data. Specifically, it showed that the introduction of ChatGPT decreased writing and coding jobs by 30% and 20% respectively, while AI image generators similarly decreased image creation jobs by 17%.

Given the sheer scale at which models ingest data, it is not hard to see how creatives – even with a licensing regime in place – could be undermined by bosses who would rather pay for a relatively cheap corporate licence to an AI model, rather than the comparatively expensive labour of human beings.

In the tech sector itself, firms globally have been busy cutting their workforces as they look to increase spending on and investment in AI tools. In October 2025, Amazon laid off 14,000 employees, a decision that was specifically prompted and enabled by the firm’s AI investments.

While many argue that the advent of AI is inevitable, its impacts are certainly not. In November 2023, for example, the Autonomy think tank in the UK argued that while automating jobs with LLMs could lead to significant reductions in working time without a loss of pay or productivity, realising the benefits of AI-driven productivity gains in this way will require concerted political action.

The think tank added this was because it is clear that productivity gains are not always shared evenly between employers and employees, and depend on “geographic, demographics, economic cycle and other intrinsic job market factors” such as workers’ access to collective bargaining.

To deliver positive AI-led changes for workers and not just employers, Autonomy recommended setting up “automation hubs”, underpinned by trade union and industry agreements, to boost the adoption of LLMs in ways that are equitable.

In the context of the creative industries and copyright, a similar situation has already played out with the 2023 Hollywood writers’ strike, whose collective sector-wide action ended with an agreement from studios that AI cannot be used to write or rewrite scripts, and which gave writers the ability to prohibit the use of their writing in model training.

Instead of relying purely on copyright law – which has historically been wielded against individual creatives by entertainment and media companies – the answer may be found in building up collective copyright mechanisms and improving the underlying labour protections for creative workers, to stop them being ripped off by companies, with or without the help of AI.



OpenAI Really Wants Codex to Shut Up About Goblins


OpenAI has a goblin problem.

Instructions designed to guide the behavior of the company’s latest model as it writes code have been revealed to include a line, repeated several times, that specifically forbids it from randomly mentioning an assortment of mythical and real creatures.

“Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query,” read instructions in Codex CLI, a command-line tool for using AI to generate code.

It is unclear why OpenAI felt compelled to spell this out for Codex—or indeed why its models might want to discuss goblins or pigeons in the first place. The company did not immediately respond to a request for comment.

OpenAI’s newest model, GPT-5.5, was released with enhanced coding skills earlier this month. The company is in a fierce race with rivals, especially Anthropic, to deliver cutting-edge AI, and coding has emerged as a killer capability.

In response to a post on X that highlighted the lines, however, some users claimed that OpenAI’s models occasionally become obsessed with goblins and other creatures when used to power OpenClaw, a tool that lets AI take control of a computer and apps running on it in order to do useful things for users.

“I was wondering why my claw suddenly became a goblin with codex 5.5,” one user wrote on X.

“Been using it a lot lately and it actually can’t stop speaking of bugs as ‘gremlins’ and ‘goblins’ it’s hilarious,” posted another.

The discovery quickly became its own meme, inspiring AI-generated scenes of goblins in data centers, and plug-ins for Codex that put it in a playful “goblin mode.”

AI models like GPT-5.5 are trained to predict the word—or code—that should follow a given prompt. These models have become so good at doing this that they appear to exhibit genuine intelligence. But their probabilistic nature means that they can sometimes behave in surprising ways. A model might become more prone to misbehavior when used with an “agentic harness” like OpenClaw that puts lots of additional instructions into prompts, such as facts stored in long-term memory.

OpenAI acquired OpenClaw in February not long after the tool became a viral hit among AI enthusiasts. OpenClaw can use any AI model to automate useful tasks like answering emails or buying things on the web. Users can select any of various personae for their helper, which shapes its behavior and responses.

OpenAI staffers appeared to acknowledge the prohibition. In response to a post highlighting OpenClaw’s goblin tendencies, Nik Pash, who works on Codex, wrote, “This is indeed one of the reasons.”

Even Sam Altman, OpenAI’s CEO, joined in with the memes, posting a screenshot of a prompt for ChatGPT. It read: “Start training GPT-6, you can have the whole cluster. Extra goblins.”



Elon Musk Testifies That He Started OpenAI to Prevent a ‘Terminator Outcome’


Elon Musk and Sam Altman appeared in a federal courtroom together for the first time on Tuesday as they fight over OpenAI’s decade-long evolution and what it means for the company’s future.

The trial in Musk’s lawsuit against Altman could result in financial damages and, more significantly, governance changes at OpenAI that may complicate its plans for an initial public offering as soon as this year.

As the first witness on the stand, Musk immediately sought to frame his case as more than just about OpenAI. Siding with Altman “will give license to looting every charity in America” and shake the “entire foundation of charitable giving,” Musk told a panel of nine jurors advising US District Judge Yvonne Gonzalez Rogers on how to rule.

Musk has been concerned about computers becoming smarter than people “since he was a young man in college,” his attorney Steven Molo told jurors. Molo explained that Musk lobbied governments to pass regulations addressing the prospect of so-called artificial general intelligence, including meeting with then-President Barack Obama in 2015. “But the government was not stepping up,” Molo said. “Elon felt he had to do something.”

Around the same time, Musk met with Altman, a then-30-year-old investor “whom he didn’t know very well,” Molo said. They soon launched OpenAI together as a nonprofit. Google’s unchecked progress on AI development had sparked concerns for both OpenAI cofounders, and they wanted to create a competing lab with a greater focus on safety. “My perspective is [OpenAI] exists because Larry Page called me a speciesist for being pro-humanity,” Musk said, referring to the Google cofounder. “What would be the opposite of Google? An open-source nonprofit.”

While Musk believes AI could cure diseases and generate prosperity for humanity, he also told the court that he thinks the technology could veer off into catastrophic scenarios straight out of science fiction. “It could also kill all of us … the Terminator outcome. I think we want to be in a movie … like Star Trek, not a James Cameron movie,” Musk said. (While Musk has long raised alarms about AI safety, his current firm, xAI, has been criticized by researchers at other AI labs for its “reckless” safety culture.)

As OpenAI began notching some of its own successes, Musk and Altman agreed that a for-profit arm with fixed returns for investors was necessary to raise extraordinary sums of money needed to fund hiring and computing, according to Molo. He compared it to a nonprofit museum that receives some proceeds from a for-profit store. “I was not opposed to there being a small for-profit as long as the tail didn’t wag the dog,” Musk said on the stand.

Musk felt that the approach had gone too far when Microsoft, another defendant in the trial, agreed to invest $10 billion in 2023, and OpenAI increasingly moved intellectual property and staff to the for-profit company. “The museum store sold the Picassos so they were locked up where no one could see them,” Molo said.

OpenAI’s Rebuttal

William Savitt, an attorney for OpenAI, told jurors that OpenAI never promised Musk that it would remain a nonprofit and publish all its code. “The evidence here will show what Musk says happened did not happen,” Savitt said.

He added that Musk knew about plans to raise corporate investment exceeding $10 billion as far back as 2018. Musk even raised concerns about Microsoft’s involvement in a 2020 tweet. But he didn’t file a lawsuit until he founded a competitor, xAI, in 2023.



Sniffies’ Users Worry About a ‘Straightification’ of the Gay Hookup App


Of all the gay hookup apps Brennan Zubrick uses, Sniffies, a cruising app for men interested in discreet sex-positive casual encounters with other men, is by far his favorite. Some of the most popular kinks among members on the platform include edging, cum play, and BDSM. “I overwhelmingly prefer the experience I get and the community I can access,” he tells WIRED. But Zubrick, who is 40 and based in Washington, DC, has a bad feeling that could soon change.

Tinder and Hinge parent company Match Group announced on Monday an investment of $100 million into Sniffies. The deal gives Match Group a large minority share and the option to become the sole owner later on. The announcement has set off a firestorm of reactions from users who are second-guessing the direction of the company and the long-term sustainability of the app.

“Sniffies has long held its market position as the little guy, catering to a specific section of the gay community, and is somewhere people who might not be comfortable with Grindr—where no-face-pic, no-chat culture runs rampant—go to connect with other like-minded people in a more direct and discreet way,” Zubrick tells WIRED.

“This partnership is about supporting that, not redefining it,” Sniffies founder and CEO Blake Gallagher said in a statement, noting that the investment will help the platform focus on three key areas users want: “stronger trust and safety, expansive network growth, and continued product improvements.” According to the agreement, Match Group will offer guidance on the right roles, procedures, and tech to help Sniffies build on its trust and safety efforts.

But users aren’t buying what Gallagher is selling. The Instagram post announcing the news was inundated with negative reactions, as users expressed worry over the strategic partnership. “Please don’t let this be the straightification of sniffies,” expressed one. “You sold out. Plain and simple. Where we moving to next boys?” added Marc Sundstrom, a user in Philadelphia. “Partnering with Match feels very gentrified and straight. Highly concerned about the app being allowed to be what it is in order to court investors,” wrote another. By Tuesday afternoon, comments on the post had been shut off.

Though it remains to be seen how Gallagher will position Sniffies in the months ahead, already users are saying this marks the beginning of the end for the app. “Straight people shouldn’t even know what Sniffies is for fuck sake,” one wrote in the r/askgaybros subreddit. And despite promises, some say a major corporation like Match is not ethically aligned with the indie spirit of Sniffies. On LinkedIn, the top comment under Gallagher’s post questioned the real intent behind Match Group’s investment. “Interested to see how ties to Palantir affect Sniffies’ growth. Hopefully this doesn’t become a surveillance application.”

Spencer Rascoff, who became CEO of Match Group in 2025, previously served on the board of Palantir, the defense tech and data mining company that has become a “technological backbone” of the Trump administration.

Sniffies maintains that it will continue to own and control how its user data is stored, handled, and protected. According to the company, there are no changes planned to its data practices as part of the investment.

But the outrage underscores the significance of platforms like Sniffies and what it would mean to a community of people who already feel like they have so few quality options for seeking desire online.

“It’s a mess and obviously to be expected. It’s definitely an indicator of its fast rise, so no shade, but we saw what happened with Grindr,” says Brad Allen, a 34-year-old event producer and the creator behind Club Quarantine, who joined Sniffies in 2023. “I really am pulling for them to somehow navigate this differently since it’s essential to the cruising community now. Hopefully the pop-up Candy Crush ads don’t light up too much in the bushes.”




