
Google is training its AI tools on YouTube videos: These creators aren’t happy



Credit: Pixabay/CC0 Public Domain

Santa Ana, California-based entrepreneur Charlie Chang spent years posting finance videos on YouTube before he made a profit.

Today, Chang’s media business oversees more than 50 YouTube channels, along with other digital sites, and generates $3 million to $4 million in annual revenue, he said.

But lately, he’s been faced with a new concern: that YouTube’s moves in artificial intelligence will eat into his business.

“The fear is there, and I’m still building the channels, but I am preparing, just in case my channels become irrelevant,” Chang, 33, said. “I don’t know if I’m gonna be building YouTube channels forever.”

YouTube’s parent company, Google, is using a subset of the platform’s videos to train AI applications, including its text-to-video tool Veo. That includes videos made by users who have built their livelihoods on the service and helped turn it into the biggest streaming entertainment provider in the U.S.

The move has sparked deep tensions between the world’s biggest online video company and some of the creators who helped make it a behemoth. Google, creators say, is using their data to train something that could become their biggest competitor.

The schism comes at a pivotal time for Google, which is in a race with rivals including Meta, OpenAI and Runway for dominance in the market for AI-driven video programs. Google has an advantage due to YouTube’s huge video library, with more than 20 billion videos uploaded to its platform as of April.

Many creators worry such tools could make it easier for other people to replicate the style of their videos by typing in text prompts that generate images or concepts similar to what popular creators make. What if AI-generated videos became more popular than their material? Creators say they can’t opt out of AI training and that Google does not compensate them for using their videos this way.

“It makes me sad, because I was a big part of this whole creator economy, and now, it’s literally being dismantled by the company that built it,” said Kathleen Grace, a former YouTube employee who is now chief strategy officer at Vermillio, a Chicago-based company that tracks people’s digital likenesses and intellectual property.

“I think they should be with pitchforks outside San Bruno.”

YouTube, founded in 2005, was built on creators posting content. At first, the user-generated videos were amateurish. But eventually, creators got more sophisticated and professional, doing more elaborate stunts and hiring staff to support their productions.

Key to YouTube’s early success was its investment in its video creators. The San Bruno, California-based company shares ad revenue with creators, and those payouts can be substantial. That business model has kept creators loyal to YouTube: as they grew their audiences, advertising revenue increased for both YouTube and the creators.

Video creators are typically not employees of YouTube or Google. Many are independents who have built businesses by posting content, making money through ads, brand deals and merchandise.

The creator economy is a bright spot amid struggles in the entertainment industry. Last year, there were more than 490,000 jobs supported by YouTube’s creative ecosystem in the U.S., according to YouTube, citing data from Oxford Economics. YouTube has a greater share of U.S. TV viewership than Netflix and the combined channels of Walt Disney Co., according to Nielsen.

YouTube said it paid more than $70 billion to creators, artists and media companies from 2021 to 2023.

The company has encouraged creators and filmmakers to use Google’s AI tools to help with brainstorming and creating videos, which could make them faster and more efficient. Some creators said they use AI to help hash out concepts, cut down on production costs and showcase bold ideas.

YouTube is also developing tools to help identify and manage AI-generated content featuring creators’ likenesses. It has also updated its privacy policy so that people can request the removal of AI-generated content that simulates them on the platform, said company spokesman Jack Malon.

“YouTube only succeeds when creators do,” Malon said in a statement. “That partnership, which has delivered billions to the creator economy, is driven by continuous innovation—from the systems that power our recommendations to new AI tools. We’ve always used YouTube data to make these systems better, and we remain committed to building technology that expands opportunity, while leading the industry with safeguards against the misuse of AI.”

But already, creators say they are facing challenges from other people who are using AI to re-create their channels, cutting into their revenue and brand recognition.

“They’re training on things that we, the creators, are creating, but we’re not getting anything in return for the help that we are providing,” said Cory Williams, 44-year-old Oklahoma-based creator of Silly Crocodile, a popular animated character on YouTube.

In other cases, people are using AI to make deepfake versions of creators and falsely posing as them to message fans, said Vermillio’s Grace.

When people upload videos to YouTube, they agree to the company’s terms of service, which grant YouTube and its affiliates a royalty-free license to the content.

But many creators said they were not aware YouTube videos were used to train Veo until they read about it in media reports. Melissa Hunter, chief executive of Family Video Network, a consulting firm for family-focused creators, said tools like Veo didn’t exist when she signed YouTube’s terms of service years ago.

Back in 2012, Hunter’s son, then 8 years old, wanted to start a YouTube channel with her. Her son, now 22, opposes AI for environmental reasons, so Hunter made those videos private. But Hunter said Google can still see those videos, and she’s concerned they were used to train Veo without her permission.

“It’s frustrating, and I don’t like it, but I also feel totally helpless to do anything,” Hunter said.

Other social media platforms, such as TikTok and Instagram, also support content creators, but YouTubers say they have already built large audiences on Google’s platform and are reluctant to leave.

“Creators are in a tough spot where this is the best platform to make money … to build real loyal fans,” said Jake Tran, 27, who makes documentary YouTube videos on money, power, war and crime. “So are you going to give up just because Google is using it to train their AI?”

Last year, Tran’s YouTube business made around $1 million in revenue. Tran is also founder of the Scottsdale, Arizona-based skin-care business, Evil Goods, and together, his businesses employ 40 to 45 part-time and full-time workers.

Other AI companies, including Meta and OpenAI, have come under fire from copyright holders who accuse them of training AI models on their intellectual property. Disney and Universal Pictures sued the AI business Midjourney in June for copyright infringement.

Tech industry executives have said that they should be able to train AI models with content available online under the “fair use” doctrine, which allows for the limited reproduction of material without permission from the copyright holder.

Some think creators might have a case if they decided to take their issue to court.

“There’s room to argue that simply by agreeing to the terms of service, they have not granted a license to YouTube or Google for AI training purposes, so that might be something that could be argued in the lawsuit,” said Mark Lezama, a partner at law firm Knobbe Martens. “There’s room to argue on both sides.”

Eugene Lee, CEO of ChannelMeter, a data and payments company for the creator economy, said he believes the only way creators can win is by using AI, not by fighting against it.

“Creators should absolutely embrace it and embrace it early, and embrace it as part of their production process, script generators, thumbnail generators—all these things that will require human labor to do in a massive amount of time and resources and capital,” Lee said.

Nate O’Brien, a Philadelphia creator who oversees YouTube channels about finance, estimates that his revenue will be flat or decline slightly, in part because it’ll be more challenging to get noticed on YouTube.

“It’s just a numbers game there,” O’Brien said. “But I think generally a person making a video would still perform better or rank better than an AI video right now. In a few more years, it might change.”

To prepare for the growth of AI content, O’Brien has been experimenting with AI on one of his channels, asking his assistant to take a script based on an existing video from a different channel and use AI to voice it. While the views have not outpaced those of his human-made videos, the AI-generated videos cost less to produce. One garnered 5,000 views, the 27-year-old O’Brien said.

Some creators have opted to share their video libraries with outside AI companies in exchange for compensation. For example, Salt Lake City YouTube creator Aaron de Azevedo, who oversees 20 YouTube channels, said he shared 30 terabytes of video footage in a deal with an AI company for roughly $9,000.

“There’s a good chunk of change,” De Azevedo, 40, said. “It was good, paid for most of my wedding.”

© 2025 Los Angeles Times. Distributed by Tribune Content Agency, LLC.

Citation: Google is training its AI tools on YouTube videos: These creators aren’t happy (2025, August 29), retrieved 29 August 2025 from https://techxplore.com/news/2025-08-google-ai-tools-youtube-videos.html







Two Thinking Machines Lab Cofounders Are Leaving to Rejoin OpenAI


Thinking Machines cofounders Barret Zoph and Luke Metz are leaving the fledgling AI lab and rejoining OpenAI, the ChatGPT-maker announced on Thursday. OpenAI’s CEO of applications, Fidji Simo, shared the news in a memo to staff Thursday afternoon.

The news was first reported on X by technology reporter Kylie Robison, who wrote that Zoph was fired for “unethical conduct.”

A source close to Thinking Machines said that Zoph had shared confidential company information with competitors. WIRED was unable to verify this information with Zoph, who did not immediately respond to WIRED’s request for comment.

Zoph told Thinking Machines CEO Mira Murati on Monday that he was considering leaving, then was fired Thursday, according to the memo from Simo. Simo added that OpenAI does not share Murati’s concerns about Zoph.

The personnel shake-up is a major win for OpenAI, which recently lost its VP of research, Jerry Tworek.

Another Thinking Machines Lab staffer, Sam Schoenholz, is also rejoining OpenAI, the source said.

Zoph and Metz left OpenAI in late 2024 to start Thinking Machines with Murati, who had been the ChatGPT-maker’s chief technology officer.

This is a developing story. Please check back for updates.



Tech Workers Are Condemning ICE Even as Their CEOs Stay Quiet


Since Donald Trump returned to the White House last January, the biggest names in tech have mostly fallen in line with the new regime, attending dinners with officials, heaping praise upon the administration, presenting the president with lavish gifts, and pleading for Trump’s permission to sell their products to China. It’s been mostly business as usual for Silicon Valley over the past year, even as the administration ignored a wide range of constitutional norms and attempted to slap arbitrary fees on everything from chip exports to worker visas for high-skilled immigrants employed by tech firms.

But after an ICE agent shot and killed an unarmed US citizen, Renee Nicole Good, in broad daylight in Minneapolis last week, a number of tech leaders have begun publicly speaking out about the Trump administration’s tactics. They include prominent researchers at Google and Anthropic, who have denounced the killing as callous and immoral. The wealthiest and most powerful tech CEOs are still staying silent as ICE floods America’s streets, but some of the researchers and engineers working for them have chosen to break ranks.

More than 150 tech workers have so far signed a petition asking their companies’ CEOs to call the White House, demand that ICE leave US cities, and speak out publicly against the agency’s recent violence. Anne Diemer, a human resources consultant and former Stripe employee who organized the petition, says that workers at Meta, Google, Amazon, OpenAI, TikTok, Spotify, Salesforce, LinkedIn, and Rippling are among those who have signed. The group plans to make the list public once it reaches 200 signatories.

“I think so many tech folks have felt like they can’t speak up,” Diemer told WIRED. “I want tech leaders to call the country’s leaders and condemn ICE’s actions, but even if this helps people find their people and take a small part in fighting fascism, then that’s cool, too.”

Nikhil Thorat, an engineer at Anthropic, said in a lengthy post on X that Good’s killing had “stirred something” in him. “A mother was gunned down in the street by ICE, and the government doesn’t even have the decency to perform a scripted condolence,” he wrote. Thorat added that the moral foundation of modern society is “infected, and is festering,” and the country is living through a “cosplay” of Nazi Germany, a time when people also stayed silent out of fear.

Jonathan Frankle, chief AI scientist at Databricks, added a “+1” to Thorat’s post. Shrisha Radhakrishna, chief technology and chief product officer of real estate platform Opendoor, replied that what happened to Good is “not normal. It’s immoral. The speed at which the administration is moving to dehumanize a mother is terrifying.” Other users who identified themselves as employees at OpenAI and Anthropic also responded in support of Thorat.

Shortly after Good was shot, Jeff Dean, an early Google employee and University of Minnesota graduate who is now the chief scientist at Google DeepMind and Google Research, began re-sharing posts criticizing the Trump administration’s immigration tactics with his 400,000 X followers, including one post outlining circumstances in which deadly force isn’t justified for police officers interacting with moving vehicles.

He then weighed in himself. “This is completely not okay, and we can’t become numb to repeated instances of illegal and unconstitutional action by government agencies,” Dean wrote in an X post on January 10. “The recent days have been horrific.” He linked to a video of a teenager—identified as a US citizen—being violently arrested at a Target in Richfield, Minnesota.

In response to US Vice President JD Vance’s assertion on X that Good was trying to run over the ICE agent with her vehicle, Aaron Levie, the CEO of the cloud storage company Box, replied, “Why is he shooting after he’s fully out of harm’s way (2nd and 3rd shot)? Why doesn’t he just move away from the vehicle instead of standing in front of it?” He added a screenshot of a Justice Department webpage outlining best practices for law enforcement officers interacting with suspects in moving vehicles.





A Brain Mechanism Explains Why People Leave Certain Tasks for Later


How does procrastination arise? The reason you decide to postpone household chores and spend your time browsing social media could be explained by the workings of a brain circuit. Recent research has identified a neural connection responsible for delaying the start of activities associated with unpleasant experiences, even when these activities offer a clear reward.

The study, led by Ken-ichi Amemori, a neuroscientist at Kyoto University, aimed to analyze the brain mechanisms that reduce motivation to act when a task involves stress, punishment, or discomfort. To do this, the researchers designed an experiment with monkeys, a widely used model for understanding decision-making and motivation processes in the brain.

The scientists worked with two macaques that were trained to perform various decision-making tasks. In the first phase of the experiment, after a period of water restriction, the animals could activate one of two levers that released different amounts of liquid; one option offered a smaller reward and the other a larger one. This exercise allowed the researchers to evaluate how the value of the reward influences the willingness to perform an action.

In a later stage, the experimental design incorporated an unpleasant element. The monkeys were given the choice of drinking a moderate amount of water without negative consequences or drinking a larger amount on the condition of receiving a direct blast of air in the face. Although the reward was greater in the second option, it involved an uncomfortable experience.

As the researchers anticipated, the macaques’ motivation to complete the task and access the water decreased considerably when the aversive stimulus was introduced. This behavior allowed them to identify a brain circuit that acts as a brake on motivation in the face of anticipated adverse situations. In particular, the researchers observed the involvement of the connection between the ventral striatum and the ventral pallidum, two structures in the brain’s basal ganglia known for their role in regulating pleasure, motivation, and reward.

The neural analysis revealed that when the brain anticipates an unpleasant event or potential punishment, the ventral striatum is activated and sends an inhibitory signal to the ventral pallidum, which is normally responsible for driving the intention to perform an action. In other words, this communication reduces the impulse to act when the task is associated with a negative experience.

The Brain Connection Behind Procrastination

To investigate the specific role of this connection, as described in the study published in the journal Current Biology, the researchers used a chemogenetic technique in which a specialized drug temporarily disrupted communication between the two brain regions. With the connection disrupted, the monkeys regained the motivation to initiate tasks, even in the tests that involved the blast of air.

Notably, the drug produced no change in trials where the reward was not accompanied by punishment. This result suggests that the circuit between the ventral striatum and the ventral pallidum does not regulate motivation in a general way, but rather is specifically activated to suppress it when there is an expectation of discomfort. In this sense, apathy toward unpleasant tasks appears to develop gradually as communication between these two regions intensifies.

Beyond explaining why people tend to unconsciously resist starting household chores or uncomfortable obligations, the findings have relevant implications for understanding disorders such as depression or schizophrenia, in which patients often experience a significant loss of the drive to act.

However, Amemori emphasizes that this circuit serves an essential protective function. “Overworking is very dangerous. This circuit protects us from burnout,” he said in comments reported by Nature. Therefore, he cautions that any attempt to externally modify this neural mechanism must be approached with care, as further research is needed to avoid interfering with the brain’s natural protective processes.

This story originally appeared in WIRED en Español and has been translated from Spanish.


