Vodafone Greece automates deals for customers, saves 500 staff-days of work | Computer Weekly

Vodafone Greece claims to have saved 500 staff days a year in manual work after rolling out technology to automate previously labour-intensive marketing campaigns as it seeks to win new customers and keep its existing ones.

The telecoms company has rolled out software that allows it to send tailored deals and offers to customers based on their history and interactions with the company on its website, mobile app, in stores and in call centres.

The project has helped Vodafone Greece become one of Vodafone’s top three performing markets, decision strategy chapter lead Georgios Papadas told Computer Weekly.

Since it was founded in 1992, Vodafone Greece has acquired more than three million customers and has grown its revenues to more than €1bn. It is part of the wider Vodafone group, which operates in 21 countries, boasts 12,000 shops, and more than 350 million customers worldwide.

Vodafone as an organisation has been rolling out software supplied by Massachusetts-based Pegasystems across its business operations for some years, as part of a programme to ensure it can give its customers the same personalised offers on phones, broadband and other services, no matter how they choose to interact with the company.

Vodafone Greece began its own programme to deploy Pega’s platform in 2020. Vodafone’s Greek operation previously relied on its marketing staff to devise and run multiple marketing campaigns every month.

The campaigns covered multiple product lines including pre-paid mobile phones, fixed contract phones, broadband and television packages. However, they were often overlapping, and did not always target the right business priorities.


Vodafone Greece was also grappling with an untidy estate of sometimes inappropriate technology. It had grown through acquisitions of smaller companies over the years, leaving it with multiple databases and out-of-date software.

The company relied on IBM’s SPSS Modeler, a tool designed for data mining and analytics, to model its monthly campaigns, a task it was not built for. It was a complex piece of software, and few people in the organisation had the knowledge to make changes to campaigns once they had been created in the tool, says Papadas.

Each campaign took three to five working days a month to develop, test and execute, and with up to 10 campaigns running a month, it was also resource intensive. If a campaign manager was off work sick or left the company, there was no backup plan. And because each campaign operated independently, monitoring and reporting on the results of campaigns was difficult.

Learning from Accenture

Vodafone Greece set its sights on building an “omni-channel” platform that would ensure it could communicate with customers in a consistent way, no matter which communication channel they chose.

The company hired Accenture as an implementation partner to roll out Pega’s Customer Decision Hub software, while Vodafone’s own staff learned how to use the technology. Over time, Vodafone was able to take on more work in-house while reducing its dependence on Accenture. That led to significant savings in fees.

In 2020, Vodafone started a project to bring mobile phone product campaigns into Pega and began migrating data over from existing databases into the Pega platform. Its staff spent much of the first two years learning how to use the software.

By 2022, Vodafone felt confident enough to set up a dedicated Pega team to build databases and application programming interfaces (APIs). The team also set about linking Vodafone’s CRM system, its customer service chatbot, and Viber, a messaging service widely used in Greece, into Pega. Vodafone’s internal project team started with six or seven people and at its peak reached 40 or 50.

Six months later the company went live with its first campaign developed entirely by Vodafone staff, scoring an early success with a conversion rate into sales that was two or three times greater than previous campaigns.

Vodafone now manages all of its Pega work internally. “That gives us the agility, ownership and confidence” to focus on the needs of the business, says Papadas. “We have really decreased our time to market compared to how it was done by [Accenture],” he adds. “I think that has been one of the biggest successes.”

Finding the right skills

Papadas says that one of the biggest challenges of the project was finding people with the right technical skills. IT professionals with skills in Pega, a specialist technology, are difficult to find, so Papadas opted to hire people and train them from scratch.

He told recruits they had three learning curves: learning how to use Pega; learning the telecommunications industry; and learning Vodafone Greece – its people, technology and datasets.

“I describe Pega in our operation as a car,” says Papadas. “It needs to keep moving while we tweak the engine. The question is finding the right people and training them fast enough.”

The company set up a “buddy system” so that every new recruit had an experienced person to guide them. That was combined with video training and, for more technical issues, written documentation – but with a focus on real-world tasks.

The end of spam

The project means that Vodafone’s customers receive the same support no matter how they approach Vodafone, says Papadas.

In the past, a customer could receive an offer for a product from Vodafone by phone, visit a shop and receive a slightly different quote for the same product, and then be quoted a different – potentially higher – price on Vodafone’s app.

The technology also ensures that customers do not receive large numbers of spam messages. Pega has enabled Vodafone to set rules so that if a customer has received, say, a message today, or three messages in the past week, they will not be prompted with further marketing messages.
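The contact-frequency rules Papadas describes amount to a simple suppression check before each send. The sketch below is purely illustrative, not Pega’s actual rule engine; the function name and the daily and weekly caps are assumptions based on the example in the article.

```python
from datetime import date, timedelta

def is_contactable(contact_dates, today, daily_cap=1, weekly_cap=3):
    """Return True if a customer may receive another marketing message.

    contact_dates: dates on which the customer was already messaged.
    Suppresses anyone already messaged today, or messaged weekly_cap
    or more times in the past seven days.
    """
    sent_today = sum(1 for d in contact_dates if d == today)
    week_start = today - timedelta(days=7)
    sent_this_week = sum(1 for d in contact_dates if week_start < d <= today)
    return sent_today < daily_cap and sent_this_week < weekly_cap

# A customer messaged three times in the past week is suppressed;
# one messaged only once is still contactable.
today = date(2024, 5, 10)
history = [today - timedelta(days=1),
           today - timedelta(days=3),
           today - timedelta(days=5)]
print(is_contactable(history, today))       # False
print(is_contactable(history[:1], today))   # True
```

In a production decisioning system this check would run against a shared contact-history store across every channel, which is exactly why a single customer database matters.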

Vodafone is also able to send better targeted messages, says Papadas. For example, if the data shows there are people who never open the Vodafone app, they will be taken out of the campaign. “The success rate will be very much better,” he says. “It’s maths, not rocket science.”

What is the next best action?

The software is able to recommend the “next best action” that Vodafone call centre staff or shop assistants can take to encourage a particular customer to stay loyal or buy extra products based on that customer’s history and real-time interactions.
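In outline, a next-best-action engine scores every action a given customer is eligible for and recommends the top-scoring one. A minimal sketch, assuming each candidate offer already carries a propensity score and a margin (the offer names and numbers here are invented, not Vodafone’s):

```python
def next_best_action(candidate_offers):
    """Pick the action with the highest expected value.

    candidate_offers: list of (name, acceptance_probability, margin)
    tuples. Expected value = probability the customer accepts
    multiplied by the margin earned if they do.
    """
    return max(candidate_offers, key=lambda o: o[1] * o[2])[0]

offers = [
    ("tv_upgrade", 0.05, 12.0),       # unlikely accept, high margin
    ("data_addon", 0.30, 3.0),        # likely accept, modest margin
    ("loyalty_discount", 0.60, 0.5),  # near-certain accept, tiny margin
]
print(next_best_action(offers))  # data_addon
```

Ranking by expected value rather than raw acceptance probability is what lets the system balance customer appeal against business priorities.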

That tailored approach has allowed Vodafone to move away from “carpet bombing” customers with one or two standard offers in the hope they appeal to enough people.

The company is able to send real-time offers to customers on the Vodafone app. “It is the right message at the right time, and then the customer is more likely to say, ‘Okay, I will accept that’,” says Papadas.

The software, which is used by 1,000 Vodafone staff each day, also warns agents if they are at risk of going over budget by offering customers too many generous deals.

Vodafone’s deployment means that for the first time, it has a record in one database of the behaviour of its customers, which means the company can look at its transactions with each customer, across every channel, and see what messages have been sent to the customer and how they responded.

Real-time offers

Vodafone already has the ability to monitor when a customer looks at renewing their mobile phone contract, or browses deals for, say, mobile phones or TV packages; this information is fed into Pega in real time.

The next step is to offer customers real-time offers and notifications. For example, a customer looking at broadband TV packages on Vodafone’s website could receive a text or email offer to have the Disney channel included for less than the cost of buying the two packages separately.

“If you are looking at the retail price, we can come back to you with an offer which is better most of the time,” says Papadas.

“It’s about the right timing, relevance and contacting you at the right time. The offer arrives at your app, and you can activate it there and then.”

Vodafone also plans to deploy Adobe Analytics. It’s a powerful tool, says Papadas, because rather than messaging large volumes of customers with general offers, customers will receive targeted offers triggered by their activity on Vodafone’s website or app.

The company also has plans to harness the artificial intelligence capabilities in Pega to help it refine marketing campaigns.

Pega’s “adaptive” technology is able to “read” the behaviour of customers and “score the probability of the customer accepting the offer”.

The system gradually learns how to make small improvements to campaigns when it has enough data.
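One generic way such a system can “learn” an acceptance probability as outcomes accumulate is a running Beta-Bernoulli estimate. This is an illustration of the general idea only, not a description of Pega’s adaptive models:

```python
class OfferStats:
    """Track accept/decline outcomes and estimate acceptance probability.

    Uses a Beta(1, 1) prior, so the estimate starts at 0.5 and
    converges to the observed acceptance rate as outcomes accumulate.
    """
    def __init__(self):
        self.accepts = 0
        self.declines = 0

    def record(self, accepted):
        if accepted:
            self.accepts += 1
        else:
            self.declines += 1

    @property
    def acceptance_probability(self):
        return (self.accepts + 1) / (self.accepts + self.declines + 2)

stats = OfferStats()
for outcome in [True, False, False, True, False]:
    stats.record(outcome)
print(round(stats.acceptance_probability, 3))  # 0.429
```

The prior keeps early estimates sensible when only a handful of outcomes exist, which is why such models need “enough data” before their adjustments are trustworthy.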

For example, Vodafone found that in one campaign, by changing its marketing strategy, it was able to make an average of 70 cents more on each sale. With sales of this particular product line running to 10,000 a month, that small improvement adds up to roughly €7,000 a month.

Vodafone Greece also has plans to move its Pega operations from Google’s cloud platform to Pega’s cloud service, Pega Infinity, providing Vodafone with better support from Pega.

Learning from suppliers

One thing Papadas says he would do differently is have a stricter agreement with the implementation partner, Accenture, making it clearer that Accenture’s role included training Vodafone staff as the project rolled out.

“If I was about to turn the time back, Vodafone teams would be fully included in the delivery plan, so it would be more or less a joint delivery,” he says.

Papadas says he would advise other IT professionals carrying out similar projects with a supplier to make sure they learn from the supplier as quickly as possible.

“It’s very good to have a vendor because their expertise is essential, but make sure you learn as fast as you can from the vendor, both technically as well as [learning] the whole ecosystem to take it in-house, because then you have the power in your hands,” he says.

There is an inevitable conflict of interest with using suppliers to train in-house employees in the services they offer, says Papadas. “It’s difficult for vendors to bring the knowledge in-house because they want to sell to you; they want you to rely on them,” he adds. “I don’t blame them. I know how this works.”

How to get buy-in

Papadas says it’s important to bring together everyone who will be affected by the project, including business experts, technology experts and the people who will use Pega’s Customer Decision Hub to run campaigns.

“If you don’t have the people right from the beginning involved in designing, giving their input and saying what works for them, what doesn’t work for them, then it’s more difficult to get people on board,” he says.

It’s also important to have at least one person at the executive level to act as an enabler for the project, to keep the project team accountable for meeting deadlines and budgets, and most importantly to act as an “unblocker” when the project runs into hurdles.

In Papadas’s case, an executive responsible for commercial growth and Vodafone’s IT director acted as the high-level sponsors.

“Those two people were in every single review, in every point meeting to assist us or put us under the spotlight if we or the supplier were delayed,” he says. “Those two people were critical. Without them it would be more difficult to deliver.”

Vodafone held weekly reviews with the leadership team to review the project plan, what had been delivered, what had not been delivered, and what the challenges and obstacles were.

“We engaged all sides to ensure not only are they kept up to date on where their money, effort and people are, but also to assist us if there were problems,” says Papadas. 




FEMA’s Chaotic Summer Has Gone From Bad to Worse

FEMA did not respond to WIRED’s request for comment.

“It is not surprising that some of the same bureaucrats who presided over decades of inefficiency are now objecting to reform,” the agency told the Guardian, which reported on the retaliation against the employees who signed the letter. “Change is always hard. It is especially for those invested in the status quo, who have forgotten that their duty is to the American people not entrenched bureaucracy.”

The targeting of letter signers at FEMA echoes an earlier move at the Environmental Protection Agency (EPA) in July, when that agency suspended about 140 employees who signed onto a similar public letter.

A FEMA employee who signed this week’s letter expressed concern to WIRED that the agency may try to seek out those who did not include their names on the letter—especially given how DHS reportedly administered polygraphs in April attempting to identify employees who leaked to the press. “I’m concerned they may use similar tactics to identify anonymous signers,” they say. This employee spoke to WIRED on the condition of anonymity as they were not authorized to speak to the press.

On Tuesday morning, a day after the employees’ letter was published, former FEMA acting administrator Cameron Hamilton posted a criticism of the agency publicly on LinkedIn.

“Stating that @fema is operating more efficiently, and cutting red tape is either: uninformed about managing disasters; misled by public officials; or lying to the American the public [sic] to prop up talking points,” he wrote. “President Trump and the American people deserve better than this…FEMA is saving money which is good due to the astronomical U.S. Debt from Congress. Despite this, FEMA staff are responding to entirely new forms of bureaucracy now that is lengthening wait times for claim recipients, and delaying the deployment of time sensitive resources.”

Hamilton, who was fired from his position a day after testifying in defense of the agency to Congress in May, did not respond to WIRED’s questions about whether or not his post was related to the employees’ open letter.

Both Hamilton’s post and the open letter call out a new rule, instituted in June, mandating that any spending over $100,000 needs to be personally vetted by DHS Secretary Kristi Noem. That cap, FEMA employees allege in Monday’s letter, “reduces FEMA’s authorities and capabilities to swiftly deliver our mission.” The policy came under fire in July after various outlets reported that it had caused a delay in the agency’s response following the flooding in Texas that killed at least 135 people. The agency’s chief of urban search and rescue operations resigned in late July due in part to frustrations with how the DHS spending approval process delayed aid during the disaster, CNN reported.

Screenshots of contract data seen by WIRED show that as of August 7, the agency still had more than $700 million left to allocate in non-disaster spending before the end of the fiscal year on September 30, with more than 1,000 open contract actions. The agency seems to be feeling the pressure to speed up contract proposals. In early August, several FEMA staff were asked to volunteer to work over a weekend to help review contracts to prepare them for Noem’s signoff, according to emails reviewed by WIRED. (“Lots of work over the weekend,” read the notes from one meeting.)

“Disaster money is just sitting,” one FEMA employee tells WIRED. “Every single day applicants are asking their FEMA contact ‘where’s my money?’ And we are ordered to just say nothing and redirect.”

As the employees’ open letter states, roughly a third of FEMA’s full-time staff had already departed by May, “leading to the loss of irreplaceable institutional knowledge and long-built relationships.” These staff departures may further hamper efforts from the agency to implement financial efficiency measures like the contract reviews. A former FEMA employee tells WIRED that while the agency began the year with nine lawyers on the procurement team that helps review financial contracts during a disaster, almost the entire team has either left or been reassigned, leaving a dearth of experience just as hurricane season ramps up.

“I have no idea what happens,” the former employee tells WIRED, when a hurricane hits “and we need a contract attorney on shift 24/7.”




ICO publishes summary of police facial recognition audit | Computer Weekly



The Information Commissioner’s Office (ICO) has completed its first-ever data protection audit of UK police forces deploying facial recognition technologies (FRT), noting it is “encouraged” by its findings.

The ICO’s audit, which investigated how South Wales Police and Gwent Police are using and protecting people’s personal information when deploying facial recognition, marks the first time the data regulator has formally audited a UK police force for its use of the technology.

According to an executive summary published on 20 August, the scope of the facial recognition audit – which was agreed with the two police forces beforehand – focused on questions of necessity and proportionality (a key legal test for the deployment of new technologies), whether its design meets expectations around fairness and accuracy, and whether “the end-to-end process” is compliant with the UK’s data protection rules.

“We are encouraged by the findings, which provide a high level of assurance that the processes and procedures currently in place at South Wales Police and Gwent Police are compliant with data protection law,” said the deputy commissioner for regulatory policy, Emily Keaney, in a blog post.

“The forces made sure there was human oversight from trained staff to mitigate the risk of discrimination and ensure no decisions are solely automated, and a formal application process to assess the necessity and proportionality before each LFR deployment,” she wrote.

The executive summary added that South Wales Police and Gwent Police have “comprehensively mapped” their data flows, can “demonstrate the lawful provenance” of the images used to generate biometric templates, and have appropriate data protection impact assessments (DPIAs) in place.

It further added that the data collected “is adequate, relevant and limited to what is necessary for its purpose”, and that individuals are informed about its use “in a clear and accessible manner”.

However, Keaney was clear that the audit only “serves as a snapshot in time” of how the technology is being used by the two police forces in question. “It does not give the green light to all police forces, but those wishing to deploy FRT can learn from the areas of assurance and areas for improvement revealed by the audit summary,” she said.

Commenting on the audit, chief superintendent Tim Morgan of the joint South Wales and Gwent digital services department, said: “The level of oversight and independent scrutiny of facial recognition technology means that we are now in a stronger position than ever before to be able to demonstrate to the communities of South Wales and Gwent that our use of the technology is fair, legitimate, ethical and proportionate.

“We welcome the work of the Information Commissioner’s Office audit, which provides us with independent assurance of the extent to which both forces are complying with data protection legislation.”

He added: “It is important to remember that use of this has never resulted in a wrongful arrest in South Wales and there have been no false alerts for several years as the technology and our understanding has evolved.”

Lack of detail

While the ICO provided a number of recommendations to the police forces, it did not provide any specifics in the executive summary beyond the priority level of the recommendation and whether it applied to the forces’ use of live or retrospective facial recognition (LFR or RFR).

For LFR, it made four “medium” and one “low” priority recommendations, while for RFR, it made six “medium” and four “low” priority recommendations. For both LFR and RFR, it listed one “high” priority recommendation.

Computer Weekly contacted the ICO for more information about the recommendations, but received no response on this point.

Although the summary lists some “key areas for improvement” around data retention policies and the need to periodically review various internal procedures, key questions about the deployments are left unanswered by the ICO’s published material on the audit.

For example, before they can deploy any facial recognition technology, UK police forces must ensure their deployments are “authorised by law”, that the consequent interference with rights – such as the right to privacy – is undertaken for a legally “recognised” or “legitimate” aim, and that this interference is both necessary and proportionate. This must be assessed for each individual deployment of the tech.

However, beyond noting that processes are in place, no detail was provided by the ICO on how the police forces are assessing the necessity and proportionality of their deployments, or how these are assessed in the context of watchlist creation.

Although more detail on proportionality and necessity considerations is provided in South Wales Police’s LFR DPIA, it is unclear if any of the ICO’s recommendations concern this process.  

While police forces using facial recognition have long maintained that their deployments are intelligence-led and focus exclusively on locating individuals wanted for serious crimes, senior officers from the Metropolitan Police and South Wales Police previously admitted to a Lords committee in December 2023 that both forces select images for their watchlists based on crime categories attached to people’s photos, rather than a context-specific assessment of the threat presented by a given individual.

Computer Weekly asked the ICO whether it is able to confirm if this is still the process for selecting watchlist images at South Wales Police, as well as details on how well police are assessing the proportionality and necessity of their deployments generally, but received no response on these points.

While the ICO summary claims the forces are able to demonstrate the “lawful provenance” of watchlist images, the regulator similarly did not respond to Computer Weekly’s questions about what processes are in place to ensure that the millions of unlawfully held custody images in the Police National Database (PND) are not included in facial recognition watchlists.

Computer Weekly also asked why the ICO is only beginning to audit police facial recognition use now, given that it was first deployed by the Met in August 2016 and has been controversial since its inception.

“The ICO has played an active role in the regulation of FRT since its first use by the Met and South Wales Police around 10 years ago. We investigated the use of FRT by the Met and South Wales and Gwent police and produced an accompanying opinion in 2021. We intervened in the Bridges case on the side of the claimant. We have produced follow-up guidance on our expectations of police forces,” said an ICO spokesperson.

“We are stepping up our supervision of AI [artificial intelligence] and biometric technologies – our new strategy includes a specific focus on the use of FRT by police forces. We are conducting an FRT in Policing project under our AI and biometrics strategy. Audits form a core part of this project, which aims to create clear regulatory expectations and scalable good practice that will influence the wider AI and biometrics landscape.

“Our recommendations in a given audit are context-specific, but any findings that have applicability to other police forces will be included in our Outcomes Report due in spring 2026, once we have completed the rest of the audits in this series.”

EHRC joins judicial review

In mid-August 2025, the Equality and Human Rights Commission (EHRC) was granted permission to intervene in an upcoming judicial review of the Met Police’s use of LFR technology, which it claims is being deployed unlawfully.

“The law is clear: everyone has the right to privacy, to freedom of expression and to freedom of assembly. These rights are vital for any democratic society,” said EHRC chief executive John Kirkpatrick.

“As such, there must be clear rules which guarantee that live facial recognition technology is used only where necessary, proportionate and constrained by appropriate safeguards. We believe that the Metropolitan Police’s current policy falls short of this standard.”

He added: “The Met, and other forces using this technology, need to ensure they deploy it in ways which are consistent with the law and with human rights.”

Writing in a blog about the EHRC joining the judicial review, Chris Pounder, director of data protection training firm Amberhawk, said that, in his view, the statement from Kirkpatrick is “precisely the kind of statement that should have been made by” information commissioner John Edwards.

“In addition, the ICO has stressed the need for FRT deployment ‘with appropriate safeguards in place’. If he [Edwards] joined the judicial review process as an interested party, he could get judicial approval for these much vaunted safeguards (which nobody has seen),” he wrote.

“Instead, the ICO sits on the fence whilst others determine whether or not current FRT processing by the Met Police is ‘strictly necessary’ for its law enforcement functions. The home secretary, for her part, has promised a code of practice which will contain an inevitable bias in favour of the deployment of FRT.”

In an appearance before the Lords Justice and Home Affairs Committee on 8 July, home secretary Yvette Cooper confirmed the government is actively working with police forces and unspecified “stakeholders” to draw up a new governance framework for police facial recognition.

However, she did not comment on whether any new framework would be placed on a statutory footing.




Google is training its AI tools on YouTube videos: These creators aren’t happy




Santa Ana, California-based entrepreneur Charlie Chang spent years posting finance videos on YouTube before he made a profit.

Today, Chang’s media business oversees more than 50 YouTube channels, along with other digital sites, and generates $3 million to $4 million in annual revenue, he said.

But lately, he’s been faced with a new concern: that YouTube’s moves in artificial intelligence will eat into his business.

“The fear is there, and I’m still building the channels, but I am preparing, just in case my channels become irrelevant,” Chang, 33, said. “I don’t know if I’m gonna be building YouTube channels forever.”

YouTube’s parent company, Google, is using a subset of the platform’s videos to train AI applications, including its text-to-video tool Veo. That includes videos made by users who have built their livelihoods on the service, helping turn it into the biggest streaming entertainment provider in the U.S.

The move has sparked deep tensions between the world’s biggest online video company and some of the creators who helped make it a behemoth. Google, creators say, is using their data to train something that could become their biggest competitor.

The schism comes at a pivotal time for Google, which is in a race with rivals including Meta, OpenAI and Runway for dominance in the market for AI-driven video programs. Google has an advantage due to YouTube’s huge video library, with more than 20 billion videos uploaded to its platform as of April.

Many creators worry such tools could make it easier for other people to replicate the style of their videos, by typing in text prompts that could produce images or concepts similar to what popular creators produce. What if AI-generated videos became more popular than their material? Creators say they can’t opt out of AI training and that Google does not compensate them for using videos for such purposes.

“It makes me sad, because I was a big part of this whole creator economy, and now, it’s literally being dismantled by the company that built it,” said Kathleen Grace, a former YouTube employee who is now chief strategy officer at Vermillio, a Chicago-based company that tracks people’s digital likenesses and intellectual property.

“I think they should be with pitchforks outside San Bruno.”

YouTube, founded in 2005, was built on creators posting content. At first, the user-generated videos were amateurish. But eventually, creators got more sophisticated and professional, doing more elaborate stunts and hiring staff to support their productions.

Key to YouTube’s early success was its investment in its video creators. The San Bruno, California-based company shares ad revenue with its creators, which can be huge. That business model has kept creators loyal to YouTube. As they grew their audiences, that in turn increased advertising revenue for both YouTube and creators.

Video creators are typically not employees of YouTube or Google. Many are independents who have built businesses by posting content, making money through ads, brand deals and merchandise.

The creator economy is a bright spot amid struggles in the entertainment industry. Last year, there were more than 490,000 jobs supported by YouTube’s creative ecosystem in the U.S., according to YouTube, citing data from Oxford Economics. YouTube has a greater share of U.S. TV viewership than Netflix and the combined channels of Walt Disney Co., according to Nielsen.

YouTube said it has paid more than $70 billion to creators, artists and media companies from 2021 to 2023.

The company has encouraged creators and filmmakers to use Google’s AI tools to help with brainstorming and creating videos, which could make them faster and more efficient. Some creators said they use AI to help hash out concepts, cut down on production costs and showcase bold ideas.

YouTube is also developing tools that will help identify and manage AI-generated content featuring creators’ likeness. Additionally, it made changes to its privacy policy for people to request removal of AI-generated content that simulates them on the platform, said company spokesman Jack Malon.

“YouTube only succeeds when creators do,” Malon said in a statement. “That partnership, which has delivered billions to the creator economy, is driven by continuous innovation—from the systems that power our recommendations to new AI tools. We’ve always used YouTube data to make these systems better, and we remain committed to building technology that expands opportunity, while leading the industry with safeguards against the misuse of AI.”

But already, creators say they are facing challenges from other people who are using AI to re-create their channels, cutting into their revenue and brand recognition.

“They’re training on things that we, the creators, are creating, but we’re not getting anything in return for the help that we are providing,” said Cory Williams, 44-year-old Oklahoma-based creator of Silly Crocodile, a popular animated character on YouTube.

In other cases, people are using AI to make deepfake versions of creators and falsely posing as them to message fans, said Vermillio’s Grace.

When people upload videos to YouTube, they agree to the company’s terms of service, which grants a royalty-free license to YouTube’s business and its affiliates.

But many creators said they were not aware YouTube videos were used to train Veo until they read about it in media reports. Melissa Hunter, chief executive of Family Video Network, a consulting firm for family-focused creators, said tools like Veo didn’t exist when she signed YouTube’s terms of service years ago.

Back in 2012, Hunter's then-8-year-old son wanted to start a YouTube channel with her. Her son, now 22, opposes AI for environmental reasons, so Hunter made those videos private. But Hunter said Google can still see those videos, and she's concerned they were used to train Veo without her permission.

“It’s frustrating, and I don’t like it, but I also feel totally helpless to do anything,” Hunter said.

While there are other social media platforms such as TikTok and Instagram that also support content creators, YouTubers say they have already built large audiences on Google’s platform and are reluctant to leave.

“Creators are in a tough spot where this is the best platform to make money … to build real loyal fans,” said Jake Tran, 27, who makes documentary YouTube videos on money, power, war and crime. “So are you going to give up just because Google is using it to train their AI?”

Last year, Tran’s YouTube business made around $1 million in revenue. Tran is also founder of the Scottsdale, Arizona-based skin-care business, Evil Goods, and together, his businesses employ 40 to 45 part-time and full-time workers.

Other AI companies, including Meta and OpenAI, have come under fire from copyright holders who have accused them of training AI models on their intellectual property. Disney and Universal Pictures sued AI business Midjourney in June for copyright infringement.

Tech industry executives have said that they should be able to train AI models with content available online under the “fair use” doctrine, which allows for the limited reproduction of material without permission from the copyright holder.

Some legal experts think creators might have a case if they decided to take the issue to court.

“There’s room to argue that simply by agreeing to the terms of service, they have not granted a license to YouTube or Google for AI training purposes, so that might be something that could be argued in the lawsuit,” said Mark Lezama, a partner at law firm Knobbe Martens. “There’s room to argue on both sides.”

Eugene Lee, CEO of ChannelMeter, a data and payments company for the creator economy, said he believes the only way creators can win is by using AI, not by fighting against it.

“Creators should absolutely embrace it and embrace it early, and embrace it as part of their production process, script generators, thumbnail generators—all these things that will require human labor to do in a massive amount of time and resources and capital,” Lee said.

Nate O’Brien, a Philadelphia creator who oversees YouTube channels about finance, estimates that his revenue will be flat or decline slightly, in part because it’ll be more challenging to get noticed on YouTube.

“It’s just a numbers game there,” O’Brien said. “But I think generally a person making a video would still perform better or rank better than an AI video right now. In a few more years, it might change.”

To prepare for the growth of AI content, O’Brien has been experimenting with using AI for videos on one of his channels, asking his assistant to take a script based on an existing video he made on a different channel and using AI to voice it. While the views have not outpaced the human-created videos, the AI-generated videos are lower in production cost. One garnered 5,000 views, 27-year-old O’Brien said.

Some creators have opted to share their video libraries with outside AI companies in exchange for compensation. For example, Salt Lake City YouTube creator Aaron de Azevedo, who oversees 20 YouTube channels, said he shared 30 terabytes of video footage in a deal with an AI company for roughly $9,000.

“There’s a good chunk of change,” De Azevedo, 40, said. “It was good, paid for most of my wedding.”

© 2025 Los Angeles Times. Distributed by Tribune Content Agency, LLC.

Citation:
Google is training its AI tools on YouTube videos: These creators aren’t happy (2025, August 29)
retrieved 29 August 2025
from https://techxplore.com/news/2025-08-google-ai-tools-youtube-videos.html
