FEMA’s Chaotic Summer Has Gone From Bad to Worse


FEMA did not respond to WIRED’s request for comment.

“It is not surprising that some of the same bureaucrats who presided over decades of inefficiency are now objecting to reform,” the agency told the Guardian, which reported on the retaliation against the employees who signed the letter. “Change is always hard. It is especially for those invested in the status quo, who have forgotten that their duty is to the American people not entrenched bureaucracy.”

The targeting of letter signers at FEMA echoes an earlier move at the Environmental Protection Agency (EPA) in July, when that agency suspended about 140 employees who signed onto a similar public letter.

A FEMA employee who signed this week’s letter expressed concern to WIRED that the agency may try to seek out those who did not include their names on the letter—especially given how DHS reportedly administered polygraphs in April attempting to identify employees who leaked to the press. “I’m concerned they may use similar tactics to identify anonymous signers,” they say. This employee spoke to WIRED on the condition of anonymity as they were not authorized to speak to the press.

On Tuesday morning, a day after the employees’ letter was published, former FEMA acting administrator Cameron Hamilton posted a criticism of the agency publicly on LinkedIn.

“Stating that @fema is operating more efficiently, and cutting red tape is either: uninformed about managing disasters; misled by public officials; or lying to the American the public [sic] to prop up talking points,” he wrote. “President Trump and the American people deserve better than this…FEMA is saving money which is good due to the astronomical U.S. Debt from Congress. Despite this, FEMA staff are responding to entirely new forms of bureaucracy now that is lengthening wait times for claim recipients, and delaying the deployment of time sensitive resources.”

Hamilton, who was fired from his position a day after testifying in defense of the agency to Congress in May, did not respond to WIRED’s questions about whether his post was related to the employees’ open letter.

Both Hamilton’s post and the open letter call out a new rule, instituted in June, mandating that any spending over $100,000 needs to be personally vetted by DHS Secretary Kristi Noem. That cap, FEMA employees allege in Monday’s letter, “reduces FEMA’s authorities and capabilities to swiftly deliver our mission.” The policy came under fire in July after various outlets reported that it had caused a delay in the agency’s response following the flooding in Texas that killed at least 135 people. The agency’s chief of urban search and rescue operations resigned in late July due in part to frustrations with how the DHS spending approval process delayed aid during the disaster, CNN reported.

Screenshots of contract data seen by WIRED show that as of August 7, the agency still had more than $700 million left to allocate in non-disaster spending before the end of the fiscal year on September 30, with more than 1,000 open contract actions. The agency seems to be feeling the pressure to speed up contract proposals. In early August, several FEMA staff were asked to volunteer to work over a weekend to help review contracts to prepare them for Noem’s signoff, according to emails reviewed by WIRED. (“Lots of work over the weekend,” read the notes from one meeting.)

“Disaster money is just sitting,” one FEMA employee tells WIRED. “Every single day applicants are asking their FEMA contact ‘where’s my money?’ And we are ordered to just say nothing and redirect.”

As the employees’ open letter states, roughly a third of FEMA’s full-time staff had already departed by May, “leading to the loss of irreplaceable institutional knowledge and long-built relationships.” These staff departures may further hamper efforts from the agency to implement financial efficiency measures like the contract reviews. A former FEMA employee tells WIRED that while the agency began the year with nine lawyers on the procurement team that helps review financial contracts during a disaster, almost the entire team has either left or been reassigned, leaving a dearth of experience just as hurricane season ramps up.

“I have no idea what happens,” the former employee tells WIRED, when a hurricane hits “and we need a contract attorney on shift 24/7.”




ICO publishes summary of police facial recognition audit | Computer Weekly


The Information Commissioner’s Office (ICO) has completed its first-ever data protection audit of UK police forces deploying facial recognition technologies (FRT), noting it is “encouraged” by its findings.

The ICO’s audit, which investigated how South Wales Police and Gwent Police are using and protecting people’s personal information when deploying facial recognition, marks the first time the data regulator has formally audited a UK police force for its use of the technology.

According to an executive summary published on 20 August, the scope of the facial recognition audit – which was agreed with the two police forces beforehand – focused on questions of necessity and proportionality (a key legal test for the deployment of new technologies), whether its design meets expectations around fairness and accuracy, and whether “the end-to-end process” is compliant with the UK’s data protection rules.

“We are encouraged by the findings, which provide a high level of assurance that the processes and procedures currently in place at South Wales Police and Gwent Police are compliant with data protection law,” said the deputy commissioner for regulatory policy, Emily Keaney, in a blog post.

“The forces made sure there was human oversight from trained staff to mitigate the risk of discrimination and ensure no decisions are solely automated, and a formal application process to assess the necessity and proportionality before each LFR deployment,” she wrote.

The executive summary added that South Wales Police and Gwent Police have “comprehensively mapped” their data flows, can “demonstrate the lawful provenance” of the images used to generate biometric templates, and have appropriate data protection impact assessments (DPIAs) in place.

It further added that the data collected “is adequate, relevant and limited to what is necessary for its purpose”, and that individuals are informed about its use “in a clear and accessible manner”.

However, Keaney was clear that the audit only “serves as a snapshot in time” of how the technology is being used by the two police forces in question. “It does not give the green light to all police forces, but those wishing to deploy FRT can learn from the areas of assurance and areas for improvement revealed by the audit summary,” she said.

Commenting on the audit, chief superintendent Tim Morgan of the joint South Wales and Gwent digital services department, said: “The level of oversight and independent scrutiny of facial recognition technology means that we are now in a stronger position than ever before to be able to demonstrate to the communities of South Wales and Gwent that our use of the technology is fair, legitimate, ethical and proportionate.

“We welcome the work of the Information Commissioner’s Office audit, which provides us with independent assurance of the extent to which both forces are complying with data protection legislation.”

He added: “It is important to remember that use of this has never resulted in a wrongful arrest in South Wales and there have been no false alerts for several years as the technology and our understanding has evolved.”

Lack of detail

While the ICO provided a number of recommendations to the police forces, it did not provide any specifics in the executive summary beyond the priority level of the recommendation and whether it applied to the forces’ use of live or retrospective facial recognition (LFR or RFR).

For LFR, the ICO said it made four “medium” and one “low” priority recommendations, while for RFR, it made six “medium” and four “low” priority recommendations. For each technology, it also listed one “high” priority recommendation.

Computer Weekly contacted the ICO for more information about the recommendations, but received no response on this point.

Although the summary lists some “key areas for improvement” around data retention policies and the need to periodically review various internal procedures, key questions about the deployments are left unanswered by the ICO’s published material on the audit.

For example, before they can deploy any facial recognition technology, UK police forces must ensure their deployments are “authorised by law”, that the consequent interference with rights – such as the right to privacy – is undertaken for a legally “recognised” or “legitimate” aim, and that this interference is both necessary and proportionate. This must be assessed for each individual deployment of the tech.

However, beyond noting that processes are in place, no detail was provided by the ICO on how the police forces are assessing the necessity and proportionality of their deployments, or how these are assessed in the context of watchlist creation.

Although more detail on proportionality and necessity considerations is provided in South Wales Police’s LFR DPIA, it is unclear if any of the ICO’s recommendations concern this process.  

While police forces using facial recognition have long maintained that their deployments are intelligence-led and focus exclusively on locating individuals wanted for serious crimes, senior officers from the Metropolitan Police and South Wales Police previously admitted to a Lords committee in December 2023 that both forces select images for their watchlists based on crime categories attached to people’s photos, rather than a context-specific assessment of the threat presented by a given individual.

Computer Weekly asked the ICO whether it is able to confirm if this is still the process for selecting watchlist images at South Wales Police, as well as details on how well police are assessing the proportionality and necessity of their deployments generally, but received no response on these points.

While the ICO summary claims the forces are able to demonstrate the “lawful provenance” of watchlist images, the regulator similarly did not respond to Computer Weekly’s questions about what processes are in place to ensure that the millions of unlawfully held custody images in the Police National Database (PND) are not included in facial recognition watchlists.

Computer Weekly also asked why the ICO is only beginning to audit police facial recognition use now, given that it was first deployed by the Met in August 2016 and has been controversial since its inception.

“The ICO has played an active role in the regulation of FRT since its first use by the Met and South Wales Police around 10 years ago. We investigated the use of FRT by the Met and South Wales and Gwent police and produced an accompanying opinion in 2021. We intervened in the Bridges case on the side of the claimant. We have produced follow-up guidance on our expectations of police forces,” said an ICO spokesperson.

“We are stepping up our supervision of AI [artificial intelligence] and biometric technologies – our new strategy includes a specific focus on the use of FRT by police forces. We are conducting an FRT in Policing project under our AI and biometrics strategy. Audits form a core part of this project, which aims to create clear regulatory expectations and scalable good practice that will influence the wider AI and biometrics landscape.

“Our recommendations in a given audit are context-specific, but any findings that have applicability to other police forces will be included in our Outcomes Report due in spring 2026, once we have completed the rest of the audits in this series.”

EHRC joins judicial review

In mid-August 2025, the Equality and Human Rights Commission (EHRC) was granted permission to intervene in an upcoming judicial review of the Met Police’s use of LFR technology, which it claims is being deployed unlawfully.

“The law is clear: everyone has the right to privacy, to freedom of expression and to freedom of assembly. These rights are vital for any democratic society,” said EHRC chief executive John Kirkpatrick.

“As such, there must be clear rules which guarantee that live facial recognition technology is used only where necessary, proportionate and constrained by appropriate safeguards. We believe that the Metropolitan Police’s current policy falls short of this standard.”

He added: “The Met, and other forces using this technology, need to ensure they deploy it in ways which are consistent with the law and with human rights.”

Writing in a blog about the EHRC joining the judicial review, Chris Pounder, director of data protection training firm Amberhawk, said that, in his view, the statement from Kirkpatrick is “precisely the kind of statement that should have been made by” information commissioner John Edwards.

“In addition, the ICO has stressed the need for FRT deployment ‘with appropriate safeguards in place’. If he [Edwards] joined the judicial review process as an interested party, he could get judicial approval for these much vaunted safeguards (which nobody has seen),” he wrote.

“Instead, the ICO sits on the fence whilst others determine whether or not current FRT processing by the Met Police is ‘strictly necessary’ for its law enforcement functions. The home secretary, for her part, has promised a code of practice which will contain an inevitable bias in favour of the deployment of FRT.”

In an appearance before the Lords Justice and Home Affairs Committee on 8 July, home secretary Yvette Cooper confirmed the government is actively working with police forces and unspecified “stakeholders” to draw up a new governance framework for police facial recognition.

However, she did not comment on whether any new framework would be placed on a statutory footing.




Google is training its AI tools on YouTube videos: These creators aren’t happy



Santa Ana, California-based entrepreneur Charlie Chang spent years posting finance videos on YouTube before he made a profit.

Today, Chang’s media business oversees more than 50 YouTube channels, along with other digital sites, and generates $3 million to $4 million in annual revenue, he said.

But lately, he’s been faced with a new concern: that YouTube’s moves in artificial intelligence will eat into his business.

“The fear is there, and I’m still building the channels, but I am preparing, just in case my channels become irrelevant,” Chang, 33, said. “I don’t know if I’m gonna be building YouTube channels forever.”

YouTube’s parent company, Google, is using a subset of the platform’s videos to train AI applications, including its text-to-video tool Veo. That includes videos made by users who have built their livelihoods on the service, helping turn it into the biggest streaming entertainment provider in the U.S.

The move has sparked deep tensions between the world’s biggest online video company and some of the creators who helped make it a behemoth. Google, creators say, is using their data to train something that could become their biggest competitor.

The schism comes at a pivotal time for Google, which is in a race with rivals including Meta, OpenAI and Runway for dominance in the market for AI-driven video programs. Google has an advantage due to YouTube’s huge video library, with more than 20 billion videos uploaded to its platform as of April.

Many creators worry such tools could make it easier for other people to replicate the style of their videos, by typing in text prompts that could produce images or concepts similar to what popular creators produce. What if AI-generated videos became more popular than their material? Creators say they can’t opt out of AI training and that Google does not compensate them for using videos for such purposes.

“It makes me sad, because I was a big part of this whole creator economy, and now, it’s literally being dismantled by the company that built it,” said Kathleen Grace, a former YouTube employee who is now chief strategy officer at Vermillio, a Chicago-based company that tracks people’s digital likenesses and intellectual property.

“I think they should be with pitchforks outside San Bruno.”

YouTube, founded in 2005, was built on creators posting content. At first, the user-generated videos were amateurish. But eventually, creators got more sophisticated and professional, doing more elaborate stunts and hiring staff to support their productions.

Key to YouTube’s early success was its investment in its video creators. The San Bruno, California-based company shares ad revenue with its creators, which can be huge. That business model has kept creators loyal to YouTube. As they grew their audiences, that in turn increased advertising revenue for both YouTube and creators.

Video creators are typically not employees of YouTube or Google. Many are independents who have built businesses by posting content, making money through ads, brand deals and merchandise.

The creator economy is a bright spot amid struggles in the entertainment industry. Last year, there were more than 490,000 jobs supported by YouTube’s creative ecosystem in the U.S., according to YouTube, citing data from Oxford Economics. YouTube has a greater share of U.S. TV viewership than Netflix and the combined channels of Walt Disney Co., according to Nielsen.

YouTube said it has paid more than $70 billion to creators, artists and media companies from 2021 to 2023.

The company has encouraged creators and filmmakers to use Google’s AI tools to help with brainstorming and creating videos, which could make them faster and more efficient. Some creators said they use AI to help hash out concepts, cut down on production costs and showcase bold ideas.

YouTube is also developing tools that will help identify and manage AI-generated content featuring creators’ likeness. Additionally, it made changes to its privacy policy for people to request removal of AI-generated content that simulates them on the platform, said company spokesman Jack Malon.

“YouTube only succeeds when creators do,” Malon said in a statement. “That partnership, which has delivered billions to the creator economy, is driven by continuous innovation—from the systems that power our recommendations to new AI tools. We’ve always used YouTube data to make these systems better, and we remain committed to building technology that expands opportunity, while leading the industry with safeguards against the misuse of AI.”

But already, creators say they are facing challenges from other people who are using AI to re-create their channels, cutting into their revenue and brand recognition.

“They’re training on things that we, the creators, are creating, but we’re not getting anything in return for the help that we are providing,” said Cory Williams, 44-year-old Oklahoma-based creator of Silly Crocodile, a popular animated character on YouTube.

In other cases, people are using AI to make deepfake versions of creators and falsely posing as them to message fans, said Vermillio’s Grace.

When people upload videos to YouTube, they agree to the company’s terms of service, which grants a royalty-free license to YouTube’s business and its affiliates.

But many creators said they were not aware YouTube videos were used to train Veo until they read about it in media reports. Melissa Hunter, chief executive of Family Video Network, a consulting firm for family-focused creators, said tools like Veo didn’t exist when she signed YouTube’s terms of service years ago.

Back in 2012, Hunter’s son (then 8 years old) wanted to start a YouTube channel together. Her son, now 22, is against AI for environmental reasons, so Hunter made those videos private. But Hunter said Google can still see those videos, and she’s concerned they were used to train Veo without her permission.

“It’s frustrating, and I don’t like it, but I also feel totally helpless to do anything,” Hunter said.

While there are other social media platforms such as TikTok and Instagram that also support content creators, YouTubers say they have already built large audiences on Google’s platform and are reluctant to leave.

“Creators are in a tough spot where this is the best platform to make money … to build real loyal fans,” said Jake Tran, 27, who makes documentary YouTube videos on money, power, war and crime. “So are you going to give up just because Google is using it to train their AI?”

Last year, Tran’s YouTube business made around $1 million in revenue. Tran is also founder of the Scottsdale, Arizona-based skin-care business Evil Goods, and together his businesses employ 40 to 45 part-time and full-time workers.

Other AI companies, including Meta and OpenAI, have come under fire by copyright holders who have accused them of training AI models on their intellectual property. Disney and Universal Pictures sued AI business Midjourney in June for copyright infringement.

Tech industry executives have said that they should be able to train AI models with content available online under the “fair use” doctrine, which allows for the limited reproduction of material without permission from the copyright holder.

Some think creators might have a case if they decided to take their issue to court.

“There’s room to argue that simply by agreeing to the terms of service, they have not granted a license to YouTube or Google for AI training purposes, so that might be something that could be argued in the lawsuit,” said Mark Lezama, a partner at law firm Knobbe Martens. “There’s room to argue on both sides.”

Eugene Lee, CEO of ChannelMeter, a data and payments company for the creator economy, said he believes the only way creators can win is by using AI, not by fighting against it.

“Creators should absolutely embrace it and embrace it early, and embrace it as part of their production process, script generators, thumbnail generators—all these things that will require human labor to do in a massive amount of time and resources and capital,” Lee said.

Nate O’Brien, a Philadelphia creator who oversees YouTube channels about finance, estimates that his revenue will be flat or decline slightly, in part because it’ll be more challenging to get noticed on YouTube.

“It’s just a numbers game there,” O’Brien said. “But I think generally a person making a video would still perform better or rank better than an AI video right now. In a few more years, it might change.”

To prepare for the growth of AI content, O’Brien has been experimenting with using AI for videos on one of his channels, asking his assistant to take a script based on an existing video he made on a different channel and using AI to voice it. While the views have not outpaced the human-created videos, the AI-generated videos are lower in production cost. One garnered 5,000 views, 27-year-old O’Brien said.

Some creators have opted to share their video libraries with outside AI companies in exchange for compensation. For example, Salt Lake City YouTube creator Aaron de Azevedo, who oversees 20 YouTube channels, said he shared 30 terabytes of video footage in a deal with an AI company for roughly $9,000.

“There’s a good chunk of change,” De Azevedo, 40, said. “It was good, paid for most of my wedding.”

© 2025 Los Angeles Times. Distributed by Tribune Content Agency, LLC.

Citation:
Google is training its AI tools on YouTube videos: These creators aren’t happy (2025, August 29)
retrieved 29 August 2025
from https://techxplore.com/news/2025-08-google-ai-tools-youtube-videos.html







DJI’s Mic 3 Takes the Best Wireless Microphone and Makes It Better


I tested the Mic 3 with cameras, computers, and smartphones, using both direct connection and receiver connection methods. It paired painlessly with everything I threw at it, from my mirrorless camera to my iPhone, and the audio quality remained consistently excellent across different devices and environments. It’s part of the OsmoAudio system too, meaning the transmitter can directly link with DJI cameras like the Osmo 360, Osmo Action 5 Pro, and Osmo Pocket 3, bypassing the receiver entirely while still offering high-quality audio.

Missing Pieces


The Mic 3 isn’t perfect, but I found little to complain about. The transmitters no longer include a 3.5-mm input for connecting external lavalier microphones, which might frustrate people who prefer to hide their mics completely. DJI has also dropped the Safety Track recording mode that was available on the Mic 2, but it’s entirely possible to rig one up using the available options.

US availability remains uncertain; like other recent DJI products, the Mic 3 isn’t officially launching in America due to ongoing tariff concerns. US consumers may be able to source units through third-party retailers, but that’s far from ideal for a product that should be widely available. At $329 for the complete package of two transmitters, one receiver, and a charging case, the Mic 3 is actually cheaper than the Mic 2 was at launch, which I think is remarkably good value for a product that’s superior in almost every way. DJI’s decision to sell individual components separately is welcome too: it means users can start with a basic setup and expand over time, or replace a damaged or lost component without too much fuss.

The DJI Mic 3 essentially combines the best aspects of both the Mic 2 and Mic Mini into a single, well-rounded package. It’s more compact and practical than the Mic 2, and far more advanced than the Mini. For content creators, filmmakers, and podcasters looking for a wireless microphone system that just works, it’s very hard to find fault with it.

The only real question is whether existing Mic 2 owners need to upgrade. If the improved portability and expanded feature set appeal to you, the Mic 3 represents a solid step forward. But the Mic 2 remains an excellent microphone in its own right, so there’s no urgent need to make the switch unless those new features and upgrades genuinely solve problems you’re currently facing.


