Tech
California enacts AI safety law targeting tech giants
California Governor Gavin Newsom has signed into law groundbreaking legislation requiring the world’s largest artificial intelligence companies to publicly disclose their safety protocols and report critical incidents, state lawmakers announced Monday.
Senate Bill 53 marks California’s most significant move yet to regulate Silicon Valley’s rapidly advancing AI industry while also maintaining its position as a global tech hub.
“With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails,” State Senator Scott Wiener, the bill’s sponsor, said in a statement.
The new law marks a successful second attempt by Wiener to establish AI safety regulations; Newsom vetoed his previous bill, SB 1047, following furious pushback from the tech industry.
It also comes after a failed attempt by the Trump administration to bar states from enacting AI regulations, on the argument that a patchwork of state rules would create regulatory chaos and slow American innovation in the race with China.
Under the new law, major AI companies must publicly disclose their safety and security protocols, in redacted form to protect intellectual property.
They must also report critical safety incidents—including model-enabled weapons threats, major cyber-attacks, or loss of model control—to state officials within 15 days.
The legislation also establishes whistleblower protections for employees who reveal evidence of dangers or violations.
According to Wiener, California’s approach differs from the European Union’s landmark AI Act, which requires private disclosures to government agencies.
SB 53, meanwhile, mandates public disclosure to ensure greater accountability.
In what advocates describe as a world-first provision, the law requires companies to report instances where AI systems engage in dangerous deceptive behavior during testing.
For example, if an AI system lies about the effectiveness of controls designed to prevent it from assisting in bioweapon construction, developers must disclose the incident if it materially increases catastrophic harm risks.
The working group behind the law was led by prominent experts including Stanford University’s Fei-Fei Li, known as the “godmother of AI.”
© 2025 AFP
Tech
OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters
OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability when their models are used to cause serious societal harms, such as the death or serious injury of 100 or more people, or at least $1 billion in property damage.
The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.
The bill would shield frontier AI developers from liability for “critical harms” caused by their frontier models, provided they did not intentionally or recklessly cause the incident and have published safety, security, and transparency reports on their websites. It defines a frontier model as any AI model trained using more than $100 million in computational costs, a threshold that would likely apply to America’s largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.
“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” said OpenAI spokesperson Jamie Radice in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”
Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. A critical harm also includes an AI model engaging in conduct on its own that, if committed by a human, would constitute a criminal offense and that leads to those extreme outcomes. Under SB 3444, if a model caused any of these harms, the lab behind it could not be held liable, so long as the lab did not act intentionally or recklessly and had published its reports.
Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be held liable for these types of harm caused by their technology. But as AI labs continue to release more powerful models that raise novel safety and cybersecurity challenges, such as Anthropic’s Claude Mythos, these questions feel increasingly pressing.
In her testimony supporting SB 3444, Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a note consistent with the Trump administration’s crackdown on state AI safety laws, claiming it’s important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” That stance also reflects the broader view in Silicon Valley in recent years that AI legislation must not hamper America’s position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they “reinforce a path toward harmonization with federal systems.”
“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.
Scott Wisor, policy director for the Secure AI Project, tells WIRED he believes the bill has a slim chance of passing, given Illinois’ reputation for aggressively regulating technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.
Tech
China Is Cracking Down on Scams. Just Not the Ones Hitting Americans
Governments around the world have been struggling to address the rise of industrial-scale scamming operations based in countries like Laos, Myanmar, and Cambodia that have cost victims billions of dollars over the past few years. The operations often have ties to Chinese organized crime, use forced labor to carry out the actual scamming, and rely on vast money laundering networks to collect a profit. They have become so widespread and ingrained in the region that even major international law enforcement collaborations targeting individual scam centers or kingpins haven’t been able to stem the tide.
The FBI said this week that losses reported by Americans in “cyber-enabled” scam complaints totaled more than $17.7 billion last year—likely a major undercount of the real total, given that many victims don’t report their experiences. Some US officials say that a major barrier to comprehensively addressing the issue is the lack of collaboration with Chinese authorities. China’s efforts to address industrial scamming, they argue, appear aimed at reducing the number of Chinese citizens being victimized rather than comprehensively stopping the activity to protect victims around the world.
“To its credit, China has cracked down on these operations, but it has done so selectively, largely turning a blind eye to scam centers victimizing foreigners,” Reva Price, a member of the US-China Economic and Security Review Commission, said at a Senate hearing last month. “As a result, the Chinese criminal syndicates have been incentivized to shift toward targeting Americans.”
According to research the commission published in March, Beijing’s selective strategy has helped embolden some Chinese scammers, even those working within China, to continue operating so long as they exclusively target foreigners.
Other US-based researchers have come to similar conclusions. From 2023 to 2024, China reported a 30 percent decrease in the amount of money its citizens lost to scams, while the US suffered a more than 40 percent increase, according to congressional testimony last year by Jason Tower, who was then the Myanmar country director for the US Institute of Peace’s Program on Transnational Crime and Security in Southeast Asia. In response to Beijing’s enforcement dynamics, Tower said at the time, “the scam syndicates are increasingly pivoting to target the rest of the world, and especially Americans.”
The United Nations Office on Drugs and Crime noted last year that scam centers have been diversifying their worker pools, shifting from predominantly trafficking Chinese nationals and other Chinese speakers to entrapping people from a broader array of countries and backgrounds who speak various languages. UN researchers attributed this change in part to attackers broadening their targets to include different populations around the world. But they added that the dynamic also seemed to be a reaction to Chinese enforcement and Beijing’s efforts to protect Chinese citizens.
“China is doing more to fight fraud—like orders of magnitude more—than any other country,” says Gary Warner, a longtime digital scams researcher and director of intelligence at the cybersecurity firm DarkTower. “But I would agree that the crackdown by China on people scamming China has squeezed the balloon so to speak and led to more international and American targeting.”
The Chinese government has spent years investing in national safety campaigns warning citizens about the threat of scams and how to avoid falling victim to them. Some of the public discourse appeals to a sense of national solidarity: a common meme in China, 中国人不骗中国人 (literally, “Chinese people don’t deceive Chinese people”), is used to signal trust when swapping restaurant recommendations or job leads. In the context of digital scams, a pointed variant has emerged: “Chinese don’t scam Chinese.”
Tech
The 70-Person AI Image Startup Taking on Silicon Valley’s Giants
Standing inside the HumanX conference in San Francisco’s Moscone Center, it’s hard not to feel like you’re at the center of the AI universe. Technology leaders swarm the building, and the headquarters of OpenAI and Anthropic are just down the block. But a 70-person startup headquartered 5,000 miles away in Germany’s Black Forest—a region famous for its ham—has become a top competitor to Silicon Valley’s leading labs in AI image generation.
In December, Black Forest Labs raised funds at a $3.25 billion valuation, after signing deals to power AI image-generation features for Adobe and the graphic design platform Canva. It has also struck agreements with major tech companies like Microsoft, Meta, and xAI to power similar features in their products.
Nearly two years after launch, Black Forest Labs can afford to be picky about who it works with. In 2024, Elon Musk’s xAI tapped Black Forest Labs to power Grok’s first image generator. The partnership put Black Forest Labs on the map but generated considerable controversy due to the chatbot’s limited safeguards. It ended months later, when xAI developed an in-house AI image model.
In recent months, xAI approached Black Forest Labs about licensing the startup’s technology again, sources familiar with the matter tell WIRED. This time around, Black Forest Labs declined, the sources said, deeming it too operationally difficult to partner with xAI, which has a famously chaotic work environment. xAI did not immediately respond to WIRED’s request for comment.
In September, Black Forest Labs struck a $140 million multiyear deal to give Meta access to its AI image-generation technology.
These AI labs want to work with Black Forest Labs because its image generators are among the world’s best, ranking just below OpenAI’s and Google’s offerings on benchmarks from the third-party firm Artificial Analysis. The startup also offers some of the most downloaded text-to-image models on Hugging Face, suggesting that many of the AI image tools on the market are powered by a free version of Black Forest Labs’ technology.
That standing is particularly impressive given that the company has historically had far fewer resources than its competitors. The constraint pushed it toward a more efficient line of research called latent diffusion, in which an AI model first sketches out a rough, compressed blueprint of an image and then paints in the detail.
Latent diffusion “enabled us to put out very powerful models that took orders of magnitude less resources than our competitors’ models,” said cofounder Andreas Blattmann in an interview with WIRED onstage at HumanX this week.
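In code terms, Blattmann’s description maps onto a two-stage pipeline: run the expensive iterative denoising loop in a small latent space, then decode once to expand the blueprint into pixels. The Python sketch below is a toy illustration of that structure only; the shapes, step count, and stand-in functions are hypothetical simplifications, not Black Forest Labs’ actual models.

```python
# Toy sketch of the latent-diffusion idea (hypothetical, simplified):
# denoise a compact "blueprint" rather than full-resolution pixels,
# then let a decoder paint in the detail in a single pass.
import numpy as np

rng = np.random.default_rng(0)

LATENT_SHAPE = (32, 32, 4)   # compact blueprint (vs. e.g. 1024x1024x3 pixels)

def denoise_step(latent: np.ndarray, step: int, total: int) -> np.ndarray:
    """Stand-in for one learned denoising step in latent space."""
    noise_level = 1.0 - (step + 1) / total
    return latent * (1.0 - 0.1 * noise_level)  # toy: nudge toward a 'clean' latent

def decode(latent: np.ndarray) -> np.ndarray:
    """Stand-in for the decoder that upsamples the blueprint into pixels."""
    upsampled = latent[..., :3].repeat(8, axis=0).repeat(8, axis=1)
    return np.clip(upsampled, 0.0, 1.0)

# Start from pure noise in the cheap latent space...
latent = rng.standard_normal(LATENT_SHAPE)
for step in range(50):                     # ...iteratively denoise the blueprint...
    latent = denoise_step(latent, step, 50)
image = decode(latent)                     # ...then decode once into a full image.
print(image.shape)  # (256, 256, 3)
```

The efficiency claim follows from the shapes: the 50-step loop touches a 32x32x4 array instead of a megapixel image, and the costly upsampling happens only once.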
Despite its success, Black Forest Labs believes image generation is just the beginning. Blattmann said the startup plans to unveil a robot powered by one of its AI models later this year. (He did not reveal what company is making the hardware.) The push is part of a larger opportunity the company sees to build AI that can perceive and take actions in the physical world.
“Visual intelligence is so much more than content creation. Content creation is just the first segue into this entire technology,” said Blattmann. “What I’m personally super excited about—and that’s a pattern throughout this conference—is physical AI.”
Black Forest Labs is also in talks with a handful of hardware companies to power features in products like smart glasses and robots, sources tell WIRED.
Building in the Black Forest
Blattmann and his cofounders, Robin Rombach and Patrick Esser, made a name for themselves publishing some groundbreaking research on AI image models in 2021. In 2022, they were hired by Stability AI and released Stable Diffusion, a popular open source AI image generator based on their prior research. But two years later, they announced their departure and launched Black Forest Labs.
Rather than move to San Francisco, the trio decided to maintain a headquarters near their hometowns in Freiburg, Germany. Blattmann said the decision has been key to the company’s success.