Tech
Who is Zico Kolter? A professor leads OpenAI safety panel with power to halt unsafe AI releases
If you believe artificial intelligence poses grave risks to humanity, then a professor at Carnegie Mellon University has one of the most important roles in the tech industry right now.
Zico Kolter leads a four-person panel at OpenAI that has the authority to halt the ChatGPT maker’s release of new AI systems if it finds them unsafe. That could be technology so powerful that an evildoer could use it to make weapons of mass destruction. It could also be a new chatbot so poorly designed that it will hurt people’s mental health.
“Very much we’re not just talking about existential concerns here,” Kolter said in an interview with The Associated Press. “We’re talking about the entire swath of safety and security issues and critical topics that come up when we start talking about these very widely used AI systems.”
OpenAI tapped the computer scientist to be chair of its Safety and Security Committee more than a year ago, but the position took on heightened significance last week when California and Delaware regulators made Kolter’s oversight a key part of their agreements to allow OpenAI to form a new business structure to more easily raise capital and make a profit.
Safety has been central to OpenAI’s mission since it was founded as a nonprofit research laboratory a decade ago with a goal of building better-than-human AI that benefits humanity. But after its release of ChatGPT sparked a global AI commercial boom, the company has been accused of rushing products to market before they were fully safe in order to stay at the front of the race. Internal divisions that led to the temporary ouster of CEO Sam Altman in 2023 brought concerns that the company had strayed from its mission to a wider audience.
The San Francisco-based organization faced pushback—including a lawsuit from co-founder Elon Musk—when it began steps to convert itself into a more traditional for-profit company to continue advancing its technology.
Agreements announced last week by OpenAI along with California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings aimed to assuage some of those concerns.
At the heart of the formal commitments is a promise that decisions about safety and security must come before financial considerations as OpenAI forms a new public benefit corporation that is technically under the control of its nonprofit OpenAI Foundation.
Kolter will be a member of the nonprofit’s board but not of the for-profit board. He will, however, have “full observation rights” to attend all for-profit board meetings and to access the information it receives about AI safety decisions, according to Bonta’s memorandum of understanding with OpenAI. Kolter is the only person, besides Bonta, named in the lengthy document.
Kolter said the agreements largely confirm that his safety committee, formed last year, will retain the authorities it already had. The other three members also sit on the OpenAI board; one of them is former U.S. Army General Paul Nakasone, who commanded U.S. Cyber Command. Altman stepped down from the safety panel last year in a move seen as giving it more independence.
“We have the ability to do things like request delays of model releases until certain mitigations are met,” Kolter said. He declined to say if the safety panel has ever had to halt or mitigate a release, citing the confidentiality of its proceedings.

Kolter said there will be a variety of concerns about AI agents to consider in the coming months and years, from cybersecurity—”Could an agent that encounters some malicious text on the internet accidentally exfiltrate data?”—to security concerns surrounding AI model weights, which are numerical values that influence how an AI system performs.
“But there’s also topics that are either emerging or really specific to this new class of AI model that have no real analogues in traditional security,” he said. “Do models enable malicious users to have much higher capabilities when it comes to things like designing bioweapons or performing malicious cyberattacks?”
“And then finally, there’s just the impact of AI models on people,” he said. “The impact to people’s mental health, the effects of people interacting with these models and what that can cause. All of these things, I think, need to be addressed from a safety standpoint.”
OpenAI has already faced criticism this year about the behavior of its flagship chatbot, including a wrongful-death lawsuit from California parents whose teenage son killed himself in April after lengthy interactions with ChatGPT.
Kolter, director of Carnegie Mellon’s machine learning department, began studying AI as a Georgetown University freshman in the early 2000s, long before it was fashionable.
“When I started working in machine learning, this was an esoteric, niche area,” he said. “We called it machine learning because no one wanted to use the term AI because AI was this old-time field that had overpromised and underdelivered.”
Kolter, 42, has been following OpenAI for years and was close enough to its founders that he attended its launch party at an AI conference in 2015. Still, he didn’t expect how rapidly AI would advance.
“I think very few people, even people working in machine learning deeply, really anticipated the current state we are in, the explosion of capabilities, the explosion of risks that are emerging right now,” he said.
AI safety advocates will be closely watching OpenAI’s restructuring and Kolter’s work. One of the company’s sharpest critics says he’s “cautiously optimistic,” particularly if Kolter’s group “is actually able to hire staff and play a robust role.”
“I think he has the sort of background that makes sense for this role. He seems like a good choice to be running this,” said Nathan Calvin, general counsel at the small AI policy nonprofit Encode. Calvin, whom OpenAI targeted with a subpoena at his home as part of its fact-finding to defend against the Musk lawsuit, said he wants OpenAI to stay true to its original mission.
“Some of these commitments could be a really big deal if the board members take them seriously,” Calvin said. “They also could just be the words on paper and pretty divorced from anything that actually happens. I think we don’t know which one of those we’re in yet.”
© 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
Citation:
Who is Zico Kolter? A professor leads OpenAI safety panel with power to halt unsafe AI releases (2025, November 2)
retrieved 2 November 2025
from https://techxplore.com/news/2025-11-zico-kolter-professor-openai-safety.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
Denmark inaugurates rare low-carbon hydrogen plant
Denmark inaugurated one of Europe’s few low-carbon hydrogen plants on Monday, a sector touted as a key to cleaner energy but plagued with challenges.
Using eight electrolyzers powered by solar and wind energy, the HySynergy project will produce around eight tonnes of hydrogen a day in its first phase, to be transported to a nearby refinery and to Germany.
Hydrogen has been touted as a potential energy game-changer that could decarbonize industry and heavy transport.
Unlike fossil fuels, which emit planet-warming carbon, hydrogen simply produces water vapor when burned.
But producing so-called “green hydrogen” remains a challenge, and the sector is still struggling to take off in Europe, with a multitude of projects abandoned or delayed.
Originally scheduled to open in 2023, the HySynergy project, based in Fredericia in western Denmark, has suffered from delays.
According to the International Energy Agency (IEA), only four plants with a capacity greater than one megawatt currently produce low-carbon hydrogen in Europe.
HySynergy’s initial capacity is 20 megawatts, but “our ambitions grow far beyond” that, said Jakob Korsgaard, founder and CEO of Everfuel, which owns 51% of the project.
“We have power connection, we have land, we have utilities starting to be ready for expansions right here, up to 350 megawatts,” he told AFP.
With the technology not yet fully mature, hydrogen often remains far too expensive compared to the gas and oil it aims to replace, mainly due to the cost of electricity required for its production.
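To put the article’s figures in perspective, a rough back-of-envelope calculation shows why electricity price dominates the cost of electrolytic hydrogen. The 20-megawatt capacity and roughly eight tonnes a day come from the article; the specific energy consumption (~53 kWh per kilogram of hydrogen) and the electricity price are illustrative assumptions, not figures from the report:

```python
# Rough electrolysis economics sketch. Capacity (20 MW) and output
# (~8 t/day) are from the article; specific energy and electricity
# price are assumed, typical-order-of-magnitude values.

CAPACITY_MW = 20                   # initial HySynergy electrolyzer capacity
SPECIFIC_ENERGY_KWH_PER_KG = 53    # assumed energy to produce 1 kg of H2

# Maximum daily output if the plant ran 24 hours at full load:
max_kg_per_day = CAPACITY_MW * 1000 * 24 / SPECIFIC_ENERGY_KWH_PER_KG
print(f"theoretical max: {max_kg_per_day / 1000:.1f} t/day")   # ~9.1 t/day

# "Around eight tonnes a day" then implies high utilization:
utilization = 8000 / max_kg_per_day
print(f"implied utilization: {utilization:.0%}")               # ~88%

# Electricity cost per kg at an assumed price of 0.06 EUR/kWh:
PRICE_EUR_PER_KWH = 0.06
cost_per_kg = SPECIFIC_ENERGY_KWH_PER_KG * PRICE_EUR_PER_KWH
print(f"electricity cost: EUR {cost_per_kg:.2f}/kg")           # ~EUR 3.18/kg
```

Under these assumptions, electricity alone puts hydrogen around a few euros per kilogram, which is why the power price, rather than the hardware, is the lever the sector is watching.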
Outside of China, which leads the sector, the “slower-than-expected deployment” is limiting the potential cost reductions from larger-scale production, the IEA said in a recent report.
It added that “only a small share of all announced projects are expected to be operative by 2030.”
“The growth of green hydrogen depends on the political momentum,” Korsgaard said, urging European Union countries and politicians to push for ambitious implementation of the EU’s so-called RED III renewable energy directive.
The directive sets a goal of at least 42.5% renewable energy in the EU’s gross final consumption by 2030, and highlights fuels such as low-carbon hydrogen.
© 2025 AFP
Citation:
Denmark inaugurates rare low-carbon hydrogen plant (2025, November 3)
retrieved 3 November 2025
from https://techxplore.com/news/2025-11-denmark-inaugurates-rare-carbon-hydrogen.html
Researchers launch smoke-sensing drones that one day could fight wildfires
Plumes of smoke drifted up from a fire steadily taking over a 30-acre prairie at Cedar Creek Ecosystem Science Reserve, north of the Twin Cities. Amid the haze, five black drones zipped around.
More than 150 feet below the flying robots, research student Nikil Krishnakumar raised the controller in the air. (The team’s work has been published on the arXiv preprint server.)
“It’s all autonomous now,” he said. “I’m not doing anything.”
The aerial robotic team’s mission: examine the smoke from the prescribed burn and send the data to a computer on the ground. The computer then analyzes the smoke data to understand the fire’s flow patterns, Krishnakumar said.
The University of Minnesota project is the latest research into using artificial intelligence to detect and track wildfires. The work has become more urgent as climate change is expected to make wildfires, like those that devastated Manitoba this summer, larger and more frequent.
NOAA’s Next-Generation Fire System consists of two satellites 22,000 miles above the equator that detect new sources of heat and report them to local National Weather Service stations and its online dashboard. Earlier this year, the satellites were credited with spotting 19 fires in Oklahoma and preventing $850 million in structure and property damage, according to the agency.
In Minnesota, Xcel has installed tower-mounted, AI-equipped high-definition cameras near power lines in Mankato and Clear Lake. Thirty-six more are planned. When a fire is detected, local fire departments are notified.
Krishnakumar and other members of the U’s research team performed their 11th trial at the U’s field station in East Bethel on Friday, with notable improvements from their previous attempts.
The first-generation drones crashed several times during previous field tests, Krishnakumar said. The team upgraded sensors for better data collecting and autonomous steering, and improved the drones’ propulsion by making them bigger and fitting them with better propellers.
“The big picture is one day these drones can be used to understand where the wildfires go, how they behave and to perform large-scale surveillance of wildfires,” Krishnakumar said. “The major challenge we’re trying to understand is how far these smoke particles can be transported and the altitude at which they can go.”
Understanding the behavior of particles like embers can help firefighters prevent wildfires from spreading, said Yue Weng, another researcher on the team.
Though the project has a way to go before it can be used for large-scale wildfires, the research represents a significant step toward using fully autonomous drone systems for emergency response and scientific research missions, said Jiarong Hong, professor at the University of Minnesota’s Department of Mechanical Engineering.
This year, 1,200 wildfires have been recorded in Minnesota so far, according to the state Department of Natural Resources. On a smaller scale, the technology could also be used to better manage prescribed burns, Hong said. Between 2012 and 2021, prescribed burns that went out of control caused 43 wildfires nationwide, according to the Associated Press.
“To characterize and measure particle transport in the real field is very challenging. Traditionally, people do small-scale lab experiments and study this at a fundamental level,” Hong said. “Such an experiment doesn’t capture the complexity involved in the real field environment.”
Smoke changes direction with the wind. Deploying multiple drones—with one at the center managing the four around it—enables them to navigate in the air without human intervention, Hong said.
The 11-pound drones were custom-built by the students to autonomously collect particle data. Future improvements to the project include collecting more data and extending the battery life of the drones. The drones are currently able to operate in the air for about 25 minutes, less in colder temperatures, Hong said.
“We have drones flying out at different heights, so we can actually measure the particle composition at different elevations at the same time,” Hong said.
“Particles are in a very irregular shape and some of them are porous and have varying levels of density. But we have been able to characterize their morphology and shape for the very first time.”
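Hong’s simultaneous multi-height measurements amount to building a vertical concentration profile from readings taken at different altitudes. A minimal sketch of that idea follows; the sample format, field names, and 25-meter band width are hypothetical illustrations, not details from the team’s published system:

```python
from collections import defaultdict

def altitude_profile(samples, band_m=25):
    """Average particle count per altitude band.

    samples: iterable of (altitude_m, particle_count) pairs, e.g. one
    reading per drone; band_m: width of each altitude bin in meters.
    """
    bins = defaultdict(list)
    for altitude_m, particle_count in samples:
        # Assign each reading to the altitude band it falls in.
        bins[int(altitude_m // band_m) * band_m].append(particle_count)
    # Average the readings within each band, lowest band first.
    return {band: sum(v) / len(v) for band, v in sorted(bins.items())}

# Five drones sampling at staggered heights (hypothetical values):
readings = [(30, 120), (32, 110), (55, 80), (60, 75), (110, 20)]
print(altitude_profile(readings))
# → {25: 115.0, 50: 77.5, 100: 20.0}
```

A profile like this, thinning toward higher bands, is the kind of vertical structure the team uses to study how far and how high smoke particles travel.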
More information:
Nikil Krishnakumar et al, 3D Characterization of Smoke Plume Dispersion Using Multi-View Drone Swarm, arXiv (2025). DOI: 10.48550/arxiv.2505.06638
2025 The Minnesota Star Tribune. Distributed by Tribune Content Agency, LLC
Citation:
Researchers launch smoke-sensing drones that one day could fight wildfires (2025, November 3)
retrieved 3 November 2025
from https://techxplore.com/news/2025-11-drones-day-wildfires.html
An Anarchist’s Conviction Offers a Grim Foreshadowing of Trump’s War on the ‘Left’
By the standards of the San Francisco Bay Area’s hard left, Casey Goonan’s crimes were unremarkable. A police SUV partially burned by an incendiary device on UC Berkeley’s campus. A planter of shrubs lit on fire after Goonan unsuccessfully tried to smash a glass office window and throw a firebomb into the federal building in downtown Oakland.
But thanks to a series of communiques in which Goonan claimed to have carried out the summer 2024 attacks in solidarity with Hamas, and to the East Bay native’s anarchist beliefs, federal prosecutors claimed Goonan “intended to promote” terrorism on top of a felony count for using an incendiary device. Goonan’s original charges notably did not contain terrorism counts. In late September, US District Court Judge Jeffrey White sentenced Goonan, who was called “a domestic terrorist” during the hearing, to 19 and a half years in prison plus 15 years of probation. Prosecutors also asked that Goonan be sent to a Bureau of Prisons facility that contains a Communications Management Unit, a highly restrictive assignment reserved for what the government claims are “extremist” inmates with terrorism-related offenses or affiliations.
Although Goonan’s case began under the Biden Administration, it offers a glimpse of the approach the Department of Justice may take in President Donald Trump’s forthcoming offensive against the “left,” formalized in late September in National Security Presidential Memorandum 7 (NSPM-7), an executive order targeting anti-fascist beliefs, opposition to Immigration and Customs Enforcement raids, and criticism of capitalism and Christianity as potential “indicators of terrorism.”
In addition to Goonan’s purported admiration for Hamas, a designated terrorist organization since 1997, and cofounding of True Leap, a tiny anarchist publisher, the 35-year-old, who holds a doctorate in African-American Studies, has another trait being targeted by the Trump administration and its allies: Goonan identifies as transgender. While NSPM-7 cites “extremism on migration, race, and gender” as an indicator of “this pattern of violent and terroristic tendencies,” the Heritage Foundation has attempted to link gender-fluid identity to mass shootings and is urging the FBI to create a new, specious domestic terrorism classification of “Transgender Ideology-Inspired Violent Extremism,” or TIVE.
The executive order, meanwhile, directs the American security state’s sprawling post-9/11 counterterrorism apparatus to be reoriented away from neo-Nazis, Proud Boys, white nationalists, Christian nationalists, and other extreme right-wing actors, who have been responsible for the overwhelming majority of political violence in recent decades, and toward opponents of ICE, anti-fascists, and opponents of the administration writ large. Along with potentially violent actors, NSPM-7 instructs federal law enforcement to scrutinize nonprofit groups and philanthropic foundations that fund organizations espousing amorphous ideologies, from “support for the overthrow of the United States Government” to “hostility towards those who hold traditional American views on family, religion, and morality.”
“NSPM-7 is the natural culmination of ‘radicalization theory’ as the basis for the American approach to counterterrorism,” says Mike German, a retired FBI agent who spent years infiltrating violent white supremacist groups and quit the Bureau in response to its post-9/11 shift in terrorism strategy. German explored radicalization theory’s trajectory in his 2019 book, Disrupt, Discredit and Divide: How the New FBI Damages Democracy.