Tech
Who is Zico Kolter? A professor leads OpenAI safety panel with power to halt unsafe AI releases
If you believe artificial intelligence poses grave risks to humanity, then a professor at Carnegie Mellon University has one of the most important roles in the tech industry right now.
Zico Kolter leads a four-person panel at OpenAI that has the authority to halt the ChatGPT maker’s release of new AI systems if it finds them unsafe. That could be technology so powerful that an evildoer could use it to make weapons of mass destruction. It could also be a new chatbot so poorly designed that it will hurt people’s mental health.
“Very much we’re not just talking about existential concerns here,” Kolter said in an interview with The Associated Press. “We’re talking about the entire swath of safety and security issues and critical topics that come up when we start talking about these very widely used AI systems.”
OpenAI tapped the computer scientist to be chair of its Safety and Security Committee more than a year ago, but the position took on heightened significance last week when California and Delaware regulators made Kolter’s oversight a key part of their agreements to allow OpenAI to form a new business structure to more easily raise capital and make a profit.
Safety has been central to OpenAI’s mission since it was founded as a nonprofit research laboratory a decade ago with a goal of building better-than-human AI that benefits humanity. But after its release of ChatGPT sparked a global AI commercial boom, the company has been accused of rushing products to market before they were fully safe in order to stay at the front of the race. Internal divisions that led to the temporary ouster of CEO Sam Altman in 2023 brought concerns that the company had strayed from its mission to a wider audience.
The San Francisco-based organization faced pushback—including a lawsuit from co-founder Elon Musk—when it began steps to convert itself into a more traditional for-profit company to continue advancing its technology.
Agreements announced last week by OpenAI along with California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings aimed to assuage some of those concerns.
At the heart of the formal commitments is a promise that decisions about safety and security must come before financial considerations as OpenAI forms a new public benefit corporation that is technically under the control of its nonprofit OpenAI Foundation.
Kolter will be a member of the nonprofit’s board but not of the for-profit board. He will, however, have “full observation rights” to attend all for-profit board meetings and access to the information that board receives about AI safety decisions, according to Bonta’s memorandum of understanding with OpenAI. Kolter is the only person, besides Bonta, named in the lengthy document.
Kolter said the agreements largely confirm that his safety committee, formed last year, will retain the authorities it already had. The other three members also sit on the OpenAI board—one of them is former U.S. Army General Paul Nakasone, who was commander of the U.S. Cyber Command. Altman stepped down from the safety panel last year in a move seen as giving it more independence.
“We have the ability to do things like request delays of model releases until certain mitigations are met,” Kolter said. He declined to say if the safety panel has ever had to halt or mitigate a release, citing the confidentiality of its proceedings.

Kolter said there will be a variety of concerns about AI agents to consider in the coming months and years, from cybersecurity—”Could an agent that encounters some malicious text on the internet accidentally exfiltrate data?”—to security concerns surrounding AI model weights, which are numerical values that influence how an AI system performs.
“But there’s also topics that are either emerging or really specific to this new class of AI model that have no real analogues in traditional security,” he said. “Do models enable malicious users to have much higher capabilities when it comes to things like designing bioweapons or performing malicious cyberattacks?”
“And then finally, there’s just the impact of AI models on people,” he said. “The impact to people’s mental health, the effects of people interacting with these models and what that can cause. All of these things, I think, need to be addressed from a safety standpoint.”
OpenAI has already faced criticism this year about the behavior of its flagship chatbot, including a wrongful-death lawsuit from California parents whose teenage son killed himself in April after lengthy interactions with ChatGPT.
Kolter, director of Carnegie Mellon’s machine learning department, began studying AI as a Georgetown University freshman in the early 2000s, long before it was fashionable.
“When I started working in machine learning, this was an esoteric, niche area,” he said. “We called it machine learning because no one wanted to use the term AI because AI was this old-time field that had overpromised and underdelivered.”
Kolter, 42, has been following OpenAI for years and was close enough to its founders that he attended its launch party at an AI conference in 2015. Still, he didn’t expect how rapidly AI would advance.
“I think very few people, even people working in machine learning deeply, really anticipated the current state we are in, the explosion of capabilities, the explosion of risks that are emerging right now,” he said.
AI safety advocates will be closely watching OpenAI’s restructuring and Kolter’s work. One of the company’s sharpest critics says he’s “cautiously optimistic,” particularly if Kolter’s group “is actually able to hire staff and play a robust role.”
“I think he has the sort of background that makes sense for this role. He seems like a good choice to be running this,” said Nathan Calvin, general counsel at the small AI policy nonprofit Encode. Calvin, whom OpenAI targeted with a subpoena at his home as part of its fact-finding to defend against the Musk lawsuit, said he wants OpenAI to stay true to its original mission.
“Some of these commitments could be a really big deal if the board members take them seriously,” Calvin said. “They also could just be the words on paper and pretty divorced from anything that actually happens. I think we don’t know which one of those we’re in yet.”
© 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
Citation:
Who is Zico Kolter? A professor leads OpenAI safety panel with power to halt unsafe AI releases (2025, November 2)
retrieved 2 November 2025
from https://techxplore.com/news/2025-11-zico-kolter-professor-openai-safety.html
Tackling the housing shortage with robotic microfactories
A national housing shortage is straining finances and communities across the United States. In Massachusetts, at least 222,000 homes will have to be built in the next 10 years to meet the population’s needs. At the same time, there are numerous challenges in traditional construction. There’s a shortage of skilled construction workers. Most projects involve multiple contractors and subcontractors, adding complexity and lag time. And the construction process, as well as the buildings themselves, can be a major source of emissions that contribute to climate change.
Reframe Systems, co-founded by Vikas Enti SM ’20, uses robotics, software, and high-performance materials to address these problems. Founded in 2022, the company deploys microfactories that bring housing fabrication and production closer to the regions where homes are needed. The first homes designed and manufactured in Reframe’s initial microfactory have been completed in Arlington and Somerville, Massachusetts.
Enti’s experiences in MIT System Design and Management (SDM) shaped the company from its start. “Learning how to navigate the system and finding the optimal value for each stakeholder has been a key part of the business strategy,” he says, “and that’s rooted in what I learned at SDM.”
Better tools for system-level problems
Enti applied to SDM’s master of science in engineering and management while he was working at Kiva Systems, overseeing its acquisition by Amazon and transformation into Amazon Robotics. He found that the SDM program’s fundamentals of systems engineering, system architecture, and project management provided him with the tools he needed to address system-level problems in his work.
While he was at MIT, Enti also served as an associate director for the MIT $100K Entrepreneurship Competition, which offers students and researchers mentorship, feedback, and potential funding for their startup ideas. He realized that “there isn’t a single formula for how businesses start, or how long it takes to get them started,” he says, which helped shape his plans to start his own business.
Enti took a leave of absence from MIT to oversee the expansion of Amazon Robotics in Europe. He returned and completed his degree in 2020, writing his thesis on developing technology that could help prevent falls among elderly people. This instinct to use his education for a good cause resurfaced when his daughters were born. He wanted his future business to address a real-world problem and have a social impact, while also reducing carbon emissions.
Growing housing, shrinking emissions
Enti concluded that housing, with immediate real-world impact and a significant share of global carbon emissions, was the right problem to work on. He reached out to his colleagues Aaron Small and Felipe Polido from Amazon Robotics to share his idea for advanced, low-cost factories that could be deployed quickly and close to where they were needed. The two joined him as co-founders.
Currently, the microfactory in Andover, Massachusetts, produces structural panels, with robotics completing wall and ceiling framing and people completing the rest of the work, including wiring and plumbing. Eventually, Reframe hopes to automate more of the building process through further use of robotics. The modular construction process allows for reduced waste and disruption on the eventual home site. And the finished homes are designed to be energy-efficient and ready for solar panel installation. The company is set to start work soon on a group of homes in Devens, Massachusetts.
In addition to the Andover location, Reframe is setting up in southern California to help rebuild homes that were destroyed in the area’s January 2025 wildfires. The company’s software-assisted design process and the adjustability of the microfactories allow them to meet local zoning and building codes and align with the local architectural aesthetic. This means that in Somerville, Reframe’s completed buildings look like modernized versions of the neighboring three-story buildings, known locally as “triple-deckers.” On the other side of the country, Reframe’s design offerings include Spanish-style and craftsman homes.
“Housing is a complex systems problem,” Enti says, explaining the impact SDM has had on his work at Reframe. The methods and tools taught in the integrated core class EM.412 (Foundations of System Design and Management) help him tackle systems-level problems and take the needs of multiple stakeholders into account. The Reframe team used technology roadmapping as they devised their overall business plan, inspired by the work of Olivier de Weck, associate head of the MIT Department of Aeronautics and Astronautics. And lectures on project management from Bryan Moser, SDM’s academic director, remain relevant.
“Embracing the fact that this is a systems problem, and learning how to navigate the system and the stakeholders to make sure we’re finding the optimal value, has been a key part of the business strategy,” Enti says.
Reframe Systems is set to continue learning through iteration as it plans to expand its network of microfactories. The company remains committed to its core vision of sustainably meeting the country’s need for more housing. “I’m grateful we get to do this,” Enti says. “Once you strip away all the robotics, the advanced algorithms, and the factories, these are high-quality, healthy homes that families get to live in and grow.”
Framework Has a Better, More Take-Apart-Able Laptop
Framework, the company that makes laptops designed for optimal repairability, announced a new version of its main product, a laptop with a 13-inch screen. Called the Framework Laptop 13 Pro, it has far better battery life, a touchscreen, and a haptic touchpad, and it is fitted with Intel processors.
At an event in San Francisco today, Framework CEO Nirav Patel showed off the company’s new tech, opening with a joke about making Framework AI—something the company is very much not doing. Framework’s whole thing, after all, is aiming to give users control over the physical tech they use.
“That industry is fighting for you to own nothing, and they own everything,” Patel said about the AI industry. “We’re fighting for a future where you can own everything and be free.”
Framework used the event to detail other updates coming to its 16-inch laptop. It also showed off previews of an official developer kit and a wireless keyboard for controlling your rig from the couch.
Framework 13 Pro
As the name implies, the 13 Pro is a step up from the company’s last version, the Framework 13. It’s also pricier, starting at $1,199 for a DIY Edition that requires assembling the computer yourself. Pre-built units start at $1,499 but can be upgraded with more features. Framework says it will start shipping the 13 Pro in June.
Framework’s signature move for its products is the ability to take the thing apart. The 13 Pro is made with that ethos in mind, so its parts can be easily swapped out, upgraded, or replaced. Four Thunderbolt 4 interfaces let you pick which ports (USB-C, HDMI, etc.) you want and then choose where to place them. Framework says it planned the laptop with cross-generation compatibility in mind, so current Framework 13 laptop owners will be able to take new 13 Pro parts like the mainboard, display, and battery, and put them into their existing machine.
The big changes in the guts of the 13 Pro come from Framework’s shift away from an AMD processor to Intel’s Core Ultra Series 3 processors, which Framework described in its press release as “just insanely efficient.” That efficiency, along with a bigger battery, translates to a claimed battery life of more than 20 hours while streaming 4K Netflix video. That’s almost 12 hours longer than the Framework 13.
OpenAI Beefs Up ChatGPT’s Image Generation Model
OpenAI launched a new image generation AI model on Tuesday, dubbed ChatGPT Images 2.0. This model can generate more than one image from a single prompt, like an entire study booklet, as well as output text, including in non-English languages, like Chinese and Hindi. This release is available globally for ChatGPT and Codex users, with a more powerful version available for paying subscribers.
When any major AI company releases a new image model, it can revive interest and boost usage, especially if social media users adopt a meme-able trend, transforming images of themselves. Last year, Google’s launch of the Nano Banana model was a major moment for the company, especially when users started posting hyperrealistic figurines of themselves online. Earlier this year, ChatGPT Images made waves on social media as users shared AI-generated caricatures.
What’s Different?
Since the new model can tap into ChatGPT’s “reasoning” capabilities, Images 2.0 can search the internet for recent information and generate more than one image at a time. In essence, the bot can use additional steps to output more thorough generations from a single prompt. Images 2.0 also has a more recent knowledge cutoff date: December 2025.
This also means that outputs from the new model are more granular. For example, I generated an infographic with San Francisco’s weather forecast for the next day, as well as activities worth doing. The image ChatGPT generated included accurate weather details for the rainy day, along with accurate-looking drawings of the Ferry Building, Castro Theater, Painted Ladies houses, and Transamerica Pyramid.
Additionally, Images 2.0 is more customizable for users who want unusual aspect ratios. The new model can generate images ranging from 3:1 wide to 1:3 tall, and users can specify the image’s size as part of their prompt to the AI tool.
First Impressions
After a few hours of generating images with the new model, I was generally impressed with the text-rendering capabilities, in English at least. Not long ago, image outputs featuring text, from any of the major models, often included numerous malformed characters or words with errant extra letters. ChatGPT struggled to label images accurately just two years ago, so the cleaner, more complex outputs from Images 2.0 are a sign of continued improvement. Google has also focused on improving image outputs featuring text in its recent iterations of Nano Banana.