
Who is Zico Kolter? A professor leads OpenAI safety panel with power to halt unsafe AI releases



Zico Kolter, head of machine learning at Carnegie Mellon University, delivers a keynote speech at the AI Horizons Summit in Bakery Square on Thursday, Sept. 11, 2025, in Pittsburgh. Credit: Sebastian Foltz/Pittsburgh Post-Gazette via AP

If you believe artificial intelligence poses grave risks to humanity, then a professor at Carnegie Mellon University has one of the most important roles in the tech industry right now.

Zico Kolter leads a four-person panel at OpenAI that has the authority to halt the ChatGPT maker’s release of new AI systems if it finds them unsafe. That could be technology so powerful that an evildoer could use it to make weapons of mass destruction. It could also be a new chatbot so poorly designed that it will hurt people’s mental health.

“Very much we’re not just talking about existential concerns here,” Kolter said in an interview with The Associated Press. “We’re talking about the entire swath of safety and critical topics that come up when we start talking about these very widely used AI systems.”

OpenAI tapped the computer scientist to be chair of its Safety and Security Committee more than a year ago, but the position took on heightened significance last week when California and Delaware regulators made Kolter’s oversight a key part of their agreements to allow OpenAI to form a new business structure to more easily raise capital and make a profit.

Safety has been central to OpenAI’s mission since it was founded as a nonprofit research laboratory a decade ago with the goal of building better-than-human AI that benefits humanity. But after its release of ChatGPT sparked a global commercial AI boom, the company has been accused of rushing products to market before they were fully safe in order to stay at the front of the race. Internal divisions that led to the temporary ouster of CEO Sam Altman in 2023 brought to a wider audience concerns that the company had strayed from its mission.

The San Francisco-based organization faced pushback—including a lawsuit from co-founder Elon Musk—when it began steps to convert itself into a more traditional for-profit company to continue advancing its technology.

Agreements announced last week by OpenAI along with California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings aimed to assuage some of those concerns.

At the heart of the formal commitments is a promise that decisions about safety and security must come before financial considerations as OpenAI forms a new public benefit corporation that is technically under the control of its nonprofit OpenAI Foundation.

Kolter will be a member of the nonprofit’s board but not of the for-profit board. He will, however, have “full observation rights” to attend all for-profit board meetings and access to the information that board receives about AI safety decisions, according to Bonta’s memorandum of understanding with OpenAI. Kolter is the only person, besides Bonta, named in the lengthy document.

Kolter said the agreements largely confirm that his safety committee, formed last year, will retain the authorities it already had. The other three members also sit on the OpenAI board—one of them is former U.S. Army General Paul Nakasone, who was commander of the U.S. Cyber Command. Altman stepped down from the safety panel last year in a move seen as giving it more independence.

“We have the ability to do things like request delays of model releases until certain mitigations are met,” Kolter said. He declined to say if the safety panel has ever had to halt or mitigate a release, citing the confidentiality of its proceedings.


Kolter said there will be a variety of concerns about AI agents to consider in the coming months and years, from cybersecurity—”Could an agent that encounters some malicious text on the internet accidentally exfiltrate data?”—to security concerns surrounding AI model weights, which are numerical values that influence how an AI system performs.

“But there’s also topics that are either emerging or really specific to this new class of AI model that have no real analogues in traditional security,” he said. “Do models enable malicious users to have much higher capabilities when it comes to things like designing bioweapons or performing malicious cyberattacks?”

“And then finally, there’s just the impact of AI models on people,” he said. “The impact to people’s mental health, the effects of people interacting with these models and what that can cause. All of these things, I think, need to be addressed from a safety standpoint.”

OpenAI has already faced criticism this year about the behavior of its flagship chatbot, including a wrongful-death lawsuit from California parents whose teenage son killed himself in April after lengthy interactions with ChatGPT.

Kolter, director of Carnegie Mellon’s machine learning department, began studying AI as a Georgetown University freshman in the early 2000s, long before it was fashionable.

“When I started working in machine learning, this was an esoteric, niche area,” he said. “We called it machine learning because no one wanted to use the term AI because AI was this old-time field that had overpromised and underdelivered.”

Kolter, 42, has been following OpenAI for years and was close enough to its founders that he attended its launch party at an AI conference in 2015. Still, he didn’t expect how rapidly AI would advance.

“I think very few people, even people working in machine learning deeply, really anticipated the current state we are in, the explosion of capabilities, the explosion of risks that are emerging right now,” he said.

AI safety advocates will be closely watching OpenAI’s restructuring and Kolter’s work. One of the company’s sharpest critics says he’s “cautiously optimistic,” particularly if Kolter’s group “is actually able to hire staff and play a robust role.”

“I think he has the sort of background that makes sense for this role. He seems like a good choice to be running this,” said Nathan Calvin, general counsel at the small AI policy nonprofit Encode. Calvin, whom OpenAI targeted with a subpoena at his home as part of its fact-finding to defend against the Musk lawsuit, said he wants OpenAI to stay true to its original mission.

“Some of these commitments could be a really big deal if the board members take them seriously,” Calvin said. “They also could just be the words on paper and pretty divorced from anything that actually happens. I think we don’t know which one of those we’re in yet.”

© 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.


Epstein Files Reveal Peter Thiel’s Elaborate Dietary Restrictions



Peter Thiel—the billionaire venture capitalist, PayPal and Palantir cofounder, and outspoken commentator on all matters relating to the “Antichrist”—appears at least 2,200 times in the latest batch of files released by the Department of Justice related to convicted sex offender and disgraced financier Jeffrey Epstein.

The tranche of records demonstrates how Epstein managed to cultivate an extensive network of wealthy and influential figures in Silicon Valley. A number of them, including Thiel, continued to interact with Epstein even after his 2008 guilty plea for solicitation of prostitution and procurement of minors to engage in prostitution.

The new files show that Thiel arranged to meet with Epstein several times between 2014 and 2017. “What are you up to on Friday?” Thiel wrote to Epstein on April 5, 2016. “Should we try for lunch?” The bulk of the communications between the two men in the data dump concern scheduling meals, calls, and meetings with one another. Thiel did not immediately return a request for comment from WIRED.

One piece of correspondence stands out for being particularly bizarre. On February 3, 2016, Thiel’s former chief of staff and senior executive assistant, Alisa Bekins, sent an email with the subject line “Meeting – Feb 4 – 9:30 AM – Peter Thiel dietary restrictions – CONFIDENTIAL.” The initial recipient of the email is redacted, but it was later forwarded directly to Epstein.

The contents of the message are also redacted in at least one version of the email chain uploaded by the Justice Department on Friday. However, two other files from what appears to be the same set of messages have less information redacted.

In one email, Bekins listed some two dozen approved kinds of sushi and animal protein, 14 approved vegetables, and no approved fruits for Thiel to eat. “Fresh herbs” and “olive oil” were permitted; ketchup, mayonnaise, and soy sauce, however, were to be avoided. Only one actual meal was explicitly outlined: “egg whites or greens/salad with some form of protein,” such as steak, which Bekins included “in the event they eat breakfast.” It’s unclear whether the February 4 meeting ultimately occurred; other emails indicate Thiel got stuck in traffic on his way to meet Epstein that day.

According to a recording of an undated conversation between Epstein and former Israeli Prime Minister Ehud Barak that was also part of the files the DOJ released on Friday, Epstein told Barak that he was hoping to meet Thiel the following week. He added that he was familiar with Thiel’s company Palantir, but proceeded to spell it out loud for Barak as “Pallentier.” Epstein speculated that Thiel may put Barak on the board of Palantir, though there’s no evidence that ever occurred.

“I’ve never met Peter Thiel, and everybody says he sort of jumps around and acts really strange, like he’s on drugs,” Epstein said at one point in the audio recording, referring to Thiel. The former prime minister expressed agreement with Epstein’s assessment.

In 2015 and 2016, Epstein put $40 million in two funds managed by one of Thiel’s investment firms, Valar Ventures, according to The New York Times. Epstein and Thiel continued to communicate and were discussing meeting with one another as recently as January 2019, according to the files released by the DOJ. Epstein committed suicide in his prison cell in August of that year.

Below are Thiel’s dietary restrictions as outlined in the February 2016 email. (The following list has been reformatted slightly for clarity.)

APPROVED SUSHI + APPROVED PROTEIN

  • Kaki Oysters
  • Bass
  • Nigiri
  • Beef
  • Octopus
  • Catfish
  • Sashimi
  • Chicken
  • Scallops
  • Eggs
  • Sea Urchin
  • Lamb
  • Seabass
  • Perch
  • Spicy Tuna w Avocado
  • Squid
  • Turkey
  • Sweet Shrimps
  • Whitefish
  • Tobiko
  • Tuna
  • Yellowtail
  • Trout

APPROVED VEGETABLES

  • Artichoke
  • Avocado
  • Beets
  • Broccoli
  • Brussels sprouts
  • Cabbage
  • Carrots
  • Cucumber
  • Garlic
  • Olives
  • Onions
  • Peppers
  • Salad greens
  • Spinach

APPROVED NUTS

  • Anything unsalted and unroasted
  • Peanuts
  • Pecans
  • Pistachios

CONDIMENTS

  • Most fresh herbs, and olive oil

AVOID

  • Dairy
  • Fruits
  • Gluten
  • Grains
  • Ketchup
  • Mayo
  • Mushroom
  • Processed foods
  • Soy Sauce
  • Sugar
  • Tomato
  • Vinegar

MEAL SUGGESTIONS

  • Breakfast: Egg whites or greens/salad with some form of protein (steak, etc.)




Elon Musk Is Rolling xAI Into SpaceX—Creating the World’s Most Valuable Private Company



Elon Musk’s rocket and satellite company SpaceX is acquiring his AI startup xAI, the centibillionaire announced on Monday. In a blog post, Musk said the acquisition was warranted because global electricity demand for AI cannot be met with “terrestrial solutions,” and Silicon Valley will soon need to build data centers in space to power its AI ambitions.

“In the long term, space-based AI is obviously the only way to scale,” Musk wrote. “The only logical solution therefore is to transport these resource-intensive efforts to a location with vast power and space. I mean, space is called ‘space’ for a reason.”

The deal, which pulls together two of Musk’s largest private ventures, values the combined entity at $1.25 trillion, making it the most valuable private company in the world, according to a report from Bloomberg.

SpaceX had been preparing to go public later this year before the xAI acquisition was announced. The space firm’s plans for an initial public offering are still on, according to Bloomberg.

In December, SpaceX told employees that it would buy insider shares in a deal that would value the rocket company at $800 billion, according to The New York Times. Last month, xAI announced that it had raised $20 billion from investors, bringing the company’s valuation to roughly $230 billion.

This isn’t the first time Musk has sought to consolidate parts of his vast business empire, which is largely privately owned and includes xAI, SpaceX, the brain interface company Neuralink, and the tunnel transportation firm the Boring Company.

Last year, xAI acquired Musk’s social media platform, X, formerly known as Twitter, in a deal that valued the combined entity at more than $110 billion. Since then, xAI’s core product, Grok, has become further integrated into the social media platform. Grok is featured prominently in various X features, and Musk has claimed the app’s content-recommendation algorithm is powered by xAI’s technology.

A decade ago, Musk also used shares of his electric car company Tesla to purchase SolarCity, a renewable energy firm that was run at the time by his cousin Lyndon Rive.

The xAI acquisition demonstrates how Musk can use his expansive network of companies to help power his own often grandiose visions of the future. Musk said in the blog post that SpaceX will immediately focus on launching satellites to power AI development on Earth, but eventually, the space-based data centers he envisions building could power civilizations on other planets, such as Mars.

“This marks not just the next chapter, but the next book in SpaceX and xAI’s mission: scaling to make a sentient sun to understand the Universe and extend the light of consciousness to the stars,” Musk said in the blog post.




HHS Is Using AI Tools From Palantir to Target ‘DEI’ and ‘Gender Ideology’ in Grants



Since last March, the Department of Health and Human Services has been using AI tools from Palantir to screen and audit grants, grant applications, and job descriptions for noncompliance with President Donald Trump’s executive orders targeting “gender ideology” and anything related to diversity, equity, and inclusion (DEI), according to a recently published inventory of all use cases HHS had for AI in 2025.

Neither Palantir nor HHS has publicly announced that the company’s software was being used for these purposes. During the first year of Trump’s second term, Palantir earned more than $35 million in payments and obligations from HHS alone. None of the descriptions for these transactions mention this work targeting DEI or “gender ideology.”

The audits have been taking place within HHS’s Administration for Children and Families (ACF), which funds family and child welfare and oversees the foster and adoption systems. Palantir is the sole contractor charged with making a list of “position descriptions that may need to be adjusted for alignment with recent executive orders.”

In addition to Palantir, the startup Credal AI—which was founded by two Palantir alumni—helped ACF audit “existing grants and new grant applications.” The “AI-based” grant review process, the inventory says, “reviews application submission files and generates initial flags and priorities for discussion.” All relevant information is then routed to the ACF Program Office for final review.

ACF staffers ultimately review any job descriptions, grants, and grant applications that are flagged by AI during a “final review” stage, according to the inventory. It also says that these particular AI use cases are currently “deployed” within ACF, meaning that they are actively being used at the agency.

Last year, ACF paid Credal AI about $750,000 to provide the company’s “Tech Enterprise Generative Artificial Intelligence (GenAI) Platform,” but the payment descriptions in the Federal Register do not mention DEI or “gender ideology.”

HHS, ACF, Palantir, and Credal AI did not return WIRED’s requests for comment.

The executive orders—Executive Order 14151, “Ending Radical and Wasteful Government DEI Programs and Preferencing,” and Executive Order 14168, “Defending Women From Gender Ideology Extremism and Restoring Biological Truth to the Federal Government”—were both issued on Trump’s first day in office last year.

The first of these orders demands an end to any policies, programs, contracts, or grants that mention or concern DEIA, DEI, “equity,” or “environmental justice,” and charges the Office of Management and Budget, the Office of Personnel Management, and the attorney general with leading these efforts.

The second order demands that all “interpretation of and application” of federal laws and policies define “sex” as an “immutable biological classification” and define the only genders as “male” and “female.” It deems “gender ideology” and “gender identity” to be “false” and “disconnected from biological reality.” It also says that no federal funds can be used “to promote gender ideology.”

“Each agency shall assess grant conditions and grantee preferences and ensure grant funds do not promote gender ideology,” it reads.

The consequences of Executive Order 14151, targeting DEI, and Executive Order 14168, targeting “gender ideology,” have been felt deeply throughout the country over the past year.

Early last year, the National Science Foundation started to flag any research that contained terms associated with DEI—including relatively general terms, like “female,” “inclusion,” “systemic,” or “underrepresented”—and place it under official review. The Centers for Disease Control and Prevention began retracting or pausing research that mentioned terms like “LGBT,” “transsexual,” or “nonbinary,” and stopped processing any data related to transgender people. Last July, the Substance Abuse and Mental Health Services Administration removed an LGBTQ youth service line offered by the 988 Suicide & Crisis Lifeline.


