
OpenAI reaches new agreement with Microsoft to change its corporate structure




OpenAI has reached a new tentative agreement with Microsoft and said its nonprofit, which technically controls its business, will now be given a $100 billion equity stake in its for-profit corporation.

The maker of ChatGPT said it had reached a new nonbinding agreement with Microsoft, its longtime partner, “for the next phase of our partnership.”

The announcements on Thursday included only a few details about these new arrangements. OpenAI’s proposed changes to its corporate structure have drawn the scrutiny of regulators, competitors and advocates concerned about the impacts of artificial intelligence.

OpenAI was founded as a nonprofit in 2015 and its nonprofit board has continued to control the for-profit subsidiary that now develops and sells its AI products. It’s not clear whether the $100 billion the nonprofit will get as part of this announcement represents a controlling stake in the business.

California Attorney General Rob Bonta said last week that his office was investigating OpenAI’s proposed restructuring of its finances and governance. His office said it could not comment on the new announcements but that it is “committed to protecting charitable assets for their intended purpose.”

Bonta and Delaware Attorney General Kathy Jennings also sent the company a letter expressing concerns about the safety of ChatGPT after meeting with OpenAI’s legal team last week in Delaware, where OpenAI is incorporated.

“Together, we are particularly concerned with ensuring that the stated safety mission of OpenAI as a non-profit remains front and center,” Bonta said in a statement last week.

Microsoft invested its first $1 billion in OpenAI in 2019 and the two companies later formed an agreement that made Microsoft the exclusive provider of the computing power needed to build OpenAI’s technology. In turn, Microsoft heavily used the technology behind ChatGPT to enhance its own AI products.

The two companies announced on Jan. 21 that they were altering that agreement, enabling the smaller company to build its own computing capacity, “primarily for research and training of models.” That coincided with OpenAI’s announcement of a partnership with Oracle to build a massive new data center in Abilene, Texas.

But other parts of its agreements with Microsoft remained up in the air as the two companies appeared to veer further apart. Their Thursday joint statement said they were still “actively working to finalize contractual terms in a definitive agreement.” Both companies declined further comment.

OpenAI had given its nonprofit board of directors—whose members now include a former U.S. Treasury secretary—the responsibility of deciding when its AI systems have reached the point at which they “outperform humans at most economically valuable work,” a concept known as artificial general intelligence, or AGI.

Such an achievement, per its earlier agreements, would cut off Microsoft from the rights to commercialize such a system, since the terms “only apply to pre-AGI technology.”

OpenAI’s corporate structure and nonprofit mission are also the subject of a lawsuit brought by Elon Musk, who helped found the nonprofit research lab and provided initial funding. Musk’s suit seeks to stop OpenAI from taking control of the company away from its nonprofit and alleges it has betrayed its promise to develop AI for the benefit of humanity.

© 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.



We Just Found Out Taylor Swift Sleeps on a Coop Pillow—They’re Having a Flash Sale to Celebrate



While I’m a mattress and sleep product expert, thanks to years of hands-on experience, I’m also aware that my opinion is not the end-all, be-all for everyone. However, when a megastar is also a fan of a product you’ve reviewed, it’s a good confirmation that you’re on the right track.

Taylor Swift, as it turns out, is also a fan of Coop Sleep Goods—which we can confirm based on this December 10 Late Show With Stephen Colbert appearance.

Coop’s got some of our favorite pillows, particularly the Original Adjustable pillow. It comes in three shapes: the Crescent, the Cut Out, and the Classic, which is a traditional rectangular shape. I love (and regularly sleep on) the Crescent, which has a gentle curve on the bottom to allow for movement while maintaining head and neck support.




Nvidia Becomes a Major Model Maker With Nemotron 3



Nvidia has made a fortune supplying chips to companies working on artificial intelligence, but today the chipmaker took a step toward becoming a more serious model maker itself by releasing a series of cutting-edge open models, along with data and tools to help engineers use them.

The move, which comes at a moment when AI companies like OpenAI, Google, and Anthropic are developing increasingly capable chips of their own, could be a hedge against these firms veering away from Nvidia’s technology over time.

Open models are already a crucial part of the AI ecosystem, with many researchers and startups using them to experiment, prototype, and build. While OpenAI and Google offer small open models, they do not update them as frequently as their rivals in China. For this reason and others, open models from Chinese companies are currently much more popular, according to data from Hugging Face, a hosting platform for open source projects.

Nvidia’s new Nemotron 3 models are among the best that can be downloaded, modified, and run on one’s own hardware, according to benchmark scores shared by the company ahead of release.

“Open innovation is the foundation of AI progress,” CEO Jensen Huang said in a statement ahead of the news. “With Nemotron, we’re transforming advanced AI into an open platform that gives developers the transparency and efficiency they need to build agentic systems at scale.”

Nvidia is taking a more fully transparent approach than many of its US rivals by releasing the data used to train Nemotron—a fact that should help engineers modify the models more easily. The company is also releasing tools to help with customization and fine-tuning. This includes a new hybrid latent mixture-of-experts model architecture, which Nvidia says is especially good for building AI agents that can take actions on computers or the web. The company is also launching libraries that allow users to train agents to do things using reinforcement learning, which involves giving models simulated rewards and punishments.
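For readers unfamiliar with the term, the reward-and-punishment idea behind reinforcement learning can be illustrated with a small, self-contained Python sketch. This is purely a toy example of the loop described above; the actions, reward function, and update rule are hypothetical and are not Nvidia’s Nemotron libraries or APIs.

import random

# Toy reinforcement-learning loop: the "agent" tries actions, receives a
# simulated reward or punishment, and gradually prefers actions that earn
# rewards. Illustrative only; not Nvidia's actual tooling.

ACTIONS = ["click_button", "scroll_page", "submit_form"]
preferences = {a: 0.0 for a in ACTIONS}   # learned value estimate per action
LEARNING_RATE = 0.1

def simulated_reward(action: str) -> float:
    """Stand-in environment: reward the 'correct' action, punish the rest."""
    return 1.0 if action == "submit_form" else -1.0

def choose_action(epsilon: float = 0.2) -> str:
    """Mostly exploit the best-known action, occasionally explore at random."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(preferences, key=preferences.get)

for step in range(200):
    action = choose_action()
    reward = simulated_reward(action)
    # Nudge the action's estimated value toward the observed reward.
    preferences[action] += LEARNING_RATE * (reward - preferences[action])

print(preferences)  # 'submit_form' ends up with the highest estimated value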

Nemotron 3 models come in three sizes: Nano, which has 30 billion parameters; Super, which has 100 billion; and Ultra, which has 500 billion. A model’s parameters loosely correspond to how capable it is as well as how unwieldy it is to run. The largest models are so cumbersome that they need to run on racks of expensive hardware.
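A rough back-of-the-envelope calculation shows why those parameter counts translate into hardware demands. The sketch below assumes a common convention of 2 bytes per parameter for 16-bit weights; real deployments vary with precision, quantization, and runtime overhead, and these figures are not Nvidia’s published requirements.

# Back-of-the-envelope estimate of the memory needed just to hold the weights.
# Assumes 2 bytes per parameter (FP16/BF16); actual requirements vary.

BYTES_PER_PARAM = 2

models = {
    "Nano": 30e9,    # 30 billion parameters
    "Super": 100e9,  # 100 billion parameters
    "Ultra": 500e9,  # 500 billion parameters
}

for name, params in models.items():
    gigabytes = params * BYTES_PER_PARAM / 1e9
    print(f"{name}: ~{gigabytes:,.0f} GB of weights")

# Nano:  ~60 GB    -- fits on one or two high-end accelerators
# Super: ~200 GB   -- needs several accelerators
# Ultra: ~1,000 GB -- needs a whole rack of expensive hardware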

Model Foundations

Kari Ann Briski, vice president of generative AI software for enterprise at Nvidia, said open models are important to AI builders for three reasons: Builders increasingly need to customize models for particular tasks; it often helps to hand queries off to different models; and it is easier to squeeze more intelligent responses from these models after training by having them perform a kind of simulated reasoning. “We believe open source is the foundation for AI innovation, continuing to accelerate the global economy,” Briski said.

The social media giant Meta released the first advanced open models under the name Llama in February 2023. As competition has intensified, however, Meta has signaled that its future releases might not be open source.

The move is part of a larger trend in the AI industry. Over the past year, US firms have moved away from openness, becoming more secretive about their research and more reluctant to tip off their rivals about their latest engineering tricks.




This Startup Wants to Build Self-Driving Car Software—Super Fast



For the last year and a half, two hacked white Tesla Model 3 sedans, each loaded with five extra cameras and one palm-sized supercomputer, have quietly cruised around San Francisco. In a city and era swarming with questions about the capabilities and limits of artificial intelligence, the startup behind the modified Teslas is trying to answer what amounts to a simple question: How quickly can a company build autonomous vehicle software today?

The startup, which is making its activities public for the first time today, is called HyprLabs. Its 17-person team (just eight of them full-time) is divided between Paris and San Francisco, and the company is helmed by an autonomous vehicle company veteran, Zoox cofounder Tim Kentley-Klay, who suddenly exited the now Amazon-owned firm in 2018. Hypr has taken in relatively little funding, $5.5 million since 2022, but its ambitions are wide-ranging. Eventually, it plans to build and operate its own robots. “Think of the love child of R2-D2 and Sonic the Hedgehog,” Kentley-Klay says. “It’s going to define a new category that doesn’t currently exist.”

For now, though, the startup is announcing its software product called Hyprdrive, which it bills as a leap forward in how engineers train vehicles to pilot themselves. These sorts of leaps are all over the robotics space, thanks to advances in machine learning that promise to bring down the cost of training autonomous vehicle software, and the amount of human labor involved. This training evolution has brought new movement to a space that for years suffered through a “trough of disillusionment,” as tech builders failed to meet their own deadlines to operate robots in public spaces. Now, robotaxis pick up paying passengers in more and more cities, and automakers make newly ambitious promises about bringing self-driving to customers’ personal cars.

But using a small, agile, and cheap team to get from “driving pretty well” to “driving much more safely than a human” is its own long hurdle. “I can’t say to you, hand on heart, that this will work,” Kentley-Klay says. “But what we’ve built is a really solid signal. It just needs to be scaled up.”

Old Tech, New Tricks

HyprLabs’ software training technique is a departure from other robotics startups’ approaches to teaching their systems to drive themselves.

First, some background: For years, the big battle in autonomous vehicles seemed to be between those who used just cameras to train their software—Tesla!—and those who depended on other sensors, too—Waymo, Cruise!—including once-expensive lidar and radar. But below the surface, larger philosophical differences churned.

Camera-only adherents like Tesla wanted to save money while scheming to launch a gigantic fleet of robots; for a decade, CEO Elon Musk’s plan has been to suddenly switch all of his customers’ cars to self-driving ones with the push of a software update. The upside was that these companies had lots and lots of data, as their not-yet self-driving cars collected images wherever they drove. This information got fed into what’s called an “end-to-end” machine learning model trained through reinforcement. The system takes in images—a bike—and spits out driving commands—move the steering wheel to the left and go easy on the acceleration to avoid hitting it. “It’s like training a dog,” says Philip Koopman, an autonomous vehicle software and safety researcher at Carnegie Mellon University. “At the end, you say, ‘Bad dog’ or ‘Good dog.’”
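To make the “end-to-end” idea concrete, the Python sketch below (using PyTorch) shows the basic shape of such a system: a camera image goes in, steering and acceleration commands come out, and the model’s output is scored against what a human driver actually did in that frame. The network architecture and numbers are illustrative assumptions, not HyprLabs’ or Tesla’s actual software; the “good dog / bad dog” reward described in the article would come from a separate scoring step.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative "end-to-end" driving policy: camera image in, commands out.
class DrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision = nn.Sequential(               # encode the camera frame
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)               # -> [steering, acceleration]

    def forward(self, image):
        return torch.tanh(self.head(self.vision(image)))  # commands in [-1, 1]

policy = DrivingPolicy()
image = torch.randn(1, 3, 224, 224)                # one (fake) camera frame
human_commands = torch.tensor([[0.1, -0.3]])       # what the human driver did

predicted = policy(image)
loss = F.mse_loss(predicted, human_commands)       # score the model's commands
loss.backward()                                    # gradients nudge the weights
print(predicted, loss.item())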


