Tech
Nvidia Becomes a Major Model Maker With Nemotron 3
Nvidia has made a fortune supplying chips to companies working on artificial intelligence, but today the chipmaker took a step toward becoming a more serious model maker itself by releasing a series of cutting-edge open models, along with data and tools to help engineers use them.
The move, which comes at a moment when AI companies like OpenAI, Google, and Anthropic are developing increasingly capable chips of their own, could be a hedge against these firms veering away from Nvidia’s technology over time.
Open models are already a crucial part of the AI ecosystem, with many researchers and startups using them to experiment, prototype, and build. While OpenAI and Google offer small open models, they do not update them as frequently as their rivals in China. For this reason and others, open models from Chinese companies are currently much more popular, according to data from Hugging Face, a hosting platform for open-source projects.
Nvidia’s new Nemotron 3 models are among the best that can be downloaded, modified, and run on one’s own hardware, according to benchmark scores shared by the company ahead of release.
“Open innovation is the foundation of AI progress,” CEO Jensen Huang said in a statement ahead of the news. “With Nemotron, we’re transforming advanced AI into an open platform that gives developers the transparency and efficiency they need to build agentic systems at scale.”
Nvidia is taking a more fully transparent approach than many of its US rivals by releasing the data used to train Nemotron—a fact that should help engineers modify the models more easily. The company is also releasing tools to help with customization and fine-tuning. This includes a new hybrid latent mixture-of-experts model architecture, which Nvidia says is especially good for building AI agents that can take actions on computers or the web. The company is also launching libraries that allow users to train agents to do things using reinforcement learning, which involves giving models simulated rewards and punishments.
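Nvidia has not published the internals of its hybrid latent mixture-of-experts design, but the general mixture-of-experts idea can be sketched generically: a gating network scores a set of expert sub-networks per input and only the top-scoring few actually run. The code below is an illustrative toy, not Nemotron's architecture; all names (`moe_layer`, `gate_w`, `top_k`) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(x, experts, gate_w, top_k=2):
    """Generic top-k mixture-of-experts routing (illustrative only).

    x: (d,) input vector
    experts: list of (d, d) expert weight matrices
    gate_w: (d, n_experts) gating weights
    """
    logits = x @ gate_w                  # gating score for each expert
    top = np.argsort(logits)[-top_k:]    # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over only the selected experts
    # Only the chosen experts execute, which is why MoE models can be large
    # in total parameters yet cheap to run per token.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

d, n = 8, 4
experts = [rng.standard_normal((d, d)) for _ in range(n)]
out = moe_layer(rng.standard_normal(d), experts, rng.standard_normal((d, n)))
print(out.shape)
```

The design trade-off is that total parameter count (capacity) grows with the number of experts while per-token compute stays roughly constant, since only `top_k` experts fire.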
Nemotron 3 models come in three sizes: Nano, which has 30 billion parameters; Super, which has 100 billion; and Ultra, which has 500 billion. A model’s parameters loosely correspond to how capable it is as well as how unwieldy it is to run. The largest models are so cumbersome that they need to run on racks of expensive hardware.
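The link between parameter count and hardware burden can be made concrete with back-of-the-envelope arithmetic (our illustration, not from Nvidia): weights stored in 16-bit precision take 2 bytes each, so weight memory alone is roughly parameters × 2 bytes, before counting activations or caches.

```python
def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Rough weight-only memory estimate, assuming 16-bit parameters.

    Ignores activation memory, KV caches, and serving overhead, all of
    which add substantially in practice.
    """
    return params_billions * 1e9 * bytes_per_param / 1e9

for name, size in [("Nano", 30), ("Super", 100), ("Ultra", 500)]:
    print(f"{name}: ~{weight_memory_gb(size):.0f} GB of weights")
```

By this estimate the 500-billion-parameter Ultra needs on the order of a terabyte just to hold its weights, far beyond any single accelerator's memory, which is why such models run on racks of interconnected hardware.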
Model Foundations
Kari Ann Briski, vice president of generative AI software for enterprise at Nvidia, said open models are important to AI builders for three reasons: Builders increasingly need to customize models for particular tasks; it often helps to hand queries off to different models; and it is easier to squeeze more intelligent responses from these models after training by having them perform a kind of simulated reasoning. “We believe open source is the foundation for AI innovation, continuing to accelerate the global economy,” Briski said.
The social media giant Meta released the first advanced open models under the name Llama in February 2023. As competition has intensified, however, Meta has signaled that its future releases might not be open source.
The move is part of a larger trend in the AI industry. Over the past year, US firms have moved away from openness, becoming more secretive about their research and more reluctant to tip off their rivals about their latest engineering tricks.
Tech
‘Orbs,’ ‘Saucers,’ and ‘Flashes’ on the Moon: Pentagon Drops New UFO Files
Trump first teased the release in February in a Truth Social post. The Pentagon coordinated the release in partnership with the White House, Director of National Intelligence Tulsi Gabbard, the Energy Department, NASA, and the FBI. Many of the files in this new drop contain documents that are already publicly available. However, some versions of these known documents in the new files contain more pages, or fewer redactions, than previously released versions.
More than 60 percent of Americans believe that the government is concealing information about UAP, according to YouGov, while 40 percent think UAP are likely alien in origin, according to Gallup. Congress has held hearings into whether there’s been a decades-long program to recover “non-human” technologies, yet evidence remains elusive.
“If it’s just more blobby photos or redacted documents that don’t have any details in them, it’s more of the same,” Adam Frank, an astrophysicist at the University of Rochester who studies the search for alien life, says of the new files. “What we need are actual scientific results from the investigations that should have been done if the most extraordinary claims being made are true.”
The document drop follows a week of high-profile discussions of aliens, including Stephen Colbert’s interview with former President Barack Obama, released on Wednesday. Obama cast doubt on government cover-ups about aliens by joking that “some guy guarding the installation would have taken a selfie with the alien and sent it to his girlfriend.”
Members of the Artemis II crew also pushed back on the idea of a vast government-wide conspiracy to hide the discovery of extraterrestrial life in a discussion with The Daily this week.
“Do you realize that if we found alien life out there, and we came back and reported on it, NASA would never have a budget issue for the rest of eternity?” said Reid Wiseman, the commander of Artemis II. “So trust me.”
Victor Glover, the astronaut who piloted the mission, added: “Why would we hide that from you?”
Tech
Nick Bostrom Has a Plan for Humanity’s ‘Big Retirement’
Philosopher Nick Bostrom recently posted a paper in which he postulated that a small chance of AI annihilating all humans might be worth the risk, because advanced AI might relieve humanity of “its universal death sentence.” That upbeat gamble is quite a leap from his previous dark musings on AI, which made him a doomer godfather. His 2014 book Superintelligence was an early examination of AI’s existential risk. One memorable thought experiment: An AI tasked with making paper clips winds up destroying humanity because all those resource-needy people are an impediment to paper clip production. His more recent book, Deep Utopia, reflects a shift in his focus. Bostrom, who leads Oxford’s Future of Humanity Institute, dwells on the “solved world” that comes if we get AI right.
STEVEN LEVY: Deep Utopia is more optimistic than your previous book. What changed for you?
NICK BOSTROM: I call myself a fretful optimist. I am very excited about the potential for radically improving human life and unlocking possibilities for our civilization. That’s consistent with the real possibility of things going wrong.
You wrote a paper with a striking argument: Since we’re all going to die anyway, the worst that can happen with AI is that we die sooner. But if AI works out, it might extend our lives, maybe indefinitely.
That paper explicitly looks at only one aspect of this. In any given academic paper, you can’t address life, the universe, and the meaning of everything. So let’s just look at this little issue and try to nail that down.
That isn’t a little issue.
I guess I’ve been irked by some of the arguments made by doomers who say that if you build AI, you’re going to kill me and my children and how dare you. Like the recent book If Anyone Builds It, Everyone Dies. Even more probable is that if nobody builds it, everyone dies! That’s been the experience for the last several hundred thousand years.
But in the doomer scenario everybody dies and there’s no more people being born. Big difference.
I have obviously been very concerned with that. But in this paper, I’m looking at a different question, which is, what would be best for the currently existing human population like you and me and our families and the people in Bangladesh? It does seem like our life expectancy would go up if we develop AI, even if it is quite risky.
In Deep Utopia you speculate that AI could create incredible abundance, so much that humanity might have a huge problem with finding purpose. I live in the United States. We’re a very rich country, but our government, ostensibly with support of the people, has policies that deny services to the poor and distribute rewards to the rich. I think that even if AI was able to provide abundance for everyone, we would not supply it to everyone.
You might be right. Deep Utopia takes as its starting point the postulation that everything goes extremely well. If we do a reasonably good job on governance, everybody gets a share. There is quite a deep philosophical question of what a good human life would look like under these ideal circumstances.
The meaning of life is something you hear a lot about in Woody Allen movies and maybe among philosophers. I’m worried more about the wherewithal to support oneself and get a stake in this abundance.
The book is not only about meaning. That’s one out of a bunch of different values that it considers. This could be a wonderful emancipation from the drudgery that humans have been subjected to. If you have to give up, say, half of your waking hours as an adult just to make ends meet, doing some work you don’t enjoy and that you don’t believe in, that’s a sad condition. Society is so used to it that we’ve invented all kinds of rationalizations around it. It’s like a partial form of slavery.
Tech
There’s a Long-Shot Proposal to Protect California Workers From AI
Billionaire California gubernatorial candidate Tom Steyer is rolling out a new proposal that would guarantee jobs with benefits for workers displaced by artificial intelligence. He’s the first statewide candidate to make such a pledge.
The plan, which builds on a broader AI policy framework Steyer released in March, promises to make California “the first major economy in the world” to ensure “good-paying” jobs to workers impacted by AI. To do so, Steyer tells WIRED he plans to build off a previous proposal to introduce a “token tax,” which would tax big tech companies “a fraction of a cent for every unit of data processed” for AI. The funding generated by that tax would go to what Steyer has called the Golden State Sovereign Wealth Fund, with some of that money being earmarked for jobs in housing construction, health care, and the modernization of California’s energy infrastructure.
“The aim of the initiative will be to strengthen the foundation of the state’s economy, invest in our communities, and create beautiful, vibrant public spaces,” states a campaign memo viewed by WIRED. “To support these efforts, Tom will also invest heavily in training and apprenticeship programs across the state.”
The new plan also intends to expand unemployment insurance and establish a new agency, the AI Worker Protection Administration, made up of union leaders, academics, and technologists, that would adopt rules to protect workers’ rights, the memo says.
“People all over this state are terrified that AI is going to hollow out this whole economy and they’re going to lose their jobs. Young people are worried they’ll never get a job,” Steyer tells WIRED. “We believe this can be an amazing transformational technology in many ways, but we’re not in the business of leaving people in California behind.”
Steyer’s job guarantee comes as lawmakers at the state and federal levels—and even some AI executives—scramble to address the ramifications of widespread AI adoption across the US workforce. In New Jersey, state senator Troy Singleton recently put out a bill that would require companies that replace workers with AI to contribute to a fund that would pay to retrain those workers. In Congress, there are a handful of proposals for grants and tax credits for companies to provide AI training to existing employees.
Dario Amodei, CEO of Anthropic, has previously floated the token-tax concept now being proposed by Steyer. “Obviously, that’s not in my economic interest,” Amodei told Axios last year. “But I think that would be a reasonable solution to the problem.” In April, OpenAI proposed a public wealth fund similar to the one Steyer has rolled out.
Steyer’s announcement comes days after Democratic primary opponent Xavier Becerra—former Health and Human Services secretary under President Joe Biden—offered his own AI plan. In that proposal, Becerra calls for “workforce investment and transition support” but doesn’t provide a specific funding mechanism.
“Displacement without support is abandonment,” Becerra said in a Monday memo outlining his plan. “I will work with the Legislature, the California public education system and industry partners to build accessible, stackable workforce programs that prepare Californians for the AI economy and support workers navigating role changes.”
Over the past few months, the White House has threatened to go after states that choose to regulate AI. In December, President Donald Trump signed an executive order that could revoke federal broadband funding from states that approve “onerous” AI laws. This is happening in local races as well: In New York, a super PAC backed by a number of Silicon Valley powerhouses, including OpenAI cofounder Greg Brockman, has targeted Alex Bores, a Manhattan congressional candidate who has made AI regulation the centerpiece of his campaign.
“Not regulating AI doesn’t seem remotely reasonable,” Steyer says. “But if California wants to lead, we’ve got to have a vision for the future that includes something that is not just about letting entrepreneurs get rich at the expense of everybody else.”