Nvidia Becomes a Major Model Maker With Nemotron 3

Nvidia has made a fortune supplying chips to companies working on artificial intelligence, but today the chipmaker took a step toward becoming a more serious model maker itself by releasing a series of cutting-edge open models, along with data and tools to help engineers use them.

The move, which comes at a moment when AI companies like OpenAI, Google, and Anthropic are developing increasingly capable chips of their own, could be a hedge against these firms veering away from Nvidia’s technology over time.

Open models are already a crucial part of the AI ecosystem, with many researchers and startups using them to experiment, prototype, and build. While OpenAI and Google offer small open models, they do not update them as frequently as their rivals in China do. For this reason and others, open models from Chinese companies are currently much more popular, according to data from Hugging Face, a hosting platform for open source projects.

Nvidia’s new Nemotron 3 models are among the best that can be downloaded, modified, and run on one’s own hardware, according to benchmark scores shared by the company ahead of release.

“Open innovation is the foundation of AI progress,” CEO Jensen Huang said in a statement ahead of the news. “With Nemotron, we’re transforming advanced AI into an open platform that gives developers the transparency and efficiency they need to build agentic systems at scale.”

Nvidia is taking a more transparent approach than many of its US rivals by releasing the data used to train Nemotron—a fact that should help engineers modify the models more easily. The company is also releasing tools to help with customization and fine-tuning, including a new hybrid latent mixture-of-experts model architecture, which Nvidia says is especially good for building AI agents that can take actions on computers or the web. It is also launching libraries that let users train agents to perform tasks using reinforcement learning, which involves giving models simulated rewards and punishments.
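
Nvidia's libraries aren't detailed here, but the reward-and-punishment loop at the heart of reinforcement learning can be sketched in a few lines. In this toy Python example (the action names and reward function are invented for illustration, and this is not Nvidia's API), an agent gradually learns which action completes a task:

```python
import random

# Toy reward-and-punishment loop, for illustration only. The action names and
# reward function are invented; this is not Nvidia's API or training library.
ACTIONS = ["click_button", "scroll_page", "submit_form"]
preferences = {action: 0.0 for action in ACTIONS}

def choose_action(epsilon=0.2):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(preferences, key=preferences.get)

def simulated_reward(action):
    # Hypothetical environment: only submitting the form completes the task.
    return 1.0 if action == "submit_form" else -0.1

for _ in range(500):
    action = choose_action()
    reward = simulated_reward(action)
    # Nudge the preference for this action toward the reward it just earned.
    preferences[action] += 0.1 * (reward - preferences[action])

print(max(preferences, key=preferences.get))  # the agent settles on "submit_form"
```

Training real agents works on the same principle, just with far richer actions, environments, and reward signals.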

Nemotron 3 models come in three sizes: Nano, which has 30 billion parameters; Super, which has 100 billion; and Ultra, which has 500 billion. A model’s parameters loosely correspond to how capable it is as well as how unwieldy it is to run. The largest models are so cumbersome that they need to run on racks of expensive hardware.

Model Foundations

Kari Ann Briski, vice president of generative AI software for enterprise at Nvidia, said open models are important to AI builders for three reasons: Builders increasingly need to customize models for particular tasks; it often helps to hand queries off to different models; and it is easier to squeeze more intelligent responses from these models after training by having them perform a kind of simulated reasoning. “We believe open source is the foundation for AI innovation, continuing to accelerate the global economy,” Briski said.

The social media giant Meta released the first advanced open models under the name Llama in February 2023. As competition has intensified, however, Meta has signaled that its future releases might not be open source.

The move is part of a larger trend in the AI industry. Over the past year, US firms have moved away from openness, becoming more secretive about their research and more reluctant to tip off their rivals about their latest engineering tricks.

This Startup Wants to Build Self-Driving Car Software—Super Fast

For the last year and a half, two hacked white Tesla Model 3 sedans, each loaded with five extra cameras and a palm-sized supercomputer, have quietly cruised around San Francisco. In a city and era swarming with questions about the capabilities and limits of artificial intelligence, the startup behind the modified Teslas is trying to answer what amounts to a simple question: How quickly can a company build autonomous vehicle software today?

The startup, which is making its activities public for the first time today, is called HyprLabs. Its 17-person team (just eight of them full-time) is divided between Paris and San Francisco, and the company is helmed by an autonomous vehicle company veteran, Zoox cofounder Tim Kentley-Klay, who suddenly exited the now Amazon-owned firm in 2018. Hypr has taken in relatively little funding, $5.5 million since 2022, but its ambitions are wide-ranging. Eventually, it plans to build and operate its own robots. “Think of the love child of R2-D2 and Sonic the Hedgehog,” Kentley-Klay says. “It’s going to define a new category that doesn’t currently exist.”

For now, though, the startup is announcing its software product, called Hyprdrive, which it bills as a leap forward in how engineers train vehicles to pilot themselves. These sorts of leaps are all over the robotics space, thanks to advances in machine learning that promise to bring down both the cost of training autonomous vehicle software and the amount of human labor involved. This training evolution has brought new movement to a space that for years suffered through a “trough of disillusionment,” as tech builders failed to meet their own deadlines to operate robots in public spaces. Now, robotaxis pick up paying passengers in more and more cities, and automakers make newly ambitious promises about bringing self-driving to customers’ personal cars.

But using a small, agile, and cheap team to get from “driving pretty well” to “driving much more safely than a human” is its own formidable hurdle. “I can’t say to you, hand on heart, that this will work,” Kentley-Klay says. “But what we’ve built is a really solid signal. It just needs to be scaled up.”

Old Tech, New Tricks

HyprLabs’ software training technique is a departure from how other robotics startups teach their systems to drive themselves.

First, some background: For years, the big battle in autonomous vehicles seemed to be between those who used just cameras to train their software—Tesla!—and those who depended on other sensors, too—Waymo, Cruise!—including once-expensive lidar and radar. But below the surface, larger philosophical differences churned.

Camera-only adherents like Tesla wanted to save money while scheming to launch a gigantic fleet of robots; for a decade, CEO Elon Musk’s plan has been to suddenly switch all of his customers’ cars to self-driving ones with the push of a software update. The upside was that these companies had lots and lots of data, as their not-yet self-driving cars collected images wherever they drove. This information got fed into what’s called an “end-to-end” machine learning model, trained through reinforcement. The system takes in images—a bike—and spits out driving commands—move the steering wheel to the left and go easy on the acceleration to avoid hitting it. “It’s like training a dog,” says Philip Koopman, an autonomous vehicle software and safety researcher at Carnegie Mellon University. “At the end, you say, ‘Bad dog’ or ‘Good dog.’”
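
As a rough sketch of what “end-to-end” means in practice, the toy PyTorch model below maps raw camera pixels directly to two driving commands, with no hand-built perception or planning stages in between. The layer sizes and image dimensions are invented for illustration; this is not HyprLabs’ or Tesla’s actual architecture:

```python
import torch
import torch.nn as nn

# Minimal sketch of an "end-to-end" driving model: raw camera pixels in,
# driving commands out, with no hand-coded perception or planning in between.
# Layer sizes and image dimensions are invented for illustration.
class EndToEndPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 12 * 12, 64), nn.ReLU(),  # sized for 60x60 input
            nn.Linear(64, 2),  # two outputs: steering angle and throttle
        )

    def forward(self, image):
        return self.net(image)

policy = EndToEndPolicy()
frame = torch.randn(1, 3, 60, 60)      # one fake 60x60 RGB camera frame
steering, throttle = policy(frame)[0]  # "turn the wheel, ease the pedal"
```

Training then amounts to scoring those outputs against what a good driver would have done, the “good dog”/“bad dog” signal Koopman describes, and nudging the network’s weights accordingly.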

The three cyber trends that will define 2026 | Computer Weekly

We are staring down the barrel of 2026. If you think the last 12 months were chaotic, strap in.

The business-as-usual model for security is dead. We are moving into an era where the CISO is either a financial risk broker or irrelevant, where AI doesn’t just write emails but writes exploits, and where your right to privacy is being legislated out of existence.

Here is my take on the three trends that will define the next year.

1. The federated CISO (stop counting bugs)

Let’s be honest: the CISO 2.0 buzzword from 2020 is stale. In mature organisations, the CISO role has already shifted. We aren’t technical guardians anymore; we are risk brokers.

By 2026, if you are still reporting the number of vulnerabilities you patched to your board, you are failing. The successful CISO is embedded in the profit and loss (P&L) function. They speak the language of the CFO, not the language of the firewall. They don’t ask for budget to ‘fix stuff’; they present investment cases based on earnings at risk.

The Office of the CISO

The days of the CISO trying to manage every security decision are over. The scope is too wide. The smart move for 2026 is decentralisation, a Federated Security Model. You set the guardrails (policy and platform), but you let your security champions in engineering, sales, and other business functions execute the actual work. You stop being the bottleneck and start being the auditor.

And you’d better have the emotional intelligence to handle the heat. When a ransomware negotiation goes south or your team is burning out from alert fatigue, you need to be the calmest person in the room.

2. The agentic AI explosion

We have moved way past large language models (LLMs) that just ‘chat’. We are now dealing with autonomous agents that ‘do’. As 2026 arrives, we aren’t writing prompts; we are governing digital workers capable of reasoning and using tools. In timely news, you should read the new OWASP Top 10 for Agentic Applications 2026.

I view this with a mix of professional alarm and strategic hope.

The bad news:

The bad guys are moving faster. We are seeing polymorphic attack agents that don’t just run scripts; they improvise. They scan for targets, write bespoke exploit code on the fly, and – this is the part that keeps me up at night – then manage the extortion. These agents can negotiate ransom payments using sentiment analysis to squeeze the maximum payout from a victim without a human criminal ever touching a keyboard.

The good news:

We can fight fire with fire. We are entering the era of self-healing infrastructure: defensive agents that detect an anomaly and fix it – blocking IPs, isolating containers, rewriting rules – before a human analyst even opens their laptop.
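
Mechanically, such a defensive agent boils down to a loop that maps detected anomalies to containment actions. The Python sketch below shows the shape of it; the anomaly types and handler names are hypothetical, not drawn from any real product:

```python
# Sketch of a "self-healing" response loop: a defensive agent maps detected
# anomalies to containment actions without waiting for a human analyst.
# The anomaly types and handlers are hypothetical, not from any real product.
def block_ip(ip):
    print(f"firewall: blocked {ip}")

def isolate_container(container_id):
    print(f"orchestrator: isolated {container_id}")

def rewrite_rule(rule_id):
    print(f"policy engine: tightened {rule_id}")

def escalate_to_human(alert):
    print(f"paging on-call analyst: unhandled anomaly {alert['type']}")

PLAYBOOK = {
    "bruteforce_login": lambda alert: block_ip(alert["source_ip"]),
    "container_escape": lambda alert: isolate_container(alert["container_id"]),
    "policy_violation": lambda alert: rewrite_rule(alert["rule_id"]),
}

def handle(alert):
    # Known patterns are remediated immediately; anything novel goes to a human.
    action = PLAYBOOK.get(alert["type"], escalate_to_human)
    action(alert)

handle({"type": "bruteforce_login", "source_ip": "203.0.113.7"})
```

The hard part, of course, is the detection and the quality of the playbook, not the dispatch loop itself.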

For the CISO, this is how we solve the data overload. We don’t need more dashboards. We need virtual analyst agents that audit our environment 24/7 and feed a quantitative risk model.

3. The fight for the right to privacy

While we obsess over AI, a much quieter war is being lost. Governments are dismantling the presumption of privacy.

I am watching this “slow boiling of the frog” with deep concern. It’s not just about encryption anymore; it’s about the right to exist digitally without showing your papers.

The border dragnet

Have you travelled recently? The presumption of privacy at the border is gone. It is becoming normal to surrender years of emails and social media history just to enter a country. We are handing over our digital souls to border agents as the price of entry.

The “16+” trap

Look at what happened in Australia just a few days ago. The new legislation restricts social media to users aged 16 and over. It sounds noble, but the logic is flawed. To exclude a minor, you have to verify everyone. You cannot filter out the 15-year-old without carding the 50-year-old.

The naive solution – uploading passport scans to random websites – is a privacy disaster waiting to happen.

The only way out – the device lifeline

There is only one technical way to comply with these laws without building a surveillance state: privacy-preserving age verification.

We need a model where your device – which already knows who you are – generates a cryptographic token (a zero-knowledge proof) that simply tells the website the user is 16 or over. The website gets a ‘Yes’, but never your name. The OS vendor sees a token request, but not which site you are visiting.
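
To make the information flow concrete, here is a minimal Python sketch that uses an ordinary digital signature as a simplified stand-in for the zero-knowledge proof described above. The key handling and claim format are invented for illustration; a real scheme would be considerably more sophisticated:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The OS vendor holds a signing key; websites hold only the public half.
# (Key handling and the claim format here are invented for illustration.)
os_vendor_key = Ed25519PrivateKey.generate()
verification_key = os_vendor_key.public_key()

# Device side: the OS, which already knows the user's age, mints a minimal
# claim. No name, no birthdate, no identifier of the site that asked.
claim = b"age_16_or_over:true"
token = os_vendor_key.sign(claim)

# Website side: verify the signature, learn a single bit, and nothing else.
try:
    verification_key.verify(token, claim)
    print("access granted: visitor attested as 16 or over")
except InvalidSignature:
    print("access denied: attestation invalid")
```

A production scheme would go further, blinding each token so it cannot be linked across sites or replayed – that is what the zero-knowledge machinery buys you. This sketch only shows who learns what.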

But let’s be clear about the trade-off. We are effectively asking Apple and Google to become the custodians of our civil liberties, protecting us from state overreach.

It is a strange world where I trust Apple more than I trust the government, but here we are.

Radiation-Detection Systems Are Quietly Running in the Background All Around You

Most people are not aware of how much radiation monitoring goes on around them all the time, including in public places. Airports have sophisticated radiation detectors, for example. In 2022, devices at Heathrow flagged a package that turned out to contain a small amount of uranium. There was no risk to the public, authorities said at the time.

Mirion is one of several companies that make radiation detectors. Its products are used for defense and security applications, as well as in nuclear power plants, laboratories, and research settings. “If there’s an incident in a nuclear plant like a fuel leak…these systems are connected to the safety system of the nuclear plant, so the nuclear plant will shut down,” explains James Cocks, Mirion’s chief technology officer. Area monitors draw particulates emitted by power plants onto filter paper, which can then be analyzed to determine whether there has been an uncontrolled release of radiation.

The company even makes a radiation detector designed to fit to the underside of a drone. Cocks says that, in the immediate aftermath of Fukushima, such was the need to collect data on radiation that someone drove around on a motorbike with a radiation detector. Drones would, today, offer a safer way of gathering such information, he suggests.

But Mirion also makes handheld detectors that can be carried by personnel keeping an eye on major sports events, for example. And these can distinguish between different types of radiation. You want to be able to tell, for example, whether your higher-than-normal readings are coming from a dirty bomb—or just someone who recently had medical treatment involving a radioisotope. “We can identify whether it’s background, naturally occurring radiation…whether it’s a medical radioisotope or whether it’s…a fission product,” says Cocks.

And so one legacy of the Chernobyl and Fukushima disasters is that we now have hugely upgraded radiation-monitoring systems dotted around the world. There has been a marked increase in efforts to track radiation in the wake of those accidents, says Kearfott.

Bonner acknowledges that some people experience anxiety regarding radiation—now and again, a volunteer would build a Safecast detector, switch it on and “freak out” when it began detecting activity, he says. However, it is important to show how pervasive, and variable, background radiation really is, he says: “We absolutely believe that it’s reassuring to let people know what’s going on.”
