The AI Boom Is Fueling a Need for Speed in Chip Networking
The new era of Silicon Valley runs on networking—and not the kind you find on LinkedIn.
As the tech industry funnels billions into AI data centers, chip makers both big and small are ramping up innovation around the technology that connects chips to other chips, and server racks to other server racks.
Networking technology has been around since the dawn of computing, connecting mainframes so they could share data. In the world of semiconductors, networking plays a part at almost every level of the stack—from the interconnect between transistors on the chip itself to the external connections between boxes or racks of chips.
Chip giants like Nvidia, Broadcom, and Marvell already have well-established networking bona fides. But in the AI boom, some companies are seeking new networking approaches that help them speed up the massive amounts of digital information flowing through data centers. This is where deep-tech startups like Lightmatter, Celestial AI, and PsiQuantum, which use optical technology to accelerate high-speed computing, come in.
Optical technology, or photonics, is having a coming-of-age moment. The technology was considered “lame, expensive, and marginally useful” for 25 years until the AI boom reignited interest in it, according to PsiQuantum cofounder and chief scientific officer Pete Shadbolt. (Shadbolt appeared on a panel last week that WIRED cohosted.)
Some venture capitalists and institutional investors, hoping to catch the next wave of chip innovation or at least find a suitable acquisition target, are funneling billions into startups like these that have found new ways to speed up data throughput. They believe that traditional interconnect technology, which relies on electrons, simply can’t keep pace with the growing need for high-bandwidth AI workloads.
“If you look back historically, networking was really boring to cover, because it was switching packets of bits,” says Ben Bajarin, a longtime tech analyst who serves as CEO of the research firm Creative Strategies. “Now, because of AI, it’s having to move fairly robust workloads, and that’s why you’re seeing innovation around speed.”
Big Chip Energy
Bajarin and others give credit to Nvidia for being prescient about the importance of networking when it made two key acquisitions in the technology years ago. In 2020, Nvidia spent nearly $7 billion to acquire the Israeli firm Mellanox Technologies, which makes high-speed networking solutions for servers and data centers. Shortly after, Nvidia purchased Cumulus Networks to power its Linux-based software system for computer networking. This was a turning point for Nvidia, which rightly wagered that the GPU and its parallel-computing capabilities would become much more powerful when clustered with other GPUs and put in data centers.
While Nvidia dominates in vertically integrated GPU stacks, Broadcom has become a key player in custom chip accelerators and high-speed networking technology. The $1.7 trillion company works closely with Google, Meta, and more recently, OpenAI, on chips for data centers. It’s also at the forefront of silicon photonics. And last month, Reuters reported that Broadcom is readying a new networking chip called Thor Ultra, designed to provide a “critical link between an AI system and the rest of the data center.”
On its earnings call last week, semiconductor design giant ARM announced plans to acquire the networking company DreamBig for $265 million. DreamBig makes AI chiplets—small, modular circuits designed to be packaged together in larger chip systems—in partnership with Samsung. The startup has “interesting intellectual property … which [is] very key for scale-up and scale-out networking,” said ARM CEO Rene Haas on the call. (This means connecting components and sending data up and down a single chip cluster, as well as connecting racks of chips with other racks.)
Light On
Lightmatter CEO Nick Harris has pointed out that the amount of computing power that AI requires now doubles every three months—much faster than Moore’s Law dictates. Computer chips are getting bigger and bigger. “Whenever you’re at the state of the art of the biggest chips you can build, all performance after that comes from linking the chips together,” Harris says.
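Taking Harris’s three-month doubling figure at face value, a quick back-of-the-envelope calculation shows how wide the gap is. (The three-month cadence is his claim; the roughly two-year doubling used here for Moore’s Law is the conventional rule of thumb, not a figure from the article.)

```python
# Compound growth over one year under two doubling cadences.
months = 12

ai_doublings = months / 3       # demand doubles every 3 months -> 4 doublings/year
moore_doublings = months / 24   # transistor counts double ~every 24 months -> 0.5 doublings/year

ai_growth = 2 ** ai_doublings        # 16x in a year
moore_growth = 2 ** moore_doublings  # ~1.41x in a year

print(f"AI compute demand: {ai_growth:.1f}x per year")
print(f"Moore's Law pace:  {moore_growth:.2f}x per year")
```

Under these assumptions, demand grows sixteenfold in the time it takes transistor density to improve by less than half, which is why, as Harris argues, the remaining performance has to come from linking chips together.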
His company’s approach is cutting-edge and doesn’t rely on traditional networking technology. Lightmatter builds silicon photonics that link chips together. It claims to make the world’s fastest photonic engine for AI chips, essentially a 3D stack of silicon connected by light-based interconnect technology. The startup has raised more than $500 million over the past two years from investors like GV and T. Rowe Price. Last year, its valuation reached $4.4 billion.