Tech

Engineers design origami structures that change shape and stiffness on demand


Elastic components allow engineers to create structures that respond in new ways to outside force. Credit: Wright Seneres/Princeton University

Princeton engineers are twisting, stretching and creasing structures to create a new type of origami, one that changes its shape and properties in response to changing circumstances. The new method could be useful for prosthetics, antennas and other devices.

When a device needs to fit into a compact space—in a spacecraft or a surgical device—and then unfold into an intricate shape, origami often provides a solution. But most origami shapes are locked into a few set patterns once their folds are made.

A Princeton team led by Glaucio Paulino wanted to create structures that react to an outside stimulus in multiple ways, not just in a few patterned responses. To accomplish this, the team turned to a technique called frustration.

An origami-based structure will fold and twist in certain ways based on the structure’s material properties and its geometry. When engineers prevent that natural motion, they call it “frustrating” the structure. Normally, engineers have to work around frustration, but in this case it expands their toolkit.

“Sometimes frustration is desirable,” said Paulino, the Margareta Engman Augustine Professor of Engineering at Princeton. Frustration allows designers to cause the origami to follow patterns not normally allowed by its geometry. “This opens up many possibilities of things we could engineer that we could never do before.”


In an article published in the Proceedings of the National Academy of Sciences, the researchers described how they added elastic components to cylindrical origami structures called Kresling cells. The elastic sections act like springs. By controlling how the springs respond to a force, the researchers were able to execute precise folding patterns of the cells that were not feasible without the springs.

Paulino said springs allow designers to introduce internal energy into the folded structure using pre-stress. This pre-stress allows the origami to respond in ways that are not possible with ordinary materials. For example, engineers can introduce a twisting spring that rotates the origami in a specific fashion; they can add a spring along the structure’s main axis that either squeezes the structure into a compact shape or stretches it out.
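As a rough intuition for how a pre-stressed spring reshapes a structure's energy landscape, here is a deliberately simplified one-dimensional toy model (my own sketch, not the paper's formulation): a double-well potential stands in for a bistable Kresling cell, and a linear spring whose rest length differs from either equilibrium biases one folded state over the other.

```python
# Toy 1-D model (illustrative only, not the paper's formulation): a bistable
# Kresling-like potential plus a pre-stressed linear spring. The spring's rest
# length x0 tilts the energy landscape, selecting which folded state is preferred.

def origami_energy(x):
    # Double-well potential with minima near x = -1 (folded) and x = +1 (deployed)
    return (x**2 - 1.0)**2

def total_energy(x, k=0.5, x0=1.5):
    # k: spring stiffness; x0: rest length. x0 away from both wells = pre-stress.
    return origami_energy(x) + 0.5 * k * (x - x0)**2

def minimize(f, x_start, step=1e-3, iters=20000):
    # Crude gradient descent with a finite-difference gradient
    x = x_start
    for _ in range(iters):
        grad = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= step * grad
    return x

deployed = minimize(total_energy, 0.8)   # settles near x ~ +1
folded = minimize(total_energy, -0.8)    # settles near x ~ -1

# Without the spring the two wells are equally deep; with the pre-stressed
# spring the deployed well is lower, so that state is energetically preferred.
print(total_energy(deployed) < total_energy(folded))
```

Changing `k` and `x0` shifts which well is deeper and how stiff each state feels, which is the kind of knob the pre-stressed elastic components give designers.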

By combining frustrated cells in stacks, the engineers were able to develop materials with fine control over material properties like stiffness. For example, a prosthetic built with this system can stiffen to provide support while walking on a flat surface but reconfigure into a more flexible state for climbing stairs. The designers could also create adjustable metasurfaces that are used in antennas and optics.

“Exploiting frustration lets us reprogram origami mechanics, for instance turning random Kresling folding into precise, controllable sequences and opening new possibilities for advanced applications,” said Diego Misseroni, a collaborator from the University of Trento.

“We can program any mechanical property that we wish, so this is quite unique,” said Tuo Zhao, a postdoctoral researcher in Paulino’s group.

The team sees potential impact for this type of structure in many fields. This frustrated origami system can combine with other techniques and materials that can change on demand, according to Shixi Zang, a postdoctoral researcher and the paper’s first author. One example is using frustrated origami to develop responsive, modular devices like a passive sunshade that opens and closes based on the ambient temperature.

More information:
Shixi Zang et al, Origami frustration and its influence on energy landscapes of origami assemblies, Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2426790122

Citation:
Engineers design origami structures that change shape and stiffness on demand (2025, September 6)
retrieved 6 September 2025
from https://techxplore.com/news/2025-09-origami-stiffness-demand.html







We Just Found Out Taylor Swift Sleeps on a Coop Pillow—They’re Having a Flash Sale to Celebrate



While I’m a mattress and sleep product expert thanks to years of hands-on experience, I’m also aware that my opinion is not the be-all and end-all for everyone. However, when a megastar is also a fan of a product you’ve reviewed, it’s good confirmation that you’re on the right track.

Taylor Swift, as it turns out, is also a fan of Coop Sleep Goods—which we can confirm based on this December 10 appearance on The Late Show With Stephen Colbert.

Coop’s got some of our favorite pillows, particularly the Original Adjustable pillow. It comes in three shapes: the Crescent, the Cut Out, and the Classic, which is a traditional rectangular shape. I love (and regularly sleep on) the Crescent, which has a gentle curve on the bottom to allow for movement while maintaining head and neck support.




Nvidia Becomes a Major Model Maker With Nemotron 3



Nvidia has made a fortune supplying chips to companies working on artificial intelligence, but today the chipmaker took a step toward becoming a more serious model maker itself by releasing a series of cutting-edge open models, along with data and tools to help engineers use them.

The move, which comes at a moment when AI companies like OpenAI, Google, and Anthropic are developing increasingly capable chips of their own, could be a hedge against these firms veering away from Nvidia’s technology over time.

Open models are already a crucial part of the AI ecosystem, with many researchers and startups using them to experiment, prototype, and build. While OpenAI and Google offer small open models, they do not update them as frequently as their rivals in China. For this reason and others, open models from Chinese companies are currently much more popular, according to data from Hugging Face, a hosting platform for open source projects.

Nvidia’s new Nemotron 3 models are among the best that can be downloaded, modified, and run on one’s own hardware, according to benchmark scores shared by the company ahead of release.

“Open innovation is the foundation of AI progress,” CEO Jensen Huang said in a statement ahead of the news. “With Nemotron, we’re transforming advanced AI into an open platform that gives developers the transparency and efficiency they need to build agentic systems at scale.”

Nvidia is taking a more fully transparent approach than many of its US rivals by releasing the data used to train Nemotron—a fact that should help engineers modify the models more easily. The company is also releasing tools to help with customization and fine-tuning. This includes a new hybrid latent mixture-of-experts model architecture, which Nvidia says is especially good for building AI agents that can take actions on computers or the web. The company is also launching libraries that allow users to train agents to do things using reinforcement learning, which involves giving models simulated rewards and punishments.

Nemotron 3 models come in three sizes: Nano, which has 30 billion parameters; Super, which has 100 billion; and Ultra, which has 500 billion. A model’s parameters loosely correspond to how capable it is as well as how unwieldy it is to run. The largest models are so cumbersome that they need to run on racks of expensive hardware.
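For a rough sense of what “cumbersome” means at these scales, weight memory is roughly parameters times bytes per parameter. A back-of-envelope calculation (my own arithmetic, assuming 16-bit weights; real deployments often quantize further, and activations and caches add more on top):

```python
# Back-of-envelope memory for model weights alone, assuming 2 bytes per
# parameter (fp16/bf16). Sizes are the parameter counts reported for Nemotron 3.
BYTES_PER_PARAM = 2
sizes = {"Nano": 30e9, "Super": 100e9, "Ultra": 500e9}

for name, params in sizes.items():
    gb = params * BYTES_PER_PARAM / 1e9
    print(f"{name}: ~{gb:,.0f} GB of weights")
# → Nano: ~60 GB, Super: ~200 GB, Ultra: ~1,000 GB
```

At roughly a terabyte of weights, the Ultra model exceeds the memory of any single accelerator, which is why models that size run on racks of hardware.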

Model Foundations

Kari Ann Briski, vice president of generative AI software for enterprise at Nvidia, said open models are important to AI builders for three reasons: Builders increasingly need to customize models for particular tasks; it often helps to hand queries off to different models; and it is easier to squeeze more intelligent responses from these models after training by having them perform a kind of simulated reasoning. “We believe open source is the foundation for AI innovation, continuing to accelerate the global economy,” Briski said.

The social media giant Meta released the first advanced open models under the name Llama in February 2023. As competition has intensified, however, Meta has signaled that its future releases might not be open source.

The move is part of a larger trend in the AI industry. Over the past year, US firms have moved away from openness, becoming more secretive about their research and more reluctant to tip off their rivals about their latest engineering tricks.




This Startup Wants to Build Self-Driving Car Software—Super Fast



For the last year and a half, two hacked white Tesla Model 3 sedans each loaded with five extra cameras and one palm-sized supercomputer have quietly cruised around San Francisco. In a city and era swarming with questions about the capabilities and limits of artificial intelligence, the startup behind the modified Teslas is trying to answer what amounts to a simple question: How quickly can a company build autonomous vehicle software today?

The startup, which is making its activities public for the first time today, is called HyprLabs. Its 17-person team (just eight of them full-time) is divided between Paris and San Francisco, and the company is helmed by an autonomous vehicle company veteran, Zoox cofounder Tim Kentley-Klay, who suddenly exited the now Amazon-owned firm in 2018. Hypr has taken in relatively little funding, $5.5 million since 2022, but its ambitions are wide-ranging. Eventually, it plans to build and operate its own robots. “Think of the love child of R2-D2 and Sonic the Hedgehog,” Kentley-Klay says. “It’s going to define a new category that doesn’t currently exist.”

For now, though, the startup is announcing its software product called Hyprdrive, which it bills as a leap forward in how engineers train vehicles to pilot themselves. These sorts of leaps are all over the robotics space, thanks to advances in machine learning that promise to bring down the cost of training autonomous vehicle software, and the amount of human labor involved. This training evolution has brought new movement to a space that for years suffered through a “trough of disillusionment,” as tech builders failed to meet their own deadlines to operate robots in public spaces. Now, robotaxis pick up paying passengers in more and more cities, and automakers make newly ambitious promises about bringing self-driving to customers’ personal cars.

But using a small, agile, and cheap team to get from “driving pretty well” to “driving much more safely than a human” is its own long hurdle. “I can’t say to you, hand on heart, that this will work,” Kentley-Klay says. “But what we’ve built is a really solid signal. It just needs to be scaled up.”

Old Tech, New Tricks

HyprLabs’ software training technique is a departure from other robotics startups’ approaches to teaching their systems to drive themselves.

First, some background: For years, the big battle in autonomous vehicles seemed to be between those who used just cameras to train their software—Tesla!—and those who depended on other sensors, too—Waymo, Cruise!—including once-expensive lidar and radar. But below the surface, larger philosophical differences churned.

Camera-only adherents like Tesla wanted to save money while scheming to launch a gigantic fleet of robots; for a decade, CEO Elon Musk’s plan has been to suddenly switch all of his customers’ cars to self-driving ones with the push of a software update. The upside was that these companies had lots and lots of data, as their not-yet self-driving cars collected images wherever they drove. This information got fed into what’s called an “end-to-end” machine learning model, trained through reinforcement. The system takes in images—a bike—and spits out driving commands—move the steering wheel to the left and go easy on the acceleration to avoid hitting it. “It’s like training a dog,” says Philip Koopman, an autonomous vehicle software and safety researcher at Carnegie Mellon University. “At the end, you say, ‘Bad dog’ or ‘Good dog.’”
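Koopman's dog-training analogy can be made concrete with a toy sketch (entirely illustrative, with no relation to any company's actual stack): a one-layer "end-to-end" policy maps perception features straight to a steering command, and a scalar "good dog"/"bad dog" reward improves it, here via a random-search hill-climb standing in for a real reinforcement learning algorithm.

```python
# Illustrative toy: an "end-to-end" policy maps sensor features directly to a
# steering command, and a +1/-1 reward drives a crude hill-climbing update.
import random

random.seed(0)

def policy(features, weights):
    # Single linear layer: perception features in, steering command out
    return sum(f * w for f, w in zip(features, weights))

def reward(steering, obstacle_on_right):
    # "Good dog" (+1) for steering away from the obstacle, "bad dog" (-1) otherwise
    return 1.0 if (steering < 0) == obstacle_on_right else -1.0

def episode_reward(weights):
    # Score the policy with an obstacle on the right (s = +1) and on the left (s = -1)
    return sum(reward(policy([1.0, s], weights), s > 0) for s in (1.0, -1.0))

weights = [0.0, 0.0]
for _ in range(200):
    # Randomly perturb the weights; keep the change only if reward strictly improves
    trial = [w + random.gauss(0, 0.1) for w in weights]
    if episode_reward(trial) > episode_reward(weights):
        weights = trial

# The trained policy steers left (negative) when the obstacle is on the right,
# and right or straight (non-negative) when it is on the left.
print(policy([1.0, 1.0], weights) < 0, policy([1.0, -1.0], weights) >= 0)
```

Real systems replace the linear layer with deep networks and the hill-climb with far more sample-efficient learning over enormous fleet datasets, but the feedback loop is the same shape: behavior, reward, adjustment.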


