Tech
A neural blueprint for human-like intelligence in soft robots
A new artificial intelligence control system enables soft robotic arms to learn a wide repertoire of motions and tasks once, then adjust to new scenarios on the fly, without needing retraining or sacrificing functionality.
This breakthrough brings soft robotics closer to human-like adaptability for real-world applications, such as in assistive robotics, rehabilitation robots, and wearable or medical soft robots, by making them more intelligent, versatile, and safe.
The work was led by the Mens, Manus and Machina (M3S) interdisciplinary research group — a play on the Latin MIT motto “mens et manus,” or “mind and hand,” with the addition of “machina” for “machine” — within the Singapore-MIT Alliance for Research and Technology. Co-leading the project are researchers from the National University of Singapore (NUS), alongside collaborators from MIT and Nanyang Technological University in Singapore (NTU Singapore).
Unlike regular robots that move using rigid motors and joints, soft robots are made from flexible materials such as soft rubber and move using special actuators — components that act like artificial muscles to produce physical motion. While their flexibility makes them ideal for delicate or adaptive tasks, controlling soft robots has always been a challenge because their shape changes in unpredictable ways. Real-world environments are often complicated and full of unexpected disturbances, and even small changes in conditions — like a shift in weight, a gust of wind, or a minor hardware fault — can throw off their movements.
Despite substantial progress in soft robotics, existing approaches often can only achieve one or two of the three capabilities needed for soft robots to operate intelligently in real-world environments: using what they’ve learned from one task to perform a different task, adapting quickly when the situation changes, and guaranteeing that the robot will stay stable and safe while adapting its movements. This lack of adaptability and reliability has been a major barrier to deploying soft robots in real-world applications until now.
In an open-access study titled “A general soft robotic controller inspired by neuronal structural and plastic synapses that adapts to diverse arms, tasks, and perturbations,” published Jan. 6 in Science Advances, the researchers describe how they developed a new AI control system that allows soft robots to adapt across diverse tasks and disturbances. The study takes inspiration from the way the human brain learns and adapts, and was built on extensive research in learning-based robotic control, embodied intelligence, soft robotics, and meta-learning.
The system uses two complementary sets of “synapses” — connections that adjust how the robot moves — working in tandem. The first set, known as “structural synapses,” is trained offline on a variety of foundational movements, such as bending or extending a soft arm smoothly. These form the robot’s built-in skills and provide a strong, stable foundation. The second set, called “plastic synapses,” continually updates online as the robot operates, fine-tuning the arm’s behavior to respond to what is happening in the moment. A built-in stability measure acts like a safeguard, so even as the robot adjusts during online adaptation, its behavior remains smooth and controlled.
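The division of labor between the two sets of synapses can be sketched in a few lines of Python. This is an illustrative toy, not the authors’ actual controller: the linear model, the gradient-style online update, and the norm clamp standing in for the stability safeguard are all simplifying assumptions.

```python
import numpy as np

# Hypothetical sketch: "structural" synapses W_s are frozen after offline
# training; "plastic" synapses W_p adapt online. A simple norm clamp stands
# in for the paper's stability safeguard. All names and the update rule are
# illustrative assumptions.

rng = np.random.default_rng(0)
W_s = rng.standard_normal((2, 3)) * 0.1   # offline-trained skills (frozen)
W_p = np.zeros((2, 3))                    # online adaptation (starts at zero)

def control(x):
    """Actuator command from sensor state x: structural + plastic synapses."""
    return (W_s + W_p) @ x

def adapt(x, error, lr=0.1, max_norm=0.5):
    """One online plastic update driven by tracking error, then a clamp."""
    global W_p
    W_p -= lr * np.outer(error, x)        # gradient-style correction
    norm = np.linalg.norm(W_p)
    if norm > max_norm:                   # safeguard: bound the size of the
        W_p *= max_norm / norm            # online adjustment

x = np.array([1.0, 0.5, -0.2])            # a fixed sensor state
target = np.array([0.3, -0.1])            # desired actuator command
for _ in range(50):
    adapt(x, control(x) - target)

print(np.round(control(x) - target, 3))   # residual error after adaptation
```

The key design idea the sketch mirrors is that only the small plastic component changes at run time, so the offline-learned skills are never overwritten and the adjustment stays bounded.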
“Soft robots hold immense potential to take on tasks that conventional machines simply cannot, but true adoption requires control systems that are both highly capable and reliably safe. By combining structural learning with real-time adaptiveness, we’ve created a system that can handle the complexity of soft materials in unpredictable environments,” says MIT Professor Daniela Rus, co-lead principal investigator at M3S, director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and co-corresponding author of the paper. “It’s a step closer to a future where versatile soft robots can operate safely and intelligently alongside people — in clinics, factories, or everyday lives.”
“This new AI control system is one of the first general soft-robot controllers that can achieve all three key aspects needed for soft robots to be used in society and various industries. It can apply what it learned offline across different tasks, adapt instantly to new conditions, and remain stable throughout — all within one control framework,” says Associate Professor Zhiqiang Tang, first author and co-corresponding author of the paper, who carried out the research as a postdoc at M3S and at NUS and is now an associate professor at Southeast University in China (SEU China).
The system supports multiple task types, enabling soft robotic arms to execute trajectory tracking, object placement, and whole-body shape regulation within one unified approach. It also generalizes across different soft-arm platforms.
The system was tested and validated on two physical platforms — a cable-driven soft arm and a shape-memory-alloy–actuated soft arm — and delivered impressive results. It achieved a 44–55 percent reduction in tracking error under heavy disturbances; over 92 percent shape accuracy under payload changes, airflow disturbances, and actuator failures; and stable performance even when up to half of the actuators failed.
“This work redefines what’s possible in soft robotics. We’ve shifted the paradigm from task-specific tuning and capabilities toward a truly generalizable framework with human-like intelligence. It is a breakthrough that opens the door to scalable, intelligent soft machines capable of operating in real-world environments,” says Professor Cecilia Laschi, co-corresponding author and principal investigator at M3S, Provost’s Chair Professor in the NUS Department of Mechanical Engineering at the College of Design and Engineering, and director of the NUS Advanced Robotics Centre.
This breakthrough paves the way for more robust soft robotic systems in manufacturing, logistics, inspection, and medical robotics that operate without constant reprogramming — reducing downtime and costs. In health care, assistive and rehabilitation devices can automatically tailor their movements to a patient’s changing strength or posture, while wearable or medical soft robots can respond more sensitively to individual needs, improving safety and patient outcomes.
The researchers plan to extend this technology to robotic systems and components that can operate at higher speeds and in more complex environments, with potential applications in assistive robotics, medical devices, and industrial soft manipulators, as well as integration into real-world autonomous systems.
The research conducted at SMART was supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program.
One of Our Favorite 360 Cams Is 35 Percent Off
Tired of taking your action camera on an adventure, only to get home and find out you missed the action with a bad angle? One option is to switch to a 360-degree action cam, so you can capture all of the action and then edit down to just the good stuff later. One of our favorite options, the DJI Osmo 360, is currently available for just $390 on Amazon, a $209 discount from its usual price, and it comes with a selfie stick and an extra battery.
The DJI Osmo 360 achieves its impressive all-around video quality by leveraging a pair of 1/1.1-inch sensors, larger than some other offerings, and by supporting 10-bit color. You can really see that in the camera’s output, with colors that are vivid and bold, to the point that you may need to dial them back a bit in post if you want something more natural. With support for up to 50 frames per second at 8K when recording in 360 degrees, or 120 fps at 4K when shooting with only one sensor, you’ll have plenty of material to work with. In our testing, it ran for just shy of two hours at 30 fps, which is also around the time the internal storage had filled up anyway.
If you plan on catching any serious discussions with your Osmo 360, you’ll be pleased to know it connects directly to DJI’s line of wireless lavalier microphones, including the excellent and frequently discounted DJI Mic 2 and Mic Mini. If you want to mount it to something other than the included 1.2-meter selfie stick, it has both DJI’s magnetic attachment system and a more traditional ¼”-20 tripod mount. The DJI Mimo app lets you control the camera and adjust any settings, and there’s even a simple editor for on-the-fly production. For desktop users, DJI Studio has even more in-depth settings and editing options, in case you don’t want to pay for Premiere.
The DJI Osmo 360 is one of our favorite action cameras, and is particularly appealing at the discounted price point, but make sure to check out our full review for more info, or head over to our full roundup to see what else is available.
Artemis II: Everything We Know as Its Crew Approaches the Far Side of the Moon
On day six of its mission, Artemis II is closing in on the far side of the moon. Meanwhile, the historic journey has not been without fascinating and curious stories, from the images and videos that its four crew members have shared with the world to the inevitable unforeseen events—including a tricky toilet situation.
A few hours before the crew begins its lunar flyby, here’s how things are going on Artemis II.
When Will They Reach the Far Side of the Moon?
While Artemis II won’t actually land on the moon (that won’t happen until Artemis IV), that does not make this mission any less compelling. Once the Artemis II astronauts finish flying over the far side of the moon, they will hold the historic distinction of being the humans who have traveled the farthest from Earth.
They will also test all the systems needed for future lunar missions, validating life support, navigation, spacesuits, communications, and other human operations in deep space.
But when are they supposed to reach this far-off point? First, the Orion capsule reached what is known as the moon’s “sphere of influence” on Sunday night. This is the region where the moon’s gravitational pull on the spacecraft is stronger than Earth’s.
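The size of that sphere can be estimated with the classical Laplace formula, r ≈ a·(m/M)^(2/5), where a is the mean Earth-moon distance and m/M the moon-to-Earth mass ratio. A quick back-of-the-envelope check in Python (an approximation, not a mission parameter):

```python
# Laplace sphere-of-influence estimate: r = a * (m / M)**(2/5)
a = 384_400          # mean Earth-moon distance, km
mass_ratio = 0.0123  # moon mass / Earth mass

r_soi = a * mass_ratio ** (2 / 5)
print(f"Moon's sphere of influence: ~{r_soi:,.0f} km")
```

This lands at roughly 66,000 kilometers from the moon, which is why Orion crossed the boundary well before reaching the moon itself.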
At present, Orion is circling the moon. Once the capsule passes behind the moon’s far side, approximately 7,000 kilometers from the surface, communications with Earth will be interrupted. For six hours, the crew will be able to view the far side of the moon with their own eyes — a sight previously seen in person only by the Apollo astronauts who looped around the moon decades ago.
That six-hour flyby of the far side of the moon is expected to begin Monday, April 6, at 2:45 pm EDT (7:45 pm London time).
After that, the capsule will use the moon’s gravity to propel itself back to Earth. Splashdown is scheduled for April 10, the tenth day of the mission, in the Pacific Ocean off the coast of California.
Remember that you can follow the live broadcast of the Artemis II mission from NASA’s official channels.
What Has Happened so Far?
Since its successful launch on April 1 from Kennedy Space Center, the Artemis II crew has shared several spectacular photos, such as the featured image in this post, which shows mission specialist Christina Koch looking down at Earth through one of Orion’s main cabin windows.
This incredible photo of Earth, taken on April 2, went viral on social media, echoing the famous “Blue Marble” image captured by the Apollo 17 astronauts in 1972.
View of Earth taken by astronaut Reid Wiseman from the window of the Orion spacecraft after completing the translunar injection maneuver on April 2, 2026. Photograph: Reid Wiseman/NASA/Getty Images
The DOJ Misled a Judge About How It’s Using Voter Roll Data
Last week in Rhode Island, in a hearing over the Trump administration’s efforts to access the state’s unredacted voter lists, US district judge Mary McElroy asked a Department of Justice lawyer what the agency had been doing with the voter roll data it had already amassed from other states in recent months.
“We have not done anything yet,” said Eric Neff, the acting chief of the agency’s voting section, a core part of the DOJ’s civil rights division that focuses on enforcing federal laws that protect the right to vote. Neff added that the data the DOJ collected from states—which can include Social Security numbers, driver’s license numbers, dates of birth, and addresses—was being kept separate.
“The United States is taking extra concern to make sure that we’re complying with the Privacy Act in every conceivable way,” Neff added. The Privacy Act of 1974 regulates how government agencies collect and use personally identifiable information about US residents.
But Neff was not telling the truth: The DOJ, he later admitted, was pooling the data and already analyzing it to identify voting irregularities.
In a court document filed on March 27, Neff walked back his claims. “The United States represented that each data set was stored separately,” Neff wrote. “The United States also stated that no analysis had yet been conducted on the data. To correct and clarify the record, preliminary internal data analysis of the nonpublic voter registration data has begun. In particular, the Civil Rights Division has begun the process of identifying and quantifying the number and type of duplicate and deceased registered voters in each state.”
The revelation confirms what was widely speculated: the DOJ is pooling the data and using it to hunt for suspected voting irregularities ahead of the midterms, a core part of Trump’s broad attack on elections.
Neff and the DOJ did not respond to repeated requests for comment.
Critics have grown increasingly concerned about the DOJ’s voting section, which has undergone a stark transformation since President Donald Trump retook office. A newly installed coterie of inexperienced but ultra-loyal lawyers, many of whom have supported election denial conspiracy theories, have spent their time forcing states to hand over their voter roll information.
The initiative began in May last year, when the Department of Justice sent letters to election officials in at least 48 states and Washington, DC, asking for unredacted voter rolls. Some Republican-led states immediately handed over the information, but dozens of others pushed back. As a result, Neff and his colleagues have sued 30 states, asking courts to force them to hand over the information. So far, courts have sided with the states, with judges already dismissing cases in California, Michigan, and Oregon.
In many of the lawsuits, state election officials pointed out the huge security risk involved in sharing such sensitive data, especially when it was unclear how the data would be stored or who it would be shared with. “We still have no idea what the government is doing with this data,” says David Becker, the head of the Center for Election Innovation and Research and a former Justice Department lawyer. “No idea where it is being stored, how it is being protected, or who has access to it. This data is incredibly sensitive. If someone has any of these three data points on any of us, Social Security number, driver’s license number, or date of birth, they can wreck us financially. This is why the states protect this data, and they do a good job of it.”
