Tech
Using generative AI to diversify virtual training grounds for robots
Chatbots like ChatGPT and Claude have experienced a meteoric rise in usage over the past three years because they can help you with a wide range of tasks. Whether you’re writing Shakespearean sonnets, debugging code, or hunting down an answer to an obscure trivia question, artificial intelligence systems seem to have you covered. The source of this versatility? Billions, or even trillions, of textual data points across the internet.
Those data aren’t enough to teach a robot to be a helpful household or factory assistant, though. To understand how to handle, stack, and place various arrangements of objects across diverse environments, robots need demonstrations. You can think of robot training data as a collection of how-to videos that walk the systems through each motion of a task. Collecting these demonstrations on real robots is time-consuming and not perfectly repeatable, so engineers have created training data by generating simulations with AI (which often don’t reflect real-world physics), or tediously handcrafting each digital environment from scratch.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Toyota Research Institute may have found a way to create the diverse, realistic training grounds robots need. Their “steerable scene generation” approach creates digital scenes of things like kitchens, living rooms, and restaurants that engineers can use to simulate lots of real-world interactions and scenarios. Trained on over 44 million 3D rooms filled with models of objects such as tables and plates, the tool places existing assets in new scenes, then refines each one into a physically accurate, lifelike environment.
Steerable scene generation creates these 3D worlds by “steering” a diffusion model — an AI system that generates a visual from random noise — toward a scene you’d find in everyday life. The researchers used this generative system to “in-paint” an environment, filling in particular elements throughout the scene. You can imagine a blank canvas suddenly turning into a kitchen scattered with 3D objects, which are gradually rearranged into a scene that imitates real-world physics. For example, the system ensures that a fork doesn’t pass through a bowl on a table — a common glitch in 3D graphics known as “clipping,” where models overlap or intersect.
How exactly steerable scene generation guides its creation toward realism, however, depends on the strategy you choose. Its main strategy is “Monte Carlo tree search” (MCTS), where the model creates a series of alternative scenes, filling them out in different ways toward a particular objective (like making a scene more physically realistic, or including as many edible items as possible). It’s used by the AI program AlphaGo to beat human opponents in Go (a game similar to chess), as the system considers potential sequences of moves before choosing the most advantageous one.
“We are the first to apply MCTS to scene generation by framing the scene generation task as a sequential decision-making process,” says MIT Department of Electrical Engineering and Computer Science (EECS) PhD student Nicholas Pfaff, who is a CSAIL researcher and a lead author on a paper presenting the work. “We keep building on top of partial scenes to produce better or more desired scenes over time. As a result, MCTS creates scenes that are more complex than what the diffusion model was trained on.”
In one particularly telling experiment, MCTS added the maximum number of objects to a simple restaurant scene. It featured as many as 34 items on a table, including massive stacks of dim sum dishes, after training on scenes with only 17 objects on average.
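To make the sequential-decision framing concrete, here is a minimal, hypothetical sketch of MCTS-style scene building in Python. The asset list, the "edible items" objective, and the random-rollout value estimates are all stand-ins invented for illustration; the actual system proposes placements with a trained diffusion model rather than sampling from a fixed list.

```python
import random

# Hypothetical stand-ins for the system's components (illustration only):
# a "scene" is a list of placed asset names, and the objective mirrors the
# article's example of steering toward as many edible items as possible.
ASSETS = ["plate", "fork", "dim_sum", "dumpling", "teacup"]
EDIBLE = {"dim_sum", "dumpling"}
MAX_OBJECTS = 5

def score(scene):
    """The steering objective: count of edible items in the scene."""
    return sum(1 for a in scene if a in EDIBLE)

def rollout(scene, rng):
    """Randomly complete a partial scene, then score the final result."""
    while len(scene) < MAX_OBJECTS:
        scene = scene + [rng.choice(ASSETS)]
    return score(scene)

def search(rollouts_per_child=20, seed=0):
    """MCTS-flavored sequential scene building: at each step, expand the
    child (partial scene + one object) whose random completions score best."""
    rng = random.Random(seed)
    scene = []
    while len(scene) < MAX_OBJECTS:
        children = [scene + [a] for a in ASSETS]
        values = [sum(rollout(c, rng) for _ in range(rollouts_per_child))
                  for c in children]
        scene = children[values.index(max(values))]
    return scene
```

Because each step commits to the most promising expansion of a partial scene, the search can end up with scenes (here, tables crowded with edible items) that random sampling alone would rarely produce.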
Steerable scene generation also allows you to generate diverse training scenarios via reinforcement learning — essentially, teaching a diffusion model to fulfill an objective by trial-and-error. After you train on the initial data, your system undergoes a second training stage, where you outline a reward (basically, a desired outcome with a score indicating how close you are to that goal). The model automatically learns to create scenes with higher scores, often producing scenarios that are quite different from those it was trained on.
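As a rough illustration of that second training stage, the sketch below fine-tunes a toy categorical "generator" with a REINFORCE-style update so that high-reward scenes (here, ones full of edible items) become more likely. Everything in it, including the asset vocabulary, the reward function, and the tiny policy, is a hypothetical stand-in for the diffusion model the researchers actually train.

```python
import math
import random

ASSETS = ["plate", "fork", "dim_sum", "dumpling"]
EDIBLE = {"dim_sum", "dumpling"}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_scene(logits, n, rng):
    """Sample a scene of n objects from the toy generator."""
    probs = softmax(logits)
    return [rng.choices(ASSETS, weights=probs)[0] for _ in range(n)]

def reward(scene):
    """The user-defined score: how close the scene is to the desired outcome
    (here, the fraction of edible items)."""
    return sum(1 for a in scene if a in EDIBLE) / len(scene)

def finetune(steps=2000, lr=0.1, seed=0):
    """REINFORCE-style update: raise the probability of assets that appear
    in high-reward scenes, lower it for assets in low-reward scenes."""
    rng = random.Random(seed)
    logits = [0.0] * len(ASSETS)
    for _ in range(steps):
        scene = sample_scene(logits, 5, rng)
        r = reward(scene) - 0.5          # subtract a baseline to cut variance
        probs = softmax(logits)
        for i, asset in enumerate(ASSETS):
            # Gradient of the sampled scene's log-probability w.r.t. logit i.
            grad = scene.count(asset) - len(scene) * probs[i]
            logits[i] += lr * r * grad
    return logits
```

After training, the generator concentrates its probability mass on edible assets, which mirrors the article's point that the rewarded model drifts toward scenes quite different from its original training distribution.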
Users can also prompt the system directly by typing in specific visual descriptions (like “a kitchen with four apples and a bowl on the table”). Then, steerable scene generation can bring your requests to life with precision. For example, the tool accurately followed users’ prompts at rates of 98 percent when building scenes of pantry shelves, and 86 percent for messy breakfast tables. Both marks are at least a 10 percent improvement over comparable methods like “MiDiffusion” and “DiffuScene.”
The system can also complete specific scenes via prompting or light directions (like “come up with a different scene arrangement using the same objects”). You could ask it to place apples on several plates on a kitchen table, for instance, or put board games and books on a shelf. It’s essentially “filling in the blank” by slotting items in empty spaces, but preserving the rest of a scene.
According to the researchers, the strength of their project lies in its ability to create many scenes that roboticists can actually use. “A key insight from our findings is that it’s OK for the scenes we pre-trained on to not exactly resemble the scenes that we actually want,” says Pfaff. “Using our steering methods, we can move beyond that broad distribution and sample from a ‘better’ one. In other words, generating the diverse, realistic, and task-aligned scenes that we actually want to train our robots in.”
Such vast scenes became the testing grounds where they could record a virtual robot interacting with different items. The machine carefully placed forks and knives into a cutlery holder, for instance, and rearranged bread onto plates in various 3D settings. Each simulation appeared fluid and realistic, resembling the real-world, adaptable robots that steerable scene generation could one day help train.
While the system could be an encouraging path forward in generating lots of diverse training data for robots, the researchers say their work is more of a proof of concept. In the future, they’d like to use generative AI to create entirely new objects and scenes, instead of using a fixed library of assets. They also plan to incorporate articulated objects that the robot could open or twist (like cabinets or jars filled with food) to make the scenes even more interactive.
To make their virtual environments even more realistic, Pfaff and his colleagues may incorporate real-world objects, drawing on a library of objects and scenes pulled from images on the internet and on their previous work on “Scalable Real2Sim.” By expanding how diverse and lifelike AI-constructed robot testing grounds can be, the team hopes to build a community of users that’ll create lots of data, which could then be used as a massive dataset to teach dexterous robots different skills.
“Today, creating realistic scenes for simulation can be quite a challenging endeavor; procedural generation can readily produce a large number of scenes, but they likely won’t be representative of the environments the robot would encounter in the real world. Manually creating bespoke scenes is both time-consuming and expensive,” says Jeremy Binagia, an applied scientist at Amazon Robotics who wasn’t involved in the paper. “Steerable scene generation offers a better approach: train a generative model on a large collection of pre-existing scenes and adapt it (using a strategy such as reinforcement learning) to specific downstream applications. Compared to previous works that leverage an off-the-shelf vision-language model or focus just on arranging objects in a 2D grid, this approach guarantees physical feasibility and considers full 3D translation and rotation, enabling the generation of much more interesting scenes.”
“Steerable scene generation with post training and inference-time search provides a novel and efficient framework for automating scene generation at scale,” says Toyota Research Institute roboticist Rick Cory SM ’08, PhD ’10, who also wasn’t involved in the paper. “Moreover, it can generate ‘never-before-seen’ scenes that are deemed important for downstream tasks. In the future, combining this framework with vast internet data could unlock an important milestone towards efficient training of robots for deployment in the real world.”
Pfaff wrote the paper with senior author Russ Tedrake, who is the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT, a senior vice president of large behavior models at the Toyota Research Institute, and a CSAIL principal investigator. Other authors were Toyota Research Institute robotics researcher Hongkai Dai SM ’12, PhD ’16; team lead and Senior Research Scientist Sergey Zakharov; and Carnegie Mellon University PhD student Shun Iwase. Their work was supported, in part, by Amazon and the Toyota Research Institute. The researchers presented their work at the Conference on Robot Learning (CoRL) in September.
Tech
I’ve Tried Every Digital Notebook. Here Are the Best Ones on Sale
I love a digital notebook. I write about them all year long here at WIRED, and it’s not often that my favorites go on sale (or that any go on sale at all, outside Amazon’s own sale events). But this year, multiple digital notebooks I love are on sale for the biggest sale event of the year.
If you’ve thought about getting one of these for yourself, there’s truly no better moment. From reMarkable’s on-sale bundles to Kobo’s deals, you can shop five of the best digital notebooks we’ve ever tried right now at a lower price than you might find until next year. They’re handy devices just about everyone can enjoy, whether you want to digitally annotate your books or write out your grocery list without using a piece of paper.
Looking for more great sales to shop? Don’t miss our guides to the Best Amazon Device and Kindle Deals, Best Laptop Deals, the Absolute Best Cyber Monday Deals, and our liveblog.
Update Dec. 1: We updated prices, links, and deals, and added the Rocketbook Fusion Plus notebook.
The Best Digital Notebook Deals
Some of the best digital notebooks we’ve tried come from reMarkable, and one of reMarkable’s models always seems to reign supreme over our digital notebooks guide. While the Paper Pro Move is the newest model, the reMarkable Paper Pro that launched in September 2024 is my current all-around favorite. It’s not only powerful, with tons of tools and an easy interface, but it also packs a color screen for colorful notes, plus a gentle front light so that you can use it in darker environments. You can get the bundles on sale right now, so combine one of reMarkable’s markers and folio covers with a Paper Pro to get $50 off.
The best discount from reMarkable is actually for its older device and our previous top pick, the reMarkable 2. It doesn’t have a color screen or the front light, but you’ll get the reMarkable’s great software and options for accessories like the Keyboard Folio to use it like a laptop. The reMarkable 2 bundles are also on sale, so add on your favorite folio of choice on reMarkable’s website to get $70 off.
The Kobo Libra Colour is my favorite all-around e-reader with its color screen and page turner buttons, but you can add on a stylus to have it double as a digital notebook. It’s one of the more affordable options, and it’s a smaller screen than the rest of these, but I especially love that you can use the stylus to doodle on the books you’re reading (something you can’t do with the Kindle Scribe). It’s $30 off on Kobo’s site for Cyber Monday.
The second-generation Kindle Scribe isn’t the best digital notebook, but its long battery life (12 weeks!!) and the convenience of being a Kindle I could already be reading on make it a great go-to for casual notetakers and doodlers. It’s a good choice for Kindle and Amazon users, and there are new models due out this winter, but they likely won’t be as cheap as this one. (Especially since some of those new models will have color!)
If you like the idea of getting a Kobo e-reader that doubles as a digital notebook, you can go for more of a classic size with the larger Elipsa 2E. This one comes with the stylus, so you won’t have to add it on, and it’s $50 off.
The Rocketbook Fusion Plus digital planner and notebook is for those who don’t want to charge their notebook or give up on the whole “paper” experience. Take notes with the included, erasable Pilot Frixion Pen, scan photos of the pages into the app, and erase the whole thing with the damp microfiber cloth (also included). The Fusion Plus is at its steepest discount in recent memory, and comes with templates that range from monthly and weekly pages to project management and meeting notes.
Tech
Artificial tendons give muscle-powered robots a boost
Our muscles are nature’s actuators. The sinewy tissue is what generates the forces that make our bodies move. In recent years, engineers have used real muscle tissue to actuate “biohybrid robots” made from both living tissue and synthetic parts. By pairing lab-grown muscles with synthetic skeletons, researchers are engineering a menagerie of muscle-powered crawlers, walkers, swimmers, and grippers.
But for the most part, these designs are limited in the amount of motion and power they can produce. Now, MIT engineers are aiming to give bio-bots a power lift with artificial tendons.
In a study appearing today in the journal Advanced Science, the researchers developed artificial tendons made from tough and flexible hydrogel. They attached the rubber band-like tendons to either end of a small piece of lab-grown muscle, forming a “muscle-tendon unit.” Then they connected the ends of each artificial tendon to the fingers of a robotic gripper.
When they stimulated the central muscle to contract, the tendons pulled the gripper’s fingers together. The robot pinched its fingers together three times faster, and with 30 times greater force, compared with the same design without the connecting tendons.
The researchers envision that the new muscle-tendon unit could be fitted to a wide range of biohybrid robot designs, much like a universal engineering element.
“We are introducing artificial tendons as interchangeable connectors between muscle actuators and robotic skeletons,” says lead author Ritu Raman, an assistant professor of mechanical engineering (MechE) at MIT. “Such modularity could make it easier to design a wide range of robotic applications, from microscale surgical tools to adaptive, autonomous exploratory machines.”
The study’s MIT co-authors include graduate students Nicolas Castro, Maheera Bawa, Bastien Aymon, Sonika Kohli, and Angel Bu; undergraduate Annika Marschner; postdoc Ronald Heisser; alumni Sarah J. Wu ’19, SM ’21, PhD ’24 and Laura Rosado ’22, SM ’25; and MechE professors Martin Culpepper and Xuanhe Zhao.
Muscle’s gains
Raman and her colleagues at MIT are at the forefront of biohybrid robotics, a relatively new field that has emerged in the last decade. They focus on combining synthetic, structural robotic parts with living muscle tissue as natural actuators.
“Most actuators that engineers typically work with are really hard to make small,” Raman says. “Past a certain size, the basic physics doesn’t work. The nice thing about muscle is, each cell is an independent actuator that generates force and produces motion. So you could, in principle, make robots that are really small.”
Muscle actuators also come with other advantages, which Raman’s team has already demonstrated: The tissue can grow stronger as it works out, and can naturally heal when injured. For these reasons, Raman and others envision that muscly droids could one day be sent out to explore environments that are too remote or dangerous for humans. Such muscle-bound bots could build up their strength for unforeseen traverses or heal themselves when help is unavailable. Biohybrid bots could also serve as small, surgical assistants that perform delicate, microscale procedures inside the body.
All these future scenarios are motivating Raman and others to find ways to pair living muscles with synthetic skeletons. Designs to date have involved growing a band of muscle and attaching either end to a synthetic skeleton, similar to looping a rubber band around two posts. When the muscle is stimulated to contract, it can pull the parts of a skeleton together to generate a desired motion.
But Raman says this method produces a lot of wasted muscle that is used to attach the tissue to the skeleton rather than to make it move. And that connection isn’t always secure. Muscle is quite soft compared with skeletal structures, and the difference can cause muscle to tear or detach. What’s more, it is often only the contractions in the central part of the muscle that end up doing any work — an amount that’s relatively small and generates little force.
“We thought, how do we stop wasting muscle material, make it more modular so it can attach to anything, and make it work more efficiently?” Raman says. “The solution the body has come up with is to have tendons that are halfway in stiffness between muscle and bone, that allow you to bridge this mechanical mismatch between soft muscle and rigid skeleton. They’re like thin cables that wrap around joints efficiently.”
“Smartly connected”
In their new work, Raman and her colleagues designed artificial tendons to connect natural muscle tissue with a synthetic gripper skeleton. Their material of choice was hydrogel — a squishy yet sturdy polymer-based gel. Raman obtained hydrogel samples from her colleague and co-author Xuanhe Zhao, who has pioneered the development of hydrogels at MIT. Zhao’s group has derived recipes for hydrogels of varying toughness and stretch that can stick to many surfaces, including synthetic and biological materials.
To figure out how tough and stretchy artificial tendons should be in order to work in their gripper design, Raman’s team first modeled the design as a simple system of three types of springs, each representing the central muscle, the two connecting tendons, and the gripper skeleton. They assigned a certain stiffness to the muscle and skeleton, which were previously known, and used this to calculate the stiffness of the connecting tendons that would be required in order to move the gripper by a desired amount.
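The three-spring sizing step can be sketched with a few lines of algebra. The formula and all the numbers below are illustrative assumptions, not values from the paper: two identical tendons in series share the muscle's shortening with the skeleton spring, which lets you solve for the tendon stiffness that yields a desired finger deflection.

```python
# Hypothetical spring-model sketch of the sizing step (all numbers invented).
# Two identical tendons of stiffness k_t sit in series between the muscle
# and a skeleton spring of stiffness k_skel. The muscle's shortening is
# split between stretching the tendons and deflecting the skeleton:
#     contraction = 2*F/k_t + deflection,   with F = k_skel * deflection
# Solving for the per-tendon stiffness k_t gives the function below.

def required_tendon_stiffness(k_skel, contraction, deflection):
    """Per-tendon stiffness needed so a given muscle contraction produces
    the desired skeleton (finger) deflection. Units must be consistent,
    e.g. N/mm and mm."""
    if contraction <= deflection:
        raise ValueError("muscle must shorten more than the desired deflection")
    force = k_skel * deflection          # force transmitted through the chain
    return 2 * force / (contraction - deflection)

# Example: a 10 N/mm skeleton spring, 2 mm of muscle shortening, and a
# target finger deflection of 1 mm call for k_t = 2*10*1/(2-1) = 20 N/mm.
k_t = required_tendon_stiffness(k_skel=10.0, contraction=2.0, deflection=1.0)
```

The intuition matches the article's point about stiffness halfway between muscle and bone: tendons that are too soft soak up the contraction themselves, while tendons that are too stiff transmit the mechanical mismatch directly to the fragile tissue.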
From this modeling, the team derived a recipe for hydrogel of a certain stiffness. Once the gel was made, the researchers carefully etched the gel into thin cables to form artificial tendons. They attached two tendons to either end of a small sample of muscle tissue, which they grew using lab-standard techniques. They then wrapped each tendon around a small post at the end of each finger of the robotic gripper — a skeleton design that was developed by MechE professor Martin Culpepper, an expert in designing and building precision machines.
When the team stimulated the muscle to contract, the tendons in turn pulled on the gripper to pinch its fingers together. Over multiple experiments, the researchers found that the muscle-tendon gripper worked three times faster and produced 30 times more force compared to when the gripper was actuated with just a band of muscle tissue (and without any artificial tendons). The new tendon-based design was also able to keep up this performance over 7,000 cycles, or muscle contractions.
Overall, Raman saw that the addition of artificial tendons increased the robot’s power-to-weight ratio by 11 times, meaning that the system required far less muscle to do just as much work.
“You just need a small piece of actuator that’s smartly connected to the skeleton,” Raman says. “Normally, if a muscle is really soft and attached to something with high resistance, it will just tear itself before moving anything. But if you attach it to something like a tendon that can resist tearing, it can really transmit its force through the tendon, and it can move a skeleton that it wouldn’t have been able to move otherwise.”
The team’s new muscle-tendon design successfully merges biology with robotics, says biomedical engineer Simone Schürle-Finke, associate professor of health sciences and technology at ETH Zürich.
“The tough-hydrogel tendons create a more physiological muscle–tendon–bone architecture, which greatly improves force transmission, durability, and modularity,” says Schürle-Finke, who was not involved with the study. “This moves the field toward biohybrid systems that can operate repeatably and eventually function outside the lab.”
With the new artificial tendons in place, Raman’s group is moving forward to develop other elements, such as skin-like protective casings, to enable muscle-powered robots in practical, real-world settings.
This research was supported, in part, by the U.S. Department of Defense Army Research Office, the MIT Research Support Committee, and the National Science Foundation.
Tech
The Best Cyber Monday Streaming Deals With a Convenient Roommate’s Email Address
HBO knows you’re bored and cold. It wants you to Max and chill with Noah Wyle in scrubs. The company offers some of the best Cyber Monday streaming deals with a ridiculously low-priced $3/month offer for basic HBO Max (it’s the version with ads and 2K streaming, but still, super-cheap). Disney Plus and Hulu deals are bundled up for $5/month. Apple TV wants back in your life for $6.
Of course, this deal is only meant for new customers. Not boring ol’ existing customers. If you already have basic HBO Max, you’re already paying $11 for the same service, and HBO would like you to keep doing that. Streaming apps are banking on you being complacent and happy in your streaming life. Maybe they’re even taking you for granted.
Sometimes you can get the current deal just by threatening to cancel, or actually canceling, your account. Suddenly, you’re an exciting new customer again! Another method is by using an alternate email account (perhaps your spouse’s or roommate’s?) and alternate payment information as a new customer. If you do use a burner email (you did not hear this from me), check in on your favorite app’s terms of service to make sure you’re not in violation by re-enrolling with different emails. I’ll also issue the caveat that you lose all your viewing data and tailored suggestions if you sign up anew.
But times and wallets are tight! And $3 HBO Max sounds pretty good. After all, every middle-aged American man needs to rewatch The Wire once every five years or so—assuming he’s not the kind of middle-aged man who rewatches The Sopranos instead. Here are the current best streaming deals for Cyber Monday 2025.
Devon Maloney