MIT researchers have spent more than a decade studying techniques that enable robots to find and manipulate hidden objects by “seeing” through obstacles. Their methods utilize surface-penetrating wireless signals that reflect off concealed items.
Now, the researchers are leveraging generative artificial intelligence models to overcome a longstanding bottleneck that limited the precision of prior approaches. The result is a new method that produces more accurate shape reconstructions, which could improve a robot’s ability to reliably grasp and manipulate objects that are blocked from view.
This new technique builds a partial reconstruction of a hidden object from reflected wireless signals and fills in the missing parts of its shape using a specially trained generative AI model.
The researchers also introduced an expanded system that uses generative AI to accurately reconstruct an entire room, including all the furniture. The system utilizes wireless signals sent from one stationary radar, which reflect off humans moving in the space.
This overcomes one key challenge of many existing methods, which require a wireless sensor to be mounted on a mobile robot to scan the environment. And unlike some popular camera-based techniques, their method preserves the privacy of people in the environment.
These innovations could enable warehouse robots to verify packed items before shipping, eliminating waste from product returns. They could also allow smart home robots to understand someone’s location in a room, improving the safety and efficiency of human-robot interaction.
“What we’ve done now is develop generative AI models that help us understand wireless reflections. This opens up a lot of interesting new applications, but technically it is also a qualitative leap in capabilities, from being able to fill in gaps we were not able to see before to being able to interpret reflections and reconstruct entire scenes,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of two papers on these techniques. “We are using AI to finally unlock wireless vision.”
Adib is joined on the first paper by lead author and research assistant Laura Dodds; as well as research assistants Maisy Lam, Waleed Akbar, and Yibo Cheng; and on the second paper by lead author and former postdoc Kaichen Zhou; Dodds; and research assistant Sayed Saad Afzal. Both papers will be presented at the IEEE Conference on Computer Vision and Pattern Recognition.
Surmounting specularity
The Adib Group previously demonstrated the use of millimeter wave (mmWave) signals to create accurate reconstructions of 3D objects that are hidden from view, like a lost wallet buried under a pile.
These waves, which are the same type of signals used in Wi-Fi, can pass through common obstructions like drywall, plastic, and cardboard, and reflect off hidden objects.
But mmWaves usually reflect in a specular manner, which means a wave reflects in a single direction after striking a surface. So large portions of the surface will reflect signals away from the mmWave sensor, making those areas effectively invisible.
“When we want to reconstruct an object, we are only able to see the top surface and we can’t see any of the bottom or sides,” Dodds explains.
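The blind spot Dodds describes follows from the mirror law: a specular reflection leaves the surface at the same angle it arrived, measured from the surface normal. A minimal numerical sketch (illustrative only, not the group's code) shows how a modest surface tilt sends the echo away from a sensor parked directly overhead:

```python
import numpy as np

def specular_reflection(incident, normal):
    """Mirror-law reflection: r = d - 2 (d . n) n, with n a unit normal."""
    n = np.asarray(normal) / np.linalg.norm(normal)
    d = np.asarray(incident)
    return d - 2 * np.dot(d, n) * n

# A wave traveling straight down onto a flat surface bounces straight
# back up toward the sensor.
down = np.array([0.0, 0.0, -1.0])
print(specular_reflection(down, [0.0, 0.0, 1.0]))   # -> [0. 0. 1.]

# Tilt the surface 30 degrees and the echo leaves 60 degrees off vertical,
# missing a sensor positioned directly above.
theta = np.radians(30)
print(specular_reflection(down, [np.sin(theta), 0.0, np.cos(theta)]))
```

Only the patches of surface oriented almost squarely at the sensor return any signal, which is why the raw reconstruction captures just a sliver of the object.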
The researchers previously used principles from physics to interpret reflected signals, but this limits the accuracy of the reconstructed 3D shape.
In the new papers, they overcame that limitation by using a generative AI model to fill in parts that are missing from a partial reconstruction.
“But the challenge then becomes: How do you train these models to fill in these gaps?” Adib says.
Usually, researchers use extremely large datasets to train a generative AI model, which is one reason models like Claude and Llama exhibit such impressive performance. But no mmWave datasets are large enough for training.
Instead, the researchers adapted the images in large computer vision datasets to mimic the properties in mmWave reflections.
“We were simulating the property of specularity and the noise we get from these reflections so we can apply existing datasets to our domain. It would have taken years for us to collect enough new data to do this,” Lam says.
The researchers embed the physics of mmWave reflections directly into these adapted data, creating a synthetic dataset they use to teach a generative AI model to perform plausible shape reconstructions.
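The papers' exact adaptation pipeline is not spelled out here, but the gist can be sketched in a few lines: given surface points of an object and their normals, keep only the sliver whose normals point back toward the sensor (specularity), then jitter it with noise. The visibility cone and noise level below are made-up parameters for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mmwave_view(points, normals, sensor_dir, cone_deg=20.0, noise_std=0.005):
    """Crude mmWave 'camera': a point is visible only if its surface normal
    points back toward the sensor within cone_deg; visible points are then
    perturbed with Gaussian noise to mimic measurement error."""
    toward_sensor = -np.asarray(sensor_dir, dtype=float)
    toward_sensor /= np.linalg.norm(toward_sensor)
    visible = normals @ toward_sensor > np.cos(np.radians(cone_deg))
    partial = points[visible] + rng.normal(0.0, noise_std, (visible.sum(), 3))
    return partial, visible

# Toy object: a unit sphere, where each point's normal is the point itself.
pts = rng.normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# Sensor looking straight down: only the small top cap survives.
partial, visible = simulate_mmwave_view(pts, pts, sensor_dir=[0.0, 0.0, -1.0])
```

With a 20-degree cone, only about 3 percent of the sphere remains visible — the kind of sparse partial reconstruction the generative model is trained to complete.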
The complete system, called Wave-Former, proposes a set of potential object surfaces based on mmWave reflections, feeds them to the generative AI model to complete the shape, and then refines the surfaces until it achieves a full reconstruction.
Wave-Former was able to generate faithful reconstructions of about 70 everyday objects, such as cans, boxes, utensils, and fruit, boosting accuracy by nearly 20 percent over state-of-the-art baselines. The objects were hidden behind or under cardboard, wood, drywall, plastic, and fabric.
Seeing “ghosts”
The team used this same approach to build an expanded system that fully reconstructs entire indoor scenes by leveraging mmWave reflections off humans moving in a room.
Human motion generates multipath reflections. Some mmWaves reflect off the human, then reflect again off a wall or object, and then arrive back at the sensor, Dodds explains.
These secondary reflections create so-called “ghost signals,” which are reflected copies of the original signal that change location as a human moves. These ghost signals are usually discarded as noise, but they also hold information about the layout of the room.
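Geometrically, a single-bounce ghost behaves like an optical mirror image: the person appears reflected across the wall that produced the extra bounce, so a matched person/ghost pair pins down the wall's location. A hedged 2D sketch (not the actual system, which works from raw radar returns):

```python
def ghost_position(person, wall_x):
    """Forward model: a bounce off a vertical wall at x = wall_x makes the
    person appear at their mirror image across that wall."""
    px, py = person
    return (2 * wall_x - px, py)

def infer_wall_x(person, ghost):
    """Inversion: the wall sits halfway between a person and their ghost."""
    return (person[0] + ghost[0]) / 2

# A person at x = 1 in front of a wall at x = 4 casts a ghost at x = 7;
# as the person moves, the ghost moves too, but the midpoint stays put.
for px in (1.0, 1.5, 2.0):
    g = ghost_position((px, 2.0), wall_x=4.0)
    assert infer_wall_x((px, 2.0), g) == 4.0
```

Real multipath is far messier — unknown person/ghost correspondences, multiple bounces, noisy ranges — which is exactly the gap the generative model is trained to close.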
“By analyzing how these reflections change over time, we can start to get a coarse understanding of the environment around us. But trying to directly interpret these signals is going to be limited in accuracy and resolution,” Dodds says.
They used a similar training method to teach a generative AI model to interpret those coarse scene reconstructions and understand the behavior of multipath mmWave reflections. This model fills in the gaps, refining the initial reconstruction until it completes the scene.
They tested their scene reconstruction system, called RISE, using more than 100 human trajectories captured by a single mmWave radar. On average, RISE generated reconstructions that were about twice as precise as those produced by existing techniques.
In the future, the researchers want to improve the granularity and detail in their reconstructions. They also want to build large foundation models for wireless signals, like the foundation models GPT, Claude, and Gemini for language and vision, which could open new applications.
This work is supported, in part, by the National Science Foundation (NSF), the MIT Media Lab, and Amazon.
When I started hiking, big leather boots were the only real option. They were burly, stiff, and difficult to break in, but one pair would last you decades. Technology has mercifully caught up, however. If you head to the trails today, most hikers and backpackers are opting for more lightweight, low-cut options. While an influx of new shoes from brands like Hoka, Merrell, Danner, and Salomon has transformed the footwear industry, that doesn’t mean the hiking boot has had its day. It just depends on what you’re looking to do and when you’re doing it.
Which shoes should you pick to go out for the day? I tested countless pairs of great hiking boots, trail runners, and hiking shoes across a variety of terrain, from forest trails and coastal paths to high alpine terrain. To get a better understanding of the differences between the many options available—and which is right for you—I grilled Ingrid Johnson, a leading footwear product specialist at REI. (For what it’s worth, Johnson’s personal recommendation is the Salomon XA Pro).
Update March 2026: We added links to recent coverage, added the On Running Cloudrock Low, and updated links and prices.
Here’s When You Need Boots
If you’re carrying a heavy pack over rough terrain, or if it’s wet or snowy, you need hiking boots. They tend to be higher at the ankle, with stiff midsoles and protective toe caps, and they are generally made from very durable materials like leather and tough synthetic fabrics like Cordura. Hiking boots prioritize stability, protection, and durability.
Boots generally have thick, deep lugs, tougher soles, stronger toe guards, and sturdier ankle support. They protect you from rock impact, uneven ground, moisture, and often colder conditions. The high-cut designs also offer more ankle support, something I found reassuring when coming back from a recent injury.
But don’t think that hiking boot brands are stuck in the dark ages. Borrowing lightweight features and materials from trail running, brands are able to offer technical boots with cushioning, grip, and stability. They’re still heavy, but featherweight compared to a traditional leather boot. Hoka’s Kaha 3 GTX ($240) is one of the best boots available, blending soft nubuck leather, a Vibram Megagrip sole, and bags of cushioning. Here are a few other picks:
Perennially popular for good reason, these Salomons boast superb levels of comfort and support without the bulk typically associated with traditional walking boots. They feel like ski boots, but that’s not a criticism; the height and support are most welcome when walking all day carrying a full pack.
Despite being declared the third-hottest year on record, 2025 was a relatively quiet year for climate disasters in the US. No major hurricanes made landfall, while the total number of acres burned in wildfires last year—a way of measuring the intensity of wildfire season—fell below the 10-year average.
But starting this week, the West is experiencing what looks to be a record-breaking heat wave, while forecasting models predict that a strong El Niño event is likely to emerge later this year. These two unrelated phenomena could set the stage for a long stretch of unpredictable and extreme weather reaching into next year, compounding the effects of a climate that’s getting hotter and hotter thanks to human activity.
First, there’s the heat. Beginning this week and heading into next, a massive ridge of high-pressure air will bring record-breaking temperatures to the American West. The National Weather Service predicts that temperature records across multiple states are set to be broken in dozens of locations, stretching as far east as Missouri and Tennessee. The NWS has issued heat warnings for parts of California, Arizona, and Nevada, as well as fire warnings for parts of Wyoming, Nebraska, South Dakota, and Colorado.
“This will be the single strongest ridge we’ve observed outside of summer in any month,” says Daniel Swain, a climate scientist at the University of California Agriculture and Natural Resources.
The other remarkable thing about this heat wave, Swain says, is just how long it’s going to last. “This is not a day or two of extreme heat,” he says. “We’ve already in some of these places been seeing record highs every day for a week, and we expect to see them every day for at least another seven to 10 days.” The latter end of March will be much more intense, with temperatures in some places breaking April and May records. “There aren’t that many weather patterns that can result in an 85- or 90-degree temperature in San Francisco, Salt Lake City, and Denver in the same week.”
This late winter heat wave is adding on to an already warm winter in the West—with big implications for the summer. A month ago, snowpack levels across multiple states were at record lows thanks to warmer-than-average temperatures. According to data provided by the Department of Agriculture, snowpack levels were still sitting below 50 percent of average across many Western states. Snowpack is a critical natural reservoir for rivers in the West; between 60 and 70 percent of the region’s water supply in many areas comes from melting snow. Low snowpack is a bad sign for already-stressed rivers like the Colorado, which supplies water for 40 million people in seven states.
The ongoing heat wave, Swain says, will more than likely make conditions even worse. “April 1st is typically the point at which snowpack would be, at least historically, at its peak,” he says. Even if temperatures cool off between now and summer, these low snowpack levels are also a worrisome sign for the upcoming fire season. Snow droughts like the one the West is experiencing can dry out soil, kill trees, and lessen stream flow: ideal conditions for a wildfire to grow. Meanwhile, the water supply in the Colorado River could drop even lower. States that rely on the river are already facing a political crisis as they attempt to renegotiate water rights; a drought would only up the ante.
Then there’s El Niño. Last week, the National Weather Service announced that there was more than a 60 percent chance of an El Niño event emerging in August or September. Various weather models suggest that this El Niño could be particularly strong. While we likely won’t know for sure until summer, “the fact that [all the models] are moving upwards is worth watching,” says Zeke Hausfather, a research scientist at Berkeley Earth.
The UK government recently unveiled its UK fusion strategy 2026, which includes £125m of funding to develop the artificial intelligence (AI) growth zone at Culham, Oxfordshire. This includes a £45m investment in “Sunrise”, the new fusion-dedicated supercomputer.
One area in which Sunrise will be used is accelerating simulation, surrogates and design, where AI could simplify simulations or learn the behaviour of complex systems such as plasmas to speed up simulations that previously took weeks or months to run.
It will also be used for data management, making the UK Atomic Energy Authority’s (UKAEA) fusion research and experimental data consistent, accessible and electronically readable. In addition, Sunrise offers the UKAEA – an executive non-departmental public body, sponsored by the Department for Energy Security and Net Zero – the ability to enhance experimental operations and control in real-time diagnostics, where AI can be trained to spot anomalies and flag issues.
Within the government’s nuclear fusion strategy, the role of high-performance computing (HPC) AI acceleration hardware is to prepare fusion data for AI applications, ensuring that researchers from small and medium-sized enterprises (SMEs) and academic institutions can access that data, and supporting greater collaboration and engagement with industry partners.
The 6.76 exaflops Sunrise AI supercomputer involves a collaboration between AMD, DESNZ, the Department for Science, Innovation and Technology (DSIT), Dell Technologies, Intel, UKAEA, the University of Cambridge, and Weka, a data platform provider.
Looking at its headline performance data, Rob Akers, UKAEA’s director for computing programmes, says: “It’s very challenging to define how powerful a piece of hardware like Sunrise is, because it depends on your metric for success.”
Sunrise offers the full spectrum of floating point precisions, from 8-bit right the way up to 64-bit precision, but, as Akers points out, each one of those targets a different part of the problem. “The important thing for us is that we can’t forego 64-bit precision, because that’s what’s going to feed the artificial intelligence algorithms that we’ll be applying when using Sunrise as an engineering tool,” he says.
AI makes it possible to collapse high-fidelity models that need very high bit precision down into what UKAEA calls “surrogate” models, according to Akers, who adds that these surrogates can run on a workstation or a laptop in a tiny fraction of the time it would take the big solvers running on large supercomputers.
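The surrogate workflow is generic enough to sketch with a stand-in solver (an illustration of the idea, not UKAEA's codes): run the expensive model once to generate training samples, fit a cheap approximation, then answer queries from the fit.

```python
import numpy as np

def expensive_solver(x):
    """Stand-in for a high-fidelity simulation that is costly per evaluation."""
    return np.sin(3 * x) * np.exp(-0.5 * x**2)

# Offline, on the big machine: sample the expensive model on a training grid.
xs = np.linspace(-2.0, 2.0, 400)
ys = expensive_solver(xs)

# Fit a cheap surrogate -- here a Chebyshev series; in practice, a neural net.
surrogate = np.polynomial.Chebyshev.fit(xs, ys, deg=20)

# Online, on a laptop: each query is a handful of flops instead of a solve.
x_query = np.linspace(-2.0, 2.0, 1000)
max_err = np.max(np.abs(surrogate(x_query) - expensive_solver(x_query)))
```

The trade is the same one Akers describes: pay the full simulation cost once during training, then run the collapsed model anywhere, accepting a small, quantifiable approximation error.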
“It’s almost like an instrument for discovery,” he adds. “Sunrise is not like a laptop. It’s not just a very powerful laptop – it is a very complex piece of machinery that we’ll be putting to the task of solving a very large set of complex problems.”
One of the interesting numbers that pop up in the specification for Sunrise is the figure for 8-bit precision, especially given that 8-bit computing harks back to the era of the home computer some 50 years ago.
“The interesting thing is that 8-bit precision has become an incredibly powerful part of the computing landscape now because of large language models [LLMs],” says Akers.
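The reason 8-bit arithmetic suffices for LLM inference is that model weights tolerate coarse rounding if each tensor carries a scale factor. A minimal sketch of symmetric int8 quantization (a generic illustration, unrelated to Sunrise's actual software stack):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: int8 values plus one float scale,
    roughly a 4x memory saving over float32 weights."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.normal(size=4096).astype(np.float32)
q, scale = quantize_int8(weights)

# The round-trip error is bounded by half a quantization step.
err = np.max(np.abs(dequantize_int8(q, scale) - weights))
```

Shrinking each weight to a quarter of its original size is what lets a given machine hold, and stream through, models four times larger at the same memory bandwidth.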
Running LLMs is in the UKAEA’s plans. “We are going to be doing work in that space, building very bespoke models that will ingest text document archives that have been collected over many, many decades, and turning that into useful information and knowledge,” he says.
Digital twins
Akers says this information will be put together with the Mega Amp Spherical Tokamak (MAST) experimental data run at Culham. “Working out how to achieve this needs the full spectrum of precision,” he says.
Although 8-bit precision is the domain of the LLMs that need to process tokens as quickly as possible to understand volumes of textual information, Akers says 64-bit precision is the realm of high-fidelity simulation, which needs to achieve a high degree of accuracy. “Because of the way we run models forward in time, we can’t allow them to drift. They need to preserve certain physical quantities to ensure the simulations are meaningful,” he says.
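The drift Akers describes is easy to demonstrate: a time-stepping loop is just repeated accumulation, and rounding error builds up orders of magnitude faster at 32-bit than at 64-bit precision. A toy example (not a fusion code):

```python
import numpy as np

def accumulate(dtype, n=100_000, step=0.01):
    """Add a small increment n times, as a naive time-stepping loop would."""
    total = dtype(0.0)
    inc = dtype(step)
    for _ in range(n):
        total = total + inc
    return float(total)

# The exact answer is 1000.0; watch the 32-bit result drift away from it.
err32 = abs(accumulate(np.float32) - 1000.0)
err64 = abs(accumulate(np.float64) - 1000.0)
```

In a physics simulation, the analogue of the exact answer is a conserved quantity such as total energy — which is why the high-fidelity solvers that will feed Sunrise's AI training pipelines cannot forgo 64-bit arithmetic.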
So, while floating point precision is regarded as a metric for comparisons against other AI machines, for Akers, it is not necessarily the best metric to measure the outright performance of an AI scientific machine. What is needed, he says, is “the ability to simulate very high-fidelity, strongly coupled models”.
This is due to the sheer complexity of a machine that aims to mimic the way the sun generates its power. “In a nuclear fusion power plant, there are lots of different physical mechanisms that couple the plant together – everything from structural forces due to gravity, but also due to electromagnetism. Then there’s the heat flow and radiation flow across the system. Everything’s coupled together,” says Akers.
Historically, UKAEA has not been able to simulate this environment at scale. “What we worry about is the black swans or emergent behaviour that is a result of that coupling,” he adds.
Akers says digital twins running on Sunrise will be able to model these very complex systems, which can then be compared with the results of experiments. “We are able to tune up our ability to step forward in time or step outside where we’ve been before, or indeed to create new pieces of machinery that we’ve never seen before, and take a giant leap where we have confidence in having nailed down the known unknowns into the simulations,” he says.
“Test-based design is expensive, and it’s slow,” Akers adds. The goal is to use Sunrise to reduce the amount of test-based design that UKAEA has to do. “It will allow us to take on a moonshot-like problem, a lot more cost-effectively, to reduce risk and accelerate the time to deliver commercial fusion.”