Tech
Next-generation humanoid robot can do the moonwalk
A humanoid robot independently developed by a KAIST research team boasts world-class running performance, reaching speeds of 12 km/h, along with excellent stability, maintaining balance on rough terrain or even without vision input (“eyes closed”). It can also perform complex, distinctively human movements such as the duckwalk and the moonwalk, drawing attention as a next-generation robot platform for real industrial settings.
Professor Park Hae-won’s research team at the Humanoid Robot Research Center (HuboLab) of KAIST’s Department of Mechanical Engineering has developed the lower-body platform for a next-generation humanoid robot. The humanoid is designed for human-centric environments, targeting a height (165 cm) and weight (75 kg) similar to those of a human.
The new lower-body platform is significant because the research team directly designed and manufactured all of its core components, including the motors, reducers, and motor drivers. By producing with their own technology the key components that determine a humanoid robot’s performance, they achieved technological independence in hardware.
In addition, the research team trained an AI controller in a virtual environment using a self-developed reinforcement learning algorithm and successfully deployed it in the real world by overcoming the sim-to-real gap, securing technological independence in algorithms as well.
Currently, the humanoid can run at up to 3.25 m/s (approximately 12 km/h) on flat ground and can climb steps over 30 cm high (a measure of how high a curb, stair, or obstacle the robot can clear). The team plans to push performance further, targeting a running speed of 4.0 m/s (approximately 14 km/h), ladder climbing, and step-climbing above 40 cm.
To build a complete humanoid with an upper body and AI, Professor Park Hae-won’s team is collaborating with Professor Jae-min Hwangbo’s team (arms) from KAIST’s Department of Mechanical Engineering, Professor Sangbae Kim’s team (hands) from MIT, Professor Hyun Myung’s team (localization and navigation) from KAIST’s Department of Electrical Engineering, and Professor Jae-hwan Lim’s team (vision-based manipulation intelligence) from KAIST’s Kim Jaechul AI Graduate School.
Through this collaboration, they are developing technology that enables the robot to perform complex tasks: carrying heavy objects; operating valves, cranks, and door handles; and walking while manipulating, as when pushing a cart or climbing a ladder. The ultimate goal is versatile physical capability that can meet the complex demands of real industrial sites.

During this process, the research team also developed a single-leg “hopping” robot. This robot demonstrated high-level movements, maintaining balance on one leg and repeatedly hopping, and even exhibited extreme athletic abilities such as a 360-degree somersault.
Because a single-leg robot has no biological reference model to imitate, imitation learning was impossible; instead, the research team achieved significant results by implementing an AI controller through reinforcement learning that optimizes the center-of-mass velocity while reducing landing impact.
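The reward described above can be sketched in code. The kernel shape, weights, and function name below are illustrative assumptions in the spirit of the paper's centroidal velocity rewards, not the authors' actual formulation:

```python
import numpy as np

def flip_reward(com_vel, target_vel, landing_force, sigma=1.0, w_impact=0.01):
    """Illustrative RL reward: track a desired center-of-mass velocity
    while penalizing landing impact. Weights are assumed, not the paper's."""
    err = np.linalg.norm(np.asarray(com_vel) - np.asarray(target_vel))
    r_track = np.exp(-(err / sigma) ** 2)           # 1.0 at perfect tracking
    r_impact = -w_impact * max(landing_force, 0.0)  # harder landings cost more
    return float(r_track + r_impact)

perfect = flip_reward([0.0, 0.0, 3.0], [0.0, 0.0, 3.0], landing_force=0.0)
hard_landing = flip_reward([0.0, 0.0, 3.0], [0.0, 0.0, 3.0], landing_force=50.0)
off_target = flip_reward([0.0, 0.0, 0.0], [0.0, 0.0, 3.0], landing_force=0.0)
```

A policy maximizing such a reward is pushed toward the target takeoff velocity while learning to absorb impact on landing.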
Professor Park Hae-won stated, “This achievement is an important milestone: by securing core components and AI controllers with our own technology, we have achieved independence in both the hardware and software of humanoid research.
“We will further develop it into a complete humanoid, including an upper body to solve the complex demands of actual industrial sites and furthermore, foster it as a next-generation robot that can work alongside humans.”

JongHun Choe, a Ph.D. candidate in mechanical engineering and first author, will present the hardware development results at Humanoids 2025, an international conference dedicated to humanoid robots, held on October 1.
Additionally, Ph.D. candidates Dongyun Kang, Gijeong Kim, and JongHun Choe from Mechanical Engineering will present the AI algorithm achievements as co-first authors at CoRL 2025, the top conference in robot intelligence, held on September 29th.
The presentation papers are available on the arXiv preprint server.
More information:
Dongyun Kang et al, Learning Impact-Rich Rotational Maneuvers via Centroidal Velocity Rewards and Sim-to-Real Techniques: A One-Leg Hopper Flip Case Study, arXiv (2025). DOI: 10.48550/arxiv.2505.12222
JongHun Choe et al, Design of a 3-DOF Hopping Robot with an Optimized Gearbox: An Intermediate Platform Toward Bipedal Robots, arXiv (2025). DOI: 10.48550/arxiv.2505.12231
Citation:
Next-generation humanoid robot can do the moonwalk (2025, September 24)
retrieved 24 September 2025
from https://techxplore.com/news/2025-09-generation-humanoid-robot-moonwalk.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
New algorithm enhances Doppler resolution of unmanned vehicle radars
A research team has developed an extrapolation-based Doppler resolution enhancement algorithm for frequency modulated continuous wave radars. The algorithm improves system performance, offering an advancement that is superior to existing ultra-high-resolution technologies.
The findings are published in the Journal of Electrical Engineering & Technology. The team was led by Sang-dong Kim and Bong-seok Kim, affiliated with the DGIST Division of Mobility Technology, in collaboration with a team led by Professor Youngdoo Choi, affiliated with the Republic of Korea Naval Academy (ROKNA).
Improving radar accuracy without extra hardware
This research introduces a technology that improves radar detection accuracy without the need for additional complex computations or hardware. The technology is expected to contribute to enhancing radar system performance on various intelligent unmanned platforms such as unmanned aerial vehicles (UAVs), unmanned ships, and autonomous vehicles.
Conventional radar systems analyze the Doppler effect to determine the velocity of a target, but the fast Fourier transform (FFT)-based approach has limitations regarding resolution (i.e., the accuracy of velocity discrimination). To address this, the joint DGIST–ROKNA research team applied a signal extrapolation technique and has proposed a new algorithm that enhances Doppler resolution without extending observation time.
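The idea of extending observed samples before the FFT can be illustrated with a minimal sketch. Here a least-squares linear predictor extrapolates the slow-time samples of two targets whose Doppler frequencies are closer than the short FFT's resolution; the predictor order, lengths, and frequencies are illustrative assumptions, and the paper's exact algorithm may differ:

```python
import numpy as np

def lp_extrapolate(x, order, n_total):
    """Extend complex slow-time samples x to length n_total using a
    least-squares linear predictor of the given order."""
    n = len(x)
    # Each row holds the `order` samples preceding x[m].
    A = np.array([[x[m - 1 - k] for k in range(order)] for m in range(order, n)])
    b = x[order:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    y = list(x)
    while len(y) < n_total:  # run the fitted recurrence forward
        y.append(sum(coef[k] * y[-1 - k] for k in range(order)))
    return np.array(y)

n_short, n_long = 64, 256
t = np.arange(n_short)
# Two targets at Doppler bins 26 and 28 of a 256-point grid: their spacing
# (2/256) is below the 64-point FFT resolution (1/64), so the short FFT
# merges them into one peak.
x = (np.exp(2j * np.pi * 26 / n_long * t)
     + np.exp(2j * np.pi * 28 / n_long * t))
x_ext = lp_extrapolate(x, order=2, n_total=n_long)
peaks = set(np.argsort(np.abs(np.fft.fft(x_ext)))[-2:])  # two separated peaks
```

Because the extrapolation reuses the recurrence already fitted to the data, the longer FFT resolves both targets without lengthening the actual observation time.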
Performance gains and real-world applications
The proposed method successfully reduces the root mean square error of velocity estimation by up to 33% and decreases the target miss rate by up to 68%, representing a substantial improvement over the conventional approach. Notably, the proposed method maintains the same computational complexity level as the conventional FFT method, thereby simultaneously achieving fast processing speed and high efficiency.
This technology can effectively solve the problem of signal overlap between targets moving at similar velocities, particularly when UAVs or radar systems detect multiple objects simultaneously. It can therefore greatly enhance the ability to distinguish closely spaced targets and improve detection accuracy, marking a new milestone in the advancement of high-resolution target detection technology.
Additionally, the technology is highly regarded for its industrial applicability because it requires no additional hardware resources and features a simple computational structure that enables real-time implementation.
Sang-dong Kim, principal researcher at the Division of Mobility Technology (who also serves in the interdisciplinary engineering major), said, “This study demonstrates an improvement in both the efficiency and precision of radar signal processing, enabling more accurate target detection without the need for additional equipment. It is expected to evolve into a key technology for defense, autonomous driving, and unmanned systems.”
More information:
Youngdoo Choi et al, Doppler Resolution Enhancement Algorithm Based on Extrapolation for FMCW Radar, Journal of Electrical Engineering & Technology (2025). DOI: 10.1007/s42835-025-02453-6
Citation:
New algorithm enhances Doppler resolution of unmanned vehicle radars (2025, November 11)
retrieved 11 November 2025
from https://techxplore.com/news/2025-11-algorithm-doppler-resolution-unmanned-vehicle.html
Spray 3D concrete printing simulator boosts strength and design
Concrete 3D printing reduces both time and cost by eliminating traditional formwork, the temporary mold for casting. Yet most of today’s systems rely on extrusion-based methods, which deposit material very close to a nozzle layer by layer. This makes it impossible to print around reinforcement bars (rebars) without risk of collision, limiting both design flexibility and structural integrity of builds.
Kenji Shimada and researchers in his Computational Engineering and Robotics Laboratory (CERLAB) at Carnegie Mellon University are breaking through that limitation with a new simulation tool for spray-based concrete 3D printing.
“Spray-based concrete 3D printing is a new process with complicated physical phenomena,” said Shimada, a professor of mechanical engineering. “In this method, a modified shotcrete mixture is sprayed from a nozzle to build up on a surface, even around rebar.”
The ability to print freely around reinforcement is especially important in places like Japan and California, where earthquakes are an imminent threat and structural strength is critical.
“To make this technology viable, we must be able to predict exactly how the concrete will spray and dry into the final shape,” Shimada explained. “That’s why we developed a simulator for concrete spray 3D printing.”
The new simulator can model the viscoelastic behaviors of shotcrete mixtures, including drip, particle rebound, spread, and solidification time. This way, contractors can assess multiple printing paths based on a CAD design with the simulator to evaluate whether spray 3D printing is a feasible fabrication technique for their structure.
To validate their model, the team traveled to Tokyo, Japan, where Shimizu Corporation already operates spray 3D printing robots. The first test focused on the simulator’s ability to predict shape from the speed of the nozzle’s movement: the simulator predicted the height of the sprayed concrete with 90.75% accuracy. The second test showed that the simulator could predict printing over rebar with 92.3% and 97.9% accuracy for width and thickness, respectively.
According to Soji Yamakawa, a research scientist in Shimada’s lab and the lead author of the team’s research paper published in IEEE Robotics and Automation Letters, a simulation of this kind would typically take hours, if not days, to run.
“By making wild assumptions, we were able to successfully simplify a super complex physics simulation into a combination of efficient algorithms and data structures and still achieved highly realistic output,” Yamakawa said.
Future work will aim to increase accuracy by identifying environmental parameters like humidity, optimizing performance, and adding plastering simulation to create smoother finished products.
“There are still so many applications and technologies that we can develop with robotics,” said Kyshalee Vazquez-Santiago, a co-author of the paper and a mechanical engineering Ph.D. candidate leading the Mobile Manipulators research group within CERLAB.
“Even in concrete 3D printing, we are working with an entirely new type of application and approach that has so many advantages but leaves so much room for further development.”
More information:
Soji Yamakawa et al, Concrete Spray 3D Printing Simulator for Nozzle Trajectory Planning, IEEE Robotics and Automation Letters (2025). DOI: 10.1109/lra.2025.3615038
Citation:
Spray 3D concrete printing simulator boosts strength and design (2025, November 11)
retrieved 11 November 2025
from https://techxplore.com/news/2025-11-spray-3d-concrete-simulator-boosts.html
Mind readers: How large language models encode theory-of-mind
Imagine you’re watching a movie, in which a character puts a chocolate bar in a box, closes the box and leaves the room. Another person, also in the room, moves the bar from a box to a desk drawer. You, as an observer, know that the treat is now in the drawer, and you also know that when the first person returns, they will look for the treat in the box because they don’t know it has been moved.
You know that because as a human, you have the cognitive capacity to infer and reason about the minds of other people—in this case, the person’s lack of awareness regarding where the chocolate is. In scientific terms, this ability is described as Theory of Mind (ToM). This “mind-reading” ability allows us to predict and explain the behavior of others by considering their mental states.
We develop this capacity at about the age of four, and our brains are really good at it.
“For a human brain, it’s a very easy task,” says Zhaozhuo Xu, Assistant Professor of Computer Science at the School of Engineering. The brain handles it in barely a second.
“And while doing so, our brains involve only a small subset of neurons, so it’s very energy efficient,” explains Denghui Zhang, Assistant Professor in Information Systems and Analytics at the School of Business.
How LLMs differ from human reasoning
Large language models or LLMs, which the researchers study, work differently. Although they were inspired by some concepts from neuroscience and cognitive science, they aren’t exact mimics of the human brain. LLMs were built on artificial neural networks that loosely resemble the organization of biological neurons, but the models learn from patterns in massive amounts of text and operate using mathematical functions.
That gives LLMs a definitive advantage over humans in processing loads of information rapidly. But when it comes to efficiency, particularly with simple things, LLMs lose to humans. Regardless of the complexity of the task, they must activate most of their neural network to produce the answer. So whether you’re asking an LLM to tell you what time it is or summarize “Moby Dick,” a whale of a novel, the LLM will engage its entire network, which is resource-consuming and inefficient.
“When we, humans, evaluate a new task, we activate a very small part of our brain, but LLMs must activate pretty much all of their network to figure out something new even if it’s fairly basic,” says Zhang. “LLMs must do all the computations and then select the one thing you need. So you do a lot of redundant computations, because you compute a lot of things you don’t need. It’s very inefficient.”
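The point in the quote above can be made concrete with a toy calculation: a dense feed-forward pass performs the same number of multiply-accumulate operations for every input, trivial or complex. The layer widths below are made-up illustrative values, not measurements of any real LLM:

```python
def dense_macs(layer_widths):
    """Multiply-accumulate count of one forward pass through fully
    connected layers: every weight participates on every input."""
    return sum(a * b for a, b in zip(layer_widths[:-1], layer_widths[1:]))

widths = [4096, 11008, 4096]              # hypothetical hidden sizes
cost_trivial_prompt = dense_macs(widths)  # e.g., "what time is it?"
cost_complex_prompt = dense_macs(widths)  # e.g., summarizing a novel
```

The two costs are identical because dense computation is input-independent, which is exactly the redundancy Zhang describes.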
New research into LLMs’ social reasoning
Working together, Zhang and Xu formed a multidisciplinary collaboration to better understand how LLMs operate and how their efficiency in social reasoning can be improved.
They found that LLMs use a small, specialized set of internal connections to handle social reasoning. They also found that LLMs’ social reasoning abilities depend strongly on how the model represents word positions, especially through a method called rotary positional encoding (RoPE). These special connections influence how the model pays attention to different words and ideas, effectively guiding where its “focus” goes during reasoning about people’s thoughts.
“In simple terms, our results suggest that LLMs use built-in patterns for tracking positions and relationships between words to form internal ‘beliefs’ and make social inferences,” Zhang says. The two collaborators outlined their findings in the study titled “How large language models encode theory-of-mind: a study on sparse parameter patterns,” published in npj Artificial Intelligence.
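The rotary positional encoding (RoPE) mentioned above can be sketched in a few lines. This is a minimal version of the standard RoPE formulation, not the paper's code: pairs of vector dimensions are rotated by position-dependent angles, so attention scores depend only on the relative offset between tokens:

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotate consecutive dimension pairs of x by angles pos * theta_i
    (standard RoPE sketch; frequencies follow the usual geometric scheme)."""
    d = len(x)
    theta = base ** (-np.arange(0, d, 2) / d)   # one frequency per pair
    ang = pos * theta
    cos, sin = np.cos(ang), np.sin(ang)
    out = np.empty_like(x)
    out[0::2] = x[0::2] * cos - x[1::2] * sin
    out[1::2] = x[0::2] * sin + x[1::2] * cos
    return out

rng = np.random.default_rng(0)
q, k = rng.standard_normal(8), rng.standard_normal(8)
# Query at position 5 vs. key at 7, and query at 2 vs. key at 4:
# both pairs have relative offset -2, so the scores match.
score_a = rope(q, 5) @ rope(k, 7)
score_b = rope(q, 2) @ rope(k, 4)
```

This relative-position property is what lets the model's attention heads track relationships between words regardless of where in the sequence they occur.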
Looking ahead to more efficient AI
Now that researchers better understand how LLMs form their “beliefs,” they think it may be possible to make the models more efficient.
“We all know that AI is energy-expensive, so if we want to make it scalable, we have to change how it operates,” says Xu. “Our human brain is very energy efficient, so we hope this research brings us back to thinking about how we can make LLMs work more like the human brain, so that they activate only a subset of parameters in charge of a specific task. That’s an important argument we want to convey.”
More information:
Yuheng Wu et al, How large language models encode theory-of-mind: a study on sparse parameter patterns, npj Artificial Intelligence (2025). DOI: 10.1038/s44387-025-00031-9
Citation:
Mind readers: How large language models encode theory-of-mind (2025, November 11)
retrieved 11 November 2025
from https://techxplore.com/news/2025-11-mind-readers-large-language-encode.html