Tech
Robots can now learn to use tools—just by watching us
Despite decades of progress, most robots are still programmed for specific, repetitive tasks. They struggle with the unexpected and can’t adapt to new situations without painstaking reprogramming. But what if they could learn to use tools as naturally as a child does by watching videos?
I still remember the first time I saw one of our lab’s robots flip an egg in a frying pan. It wasn’t pre-programmed. No one was controlling it with a joystick. The robot had simply watched a video of a human doing it, and then did it itself. For someone who has spent years thinking about how to make robots more adaptable, that moment was thrilling.
Our team at the University of Illinois Urbana-Champaign, together with collaborators at Columbia University and UT Austin, has been exploring that very question. Could robots watch someone hammer a nail or scoop a meatball, and then figure out how to do it themselves, without costly sensors, motion capture suits, or hours of remote teleoperation?
That idea led us to create a new framework we call “Tool-as-Interface,” described in a paper currently available on the arXiv preprint server. The goal is straightforward: teach robots complex, dynamic tool-use skills using nothing more than ordinary videos of people doing everyday tasks. All it takes is two camera views of the action, something you could capture with a couple of smartphones.
Here’s how it works. The process begins with those two video frames, which a vision model called MASt3R uses to reconstruct a three-dimensional model of the scene. Then, using a rendering method known as 3D Gaussian splatting—think of it as digitally painting a 3D picture of the scene—we generate additional viewpoints so the robot can “see” the task from multiple angles.
But the real magic happens when we digitally remove the human from the scene. With the help of “Grounded-SAM,” our system isolates just the tool and its interaction with the environment. It is like telling the robot, “Ignore the human, and only pay attention to what the tool is doing.”
This “tool-centric” perspective is the secret ingredient. It means the robot isn’t trying to copy human hand motions, but is instead learning the exact trajectory and orientation of the tool itself. This allows the skill to transfer between different robots, regardless of how their arms or cameras are configured.
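To make that concrete, here is a minimal sketch, not our released code, of the retargeting idea at the heart of the framework: once the perception pipeline has recovered the tool’s 6D pose over time, commanding a robot amounts to composing those poses with a single robot-specific grasp transform. The trajectory values and the `T_gripper_tool` calibration below are illustrative placeholders, not numbers from our experiments.

```python
import numpy as np

def rigid_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 pose matrix."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Stand-in tool trajectory "seen" in the video: the tool slides 10 cm along x
# while staying level. In the real system these poses would come from the
# reconstructed scene with the human masked out.
tool_poses = [rigid_transform(np.eye(3), np.array([0.01 * t, 0.0, 0.20]))
              for t in range(11)]

# Robot-specific calibration (assumed, for illustration): where the tool sits
# relative to the gripper. A different robot, or a different grip on the tool,
# changes only this matrix.
T_gripper_tool = rigid_transform(np.eye(3), np.array([0.0, 0.0, 0.08]))

# Retargeting: place the gripper so the attached tool reproduces the
# demonstrated trajectory, T_world_gripper = T_world_tool @ inv(T_gripper_tool).
gripper_targets = [T_wt @ np.linalg.inv(T_gripper_tool) for T_wt in tool_poses]

for T in gripper_targets[:3]:
    print(np.round(T[:3, 3], 3))  # commanded gripper positions, in meters
```

Swap in a different robot, a different camera setup, or a different grip on the tool, and only that one calibration matrix changes; the demonstrated tool trajectory stays the same.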
We tested this on five tasks: hammering a nail, scooping a meatball, flipping food in a pan, balancing a wine bottle, and even kicking a soccer ball into a goal. These are not simple pick-and-place jobs; they require speed, precision, and adaptability. Compared to traditional teleoperation methods, Tool-as-Interface achieved 71% higher success rates and gathered training data 77% faster.
One of my favorite tests involved a robot scooping meatballs while a human tossed in more mid-task. The robot didn’t hesitate; it just adapted. In another, it flipped a loose egg in a pan, a notoriously tricky move for teleoperated robots.
“Our approach was inspired by the way children learn, which is by watching adults,” said my colleague and lead author Haonan Chen. “They don’t need to operate the same tool as the person they’re watching; they can practice with something similar. We wanted to know if we could mimic that ability in robots.”
These results point toward something bigger than just better lab demos. By removing the need for expert operators or specialized hardware, we can imagine robots learning from smartphone videos, YouTube clips, or even crowdsourced footage.
“Despite a lot of hype around robots, they are still limited in where they can reliably operate and are generally much worse than humans at most tasks,” said Professor Katie Driggs-Campbell, who leads our lab.
“We’re interested in designing frameworks and algorithms that will enable robots to easily learn from people with minimal engineering effort.”
Of course, there are still challenges. Right now, the system assumes the tool is rigidly fixed to the robot’s gripper, which isn’t always true in real life. It also sometimes struggles with 6D pose estimation errors, and synthesized camera views can lose realism if the angle shift is too extreme.
In the future, we want to make the perception system more robust, so that a robot could, for example, watch someone use one kind of pen and then apply that skill to pens of different shapes and sizes.
Even with these limitations, I think we’re seeing a profound shift in how robots can learn, away from painstaking programming and toward natural observation. Billions of cameras are already recording how humans use tools. With the right algorithms, those videos could become training material for the next generation of adaptable, helpful robots.
This research, which was honored with the Best Paper Award at the ICRA 2025 Workshop on Foundation Models and Neural-Symbolic (NeSy) AI for Robotics, is a critical step toward unlocking that potential, transforming the vast ocean of human-recorded video into a global training library for robots that can learn and adapt as naturally as a child does.
This story is part of Science X Dialog, where researchers can report findings from their published research articles.
More information:
Haonan Chen et al, Tool-as-Interface: Learning Robot Policies from Human Tool Usage through Imitation Learning, arXiv (2025). DOI: 10.48550/arxiv.2504.04612
Cheng Zhu, the second author of “Tool-as-Interface: Learning Robot Policies from Human Tool Usage through Imitation Learning,” holds a BS in Computer Engineering from UIUC and an MSE in Robotics from UPenn.
Tech
Cyber body ISC2 signs on as UK software security ambassador
ISC2, the non-profit cyber professional membership association, has joined the UK government’s recently launched Software Security Ambassador Scheme as an expert adviser.
Set up at the beginning of the year by the National Cyber Security Centre (NCSC) and the Department for Science, Innovation and Technology (DSIT), the scheme forms part of a wider £210m commitment by Westminster to remodel public sector cyber resilience from the ground up, an acknowledgment that previous approaches to the issue have made little headway and that previously set resilience targets are unachievable.
It is designed to incentivise organisations to pay more attention to the security of software products, and supports the wider adoption of the Software Security Code of Practice, a set of voluntary principles defining what secure software looks like.
ISC2 joins a number of tech suppliers, including Cisco, Palo Alto Networks and Sage; consultancies and service providers including Accenture and NCC Group; and financial services firms including Lloyds Banking Group and Santander. Fellow cyber association ISACA is also involved.
“Promoting secure software practices that strengthen the resilience of systems underpinning the economy, public services and national infrastructure is central to ISC2’s mission,” said ISC2’s executive vice-president for advocacy and strategic engagement, Tara Wisniewski.
“The code moves software security beyond narrow compliance and elevates it to a board-level resilience priority. As supply chain attacks continue to grow in scale and impact, a shared baseline is essential and through our global community and expertise, ISC2 is committed to helping professionals build the skills needed to put secure-by-design principles into practice,” she said.
Software vulns a huge barrier to resilience
A study of wider supply chain risks conducted last year by ISC2 found that a little over half of organisations worldwide reported that vulnerabilities in their software suppliers’ products represented the most disruptive cyber security threat to their overall supply chain.
And the World Economic Forum’s (WEF’s) Global Cybersecurity Outlook report, published on 12 January, revealed that third-party and supply chain vulnerabilities were seen as a huge barrier to building cyber resilience by C-suite executives.
A total of 65% of respondents to the WEF’s annual poll flagged such flaws as the greatest challenge their organisation faced on its pathway to resilience, compared to 54% at the beginning of 2025. This outpaced factors such as the evolving threat landscape and emerging AI technology, use of legacy IT systems, regulatory compliance and governance, and cyber skills shortages.
Pressed on the top supply chain cyber risks, respondents were most concerned about their ability to assure the integrity of software and other IT services, ahead of a lack of visibility into their suppliers’ supply chains and overdependence on critical third-party suppliers.
The UK’s Code of Practice seeks to answer this challenge by establishing expectations and best practices for tech providers and any other organisations that either develop, sell or buy software products. It covers aspects such as secure design and development, the security of build environments, deployment and ongoing upkeep, and transparent communication with customers and users.
As part of its role as an ambassador, ISC2 will assist in developing and improving the Code of Practice, while championing it by embedding its guiding principles into its own cyber education and professional development services – the organisation boasts 10,000 UK members and associates.
It will also help to drive adoption of the Code of Practice through various awareness campaigns, incorporating it into its certifications, training and guidance, engaging with industry stakeholders and members to encourage implementation, and incorporating its provisions into its work with its own commercial suppliers.
Tech
Asus Made a Split Keyboard for Gamers—and Spared No Expense
The wheel on the left side has options to adjust actuation distance, rapid-trigger sensitivity, and RGB brightness. You can also adjust volume and media playback, and turn it into a scroll wheel. The LED matrix below it is designed to display adjustments to actuation distance but feels a bit awkward: Each 0.1 mm of adjustment fills its own bar, and it only uses the bottom nine bars, so the screen will roll over four times when adjusting (the top three bars, with dots next to them, illuminate to show how many times the screen has rolled over during the adjustment). The saving grace of this is that, when adjusting the actuation distance, you can press down any switch to see a visualization of how far you’re pressing it, then tweak the actuation distance to match.
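If that sounds convoluted, the sketch below models the arithmetic as I understand it, assuming each bar stands for 0.1 mm and nine bars fill before the display rolls over; the sample distances and the function name are mine, not from Asus documentation.

```python
# Hypothetical model of the Falcata's LED readout as described above:
# each 0.1 mm of actuation fills one bar, nine bars per "screen", and the
# dotted bars up top count how many times the screen has rolled over.
def led_readout(actuation_mm: float) -> tuple[int, int]:
    """Return (rollovers, bars_lit) for a given actuation distance."""
    steps = round(actuation_mm / 0.1)       # one bar per 0.1 mm
    rollovers, bars = divmod(steps - 1, 9)  # nine bars before rolling over
    return rollovers, bars + 1

for distance in (0.3, 1.0, 2.0, 3.5):
    screens, bars = led_readout(distance)
    print(f"{distance:.1f} mm -> roll-overs: {screens}, bars lit: {bars}")
```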
Alongside all of this, the Falcata (and, by extension, the Falchion) now has an aftermarket switch option: TTC Gold magnetic switches. While this is still only two switches, it’s an improvement over the singular switch option of most Hall effect keyboards.
Split Apart
The internal assembly of this keyboard is straightforward yet interesting. Instead of a standard tray mount, where the PCB and plate bolt directly into the bottom half of the shell, the Falcata is more comparable to a bottom mount: the PCB screws into the plate from underneath, and the plate is screwed onto the bottom half of the case along its edges. While the difference between the two mounting methods is minimal, it does improve the typing experience, eliminating the “dead zones” caused by a post in the middle of the keyboard and slightly isolating typing from the case, which produces fewer vibrations.
The top and bottom halves can easily be split apart by removing the screws on the plate (no breakable plastic clips here!), but on the left half, four cables connect the two halves of the keyboard, all of which need to be disconnected before the sections can be fully separated. Once this is done, the internal silicone sound dampening can easily be removed. The foam dampening, however, was adhered strongly enough that removing it left chunks of foam stuck to the PCB, making it impossible to re-adhere without new adhesive. This wasn’t a huge issue, since the foam could simply be placed back into the keyboard, but it is still frustrating to see when most manufacturers have figured this out.
Tech
These Sub-$300 Hearing Aids From Lizn Have a Painful Fit
Don’t call them hearing aids. They’re hearpieces, intended as a blurring of the lines between hearing aids and earbuds—or “earpieces” in the parlance of Lizn, a Danish operation.
The company was founded in 2015, and it haltingly developed its launch product through the 2010s, only to scrap it in 2020 when, according to Lizn’s history page, the hearing aid/earbud combo idea didn’t work out. But the company is seemingly nothing if not persistent, and four years later, a new Lizn was born. The revamped Hearpieces finally made it to US shores in the last couple of weeks.
Half Domes
Lizn Hearpieces are the company’s only product, and their inspiration from the pro audio world is instantly palpable. Out of the box, these look nothing like any other hearing aids on the market, with a bulbous design that, while self-contained within the ear, is far from unobtrusive—particularly if you opt for the graphite or ruby red color scheme. (I received the relatively innocuous sand-hued devices.)
At 4.58 grams per bud, they’re as heavy as they look; within the in-the-ear space, few other models weigh more, and the Kingwell Melodia and Apple AirPods Pro 3 are among those that do. The units come with four sets of ear tips in different sizes; the default mediums worked well for me.
The bigger issue isn’t how the tip of the device fits into your ear, though; it’s how the rest of the unit does. Lizn Hearpieces need to be delicately twisted into the ear canal so that one edge of the unit fits snugly behind the tragus, filling the concha. My ears may be tighter than others, but I found this no easy feat, as the device is so large that I really had to work at it to wedge it into place. As you might have guessed, over time, this became rather painful, especially because the unit has no hardware controls. All functions are performed by various combinations of taps on the outside of either of the Hearpieces, and the more I smacked the side of my head, the more uncomfortable things got.