This Robot Only Needs a Single AI Model to Master Humanlike Movements



While there is a lot of work to do, Tedrake says all of the evidence so far suggests that the approaches used for LLMs also work for robots. “I think it’s changing everything,” he says.

Gauging progress in robotics has become more challenging of late, of course, with video clips showing commercial humanoids performing complex chores, like loading refrigerators or taking out the trash, with seeming ease. YouTube clips can be deceptive, though, and humanoid robots tend to be either teleoperated, carefully programmed in advance, or trained to do a single task in very controlled conditions.

The new Atlas work is a big sign that robotics is starting to experience the kind of advances that, in the field of generative AI, eventually led to the general language models behind ChatGPT. Eventually, such progress could give us robots that can operate in a wide range of messy environments with ease and can rapidly learn new skills—from welding pipes to making espressos—without extensive retraining.

“It’s definitely a step forward,” says Ken Goldberg, a roboticist at UC Berkeley who receives some funding from TRI but was not involved with the Atlas work. “The coordination of legs and arms is a big deal.”

Goldberg says, however, that the idea of emergent robot behavior should be treated carefully. Just as the surprising abilities of large language models can sometimes be traced to examples included in their training data, he says, robots may demonstrate skills that seem more novel than they really are. He adds that it is helpful to know how often a robot succeeds and in what ways it fails during experiments. TRI has previously been transparent about its work on LBMs and may well release more data on the new model.

Whether simply scaling up the data used to train robot models will unlock ever-more emergent behavior remains an open question. At a debate held in May at the International Conference on Robotics and Automation in Atlanta, Goldberg and others cautioned that engineering methods will also play an important role going forward.

Tedrake, for one, is convinced that robotics is nearing an inflection point—one that will enable more real-world use of humanoids and other robots. “I think we need to put these robots out in the world and start doing real work,” he says.

What do you think of Atlas’ new skills? And do you think we are headed for a ChatGPT-style breakthrough in robotics? Let me know your thoughts at ailab@wired.com.


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.




Lenovo’s New Laptop Concept Can Swivel the Screen From Landscape to Portrait



Lenovo isn’t shy about trying new things. Last year, the PC maker teased a concept laptop with a transparent screen. Earlier this year, the ThinkBook Flip concept employed a flexible OLED display that folded over the top of the laptop lid, ready to flip up whenever you needed the extra screen space. At CES 2025, we saw a ThinkBook with a rollable OLED screen that expanded upward automatically at the touch of a button—this one is a real product you can actually buy.

Get ready for another wacky concept. At IFA 2025, the tech exhibition in Berlin, Lenovo unveiled its latest idea: the Lenovo ThinkBook VertiFlex. This is a laptop with a screen that can be manually swiveled from a standard horizontal orientation to a vertical one.

Portrait Mode

By default, the ThinkBook VertiFlex Concept looks like a normal 14-inch laptop. Look closely at the screen’s edge, however, and you’ll see a second layer jutting out; that’s the actual screen. Grab the right corner edge of the screen and push it upward, and the display will smoothly swivel up into a vertical orientation.

The back panel the screen is mounted on has a felt backing to keep everything smooth and scratch-free, and you can even prop a phone up here in this orientation. There’s a mechanism inside that manages the motion and keeps it operating smoothly. Despite this, the PC is still fairly slim at 17.9 mm, and it weighs roughly 3 pounds. (The 14-inch MacBook Pro is around 15 mm thick and weighs 3.4 pounds.)

I use a dual-screen setup with one vertical monitor next to my main ultrawide monitor at home. Having a vertical screen is a game-changer, as it’s perfect for applications that use more vertical space. Email is a great example; so are apps like Slack, anything to do with PDFs, and even most word processing software. But I’ve yet to change my screen orientation in the middle of a workflow.




Get Our Favorite Smart Lock for Just $164 Right Now



Is your current smart lock frustrating you endlessly, like mine is? The Yale Approach Smart Lock (8/10, WIRED Review) is currently marked down to just $164 on Amazon, a healthy 32% discount on our editors’ top pick for smart locks. This sale comes at a perfect time, because I was just complaining about the fingerprint reader on mine no longer working.

Photograph: Nena Farrell; courtesy of Yale

The Yale Approach uses part of your existing deadbolt, which is great news for renters who don’t want to make major changes. You’ll also get to use your existing keys to unlock the deadbolt, which can save you a trip to the locksmith. There’s also a Wi-Fi bridge that needs a nearby outlet to enable remote access, but that’s not uncommon for smart locks. Our reviewer, Nena Farrell, even said it “works perfectly,” which is great news, because I have to unplug mine and plug it back in at least once a week.

Approach isn’t just a name: this smart deadbolt’s standout feature is auto-unlock. By setting up your location in the Yale Access app, you can set the bolt to unlock as you get close to home, which our reviewer said “worked smoothly,” as long as she got far enough away from home for it to recognize her return. There’s an auto-lock, too, with timers from 10 seconds to 30 minutes.

This version of the Yale Approach includes the touchscreen keypad, which needs its own flat space to either stick or screw to. In exchange, it lets you set codes for yourself or friends, with options for time and access limits if you need to manage entry to your home more carefully. It also gives you an easy button to press to lock the deadbolt as you leave the house, and a biometric fingerprint scanner.

No matter what smart lock you buy, there’s going to be a little bit of hassle; that just comes with the territory, unfortunately. The Yale smooths out a lot of the worst parts by adapting to your existing hardware, and it mostly stays out of the way afterward. The auto-unlock feature isn’t totally unique to the Approach, but it is currently our favorite implementation. The price is normally a bit on the high side, so the discount here makes this a very appealing pickup for anyone ready to relegate their old front door lock to the garage door, like I’m about to.




Similarities between human and AI learning offer intuitive design insights




New research has found similarities in how humans and artificial intelligence integrate two types of learning, offering new insights about how people learn as well as how to develop more intuitive AI tools.

The study is published in the Proceedings of the National Academy of Sciences.

Led by Jake Russin, a postdoctoral research associate at Brown University, the study trained an AI system and found that flexible and incremental learning modes interact similarly to working memory and long-term memory in humans.

“These results help explain why a human looks like a rule-based learner in some circumstances and an incremental learner in others,” Russin said. “They also suggest something about what the newest AI systems have in common with the human brain.”

Russin holds a joint appointment in the laboratories of Michael Frank, a professor of cognitive and psychological sciences and director of the Center for Computational Brain Science at Brown’s Carney Institute for Brain Science, and Ellie Pavlick, an associate professor of computer science who leads the AI Research Institute on Interaction for AI Assistants at Brown.

Depending on the task, humans acquire new information in one of two ways. For some tasks, such as learning the rules of tic-tac-toe, “in-context” learning allows people to figure out the rules quickly after a few examples. In other instances, incremental learning builds on information to improve understanding over time—such as the slow, sustained practice involved in learning to play a song on the piano.

While researchers knew that humans and AI integrate both forms of learning, it wasn’t clear how the two learning types work together. Over the course of the research team’s ongoing collaboration, Russin—whose work bridges machine learning and cognitive science—developed a theory that the dynamic might be similar to the interplay of human working memory and long-term memory.

To test this theory, Russin used “meta-learning”—a type of training that helps AI systems learn about the act of learning itself—to tease out key properties of the two learning types. The experiments revealed that the AI system’s ability to perform in-context learning emerged after it meta-learned through multiple examples.

One experiment, adapted from an experiment in humans, tested for in-context learning by challenging the AI to recombine familiar ideas to deal with new situations. If taught about a list of colors and a list of animals, could the AI correctly identify a combination of color and animal (e.g., a green giraffe) it had not seen together previously? After the AI meta-learned on 12,000 similar tasks, it gained the ability to successfully identify new combinations of colors and animals.
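For intuition, here is a minimal Python sketch of how episodes for this kind of compositional probe might be constructed. The stimulus lists, symbol labels, and episode structure below are invented for illustration and are not taken from the paper; the study’s actual stimuli and model are not reproduced here.

```python
import random
import string

# Hypothetical stimulus lists, invented for illustration.
COLORS = ["red", "green", "blue", "yellow"]
ANIMALS = ["giraffe", "zebra", "panda", "otter"]

def make_episode():
    """Build one meta-learning episode.

    The context teaches fresh, arbitrary symbols for each color and each
    animal; the query then asks about a color+animal combination that never
    appears together in the context, so answering correctly requires
    recombining familiar parts (in-context compositional generalization).
    """
    # Draw new random symbols every episode, so nothing can be memorized
    # across episodes; only the general skill of recombination transfers.
    symbols = random.sample(string.ascii_uppercase, len(COLORS) + len(ANIMALS))
    color_sym = dict(zip(COLORS, symbols[:len(COLORS)]))
    animal_sym = dict(zip(ANIMALS, symbols[len(COLORS):]))

    # Context: each primitive is shown on its own with its symbol.
    context = [(c, color_sym[c]) for c in COLORS]
    context += [(a, animal_sym[a]) for a in ANIMALS]
    random.shuffle(context)

    # Query: a novel combination, e.g. ("green", "giraffe") -> "QK".
    c, a = random.choice(COLORS), random.choice(ANIMALS)
    return context, ((c, a), color_sym[c] + animal_sym[a])

# A sequence model would be meta-trained on many such episodes
# (the study used roughly 12,000) before in-context learning emerges.
context, (query, answer) = make_episode()
print(context, query, answer)
```

Because the symbols are resampled each episode, a model cannot succeed by memorizing any particular pairing; over thousands of episodes it can only improve by acquiring the general recombination skill, which is the sense in which in-context learning emerges from incremental training.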

The results suggest that for both humans and AI, quicker, flexible in-context learning arises after a certain amount of incremental learning has taken place.

“At the first board game, it takes you a while to figure out how to play,” Pavlick said. “By the time you learn your hundredth board game, you can pick up the rules of play quickly, even if you’ve never seen that particular game before.”

The team also found trade-offs, including between learning retention and flexibility: Similar to humans, the harder it is for the AI to correctly complete a task, the more likely it is to remember how to perform it in the future. According to Frank, who has studied this paradox in humans, this is because errors cue the brain to update information stored in long-term memory, whereas error-free actions learned in context increase flexibility but don’t engage long-term memory in the same way.

For Frank, who specializes in building biologically inspired computational models to understand human learning and decision-making, the team’s work showed how analyzing the strengths and weaknesses of different learning strategies in an artificial neural network can offer new insights about the brain.

“Our results hold reliably across multiple tasks and bring together disparate aspects of human learning that neuroscientists hadn’t grouped together until now,” Frank said.

The work also suggests important considerations for developing intuitive and trustworthy AI tools, particularly in sensitive domains such as mental health.

“To have helpful and trustworthy AI assistants, human and AI cognition need to be aware of how each works and the extent that they are different and the same,” Pavlick said. “These findings are a great first step.”

More information:
Jacob Russin et al, Parallel trade-offs in human cognition and neural networks: The dynamic interplay between in-context and in-weight learning, Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2510270122

Provided by
Brown University


Citation:
Similarities between human and AI learning offer intuitive design insights (2025, September 4)
retrieved 4 September 2025
from https://techxplore.com/news/2025-09-similarities-human-ai-intuitive-insights.html





