The MOTIF Hand: A tool advancing the capabilities of previous robot hand technology



The MOTIF hand, the robotic hand developed by the researchers. Credit: Zhou et al

Growing up, we learn to push just hard enough to move a box and to avoid touching a hot pan with our bare hands. Now, a robot hand has been developed that also has these instincts.

The MOTIF Hand, developed by a student team in collaboration with Daniel Seita, a USC Viterbi assistant professor of computer science, is built on the idea of being multimodal—that is, having several sensory abilities. The most prominent of these abilities relate to temperature and force, with built-in sensors for depth, force and temperature allowing the hand to sense and react to these factors.

These capabilities create potential not only for better research involving robotic hands but also for hands that last longer, since they can avoid temperature-related damage. Force sensing could also have a surprisingly practical real-world use.

“In factories and other domains, a robot would have to push to get objects into their targets, and that requires measuring some amount of force,” Seita said. “That type of force sensor can help in those cases, just to check that the robot is exerting the right amount of force.

“We haven’t seen people build this type of hand before,” he added.

Hot stuff

The MOTIF Hand builds on the LEAP Hand, developed by a research team at Carnegie Mellon in 2023. MOTIF’s key advancement is the addition of human-like sensory capabilities, which could open up myriad applications, including factory work and even cooking or welding, Seita said.

The MOTIF Hand, developed by a student team in collaboration with Daniel Seita, a USC Viterbi assistant professor of computer science, is built on the idea of being multimodal—that is, having several sensory abilities. Credit: USC Viterbi School of Engineering and Canva Licensed Footage

The robot’s ability to sense temperature comes from a thermal camera built into the palm of the hand. Seita and his team of USC Viterbi graduate students aimed to create a hand that would simulate a human understanding of temperature.

“If we’re cooking, we have a pot that’s very hot. We might put our hand near it to check if it’s safe to touch before we actually touch it, to avoid burns and damage,” Seita said. “We wanted that same intuition conveyed into a robot system.”

It’s an intuitive system that requires the hand to be close to the material whose temperature it’s detecting, said Hanyang Zhou, a co-author of the research paper, “The MOTIF Hand: A Robotic Hand for Multimodal Observations with Thermal, Inertial, and Force Sensors,” who recently graduated from the Viterbi School with a master’s in computer science. The paper is published on the arXiv preprint server.

“We were thinking, is it possible in some certain way to get a signal but not touch anything? So, we put an infrared-based camera right in the palm,” he added.

In other words, the MOTIF Hand can detect temperature through this thermal camera without even touching an object—just placing the hand close enough for the camera to examine it does the job.
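
The team’s code is not reproduced in the article, but the decision the hand has to make can be illustrated with a minimal, hypothetical sketch: treat the thermal frame as a grid of temperatures, look at the region in front of the palm, and refuse to grasp if anything there exceeds a safety threshold. The threshold, frame size and region below are assumptions for illustration only, not details from the paper.

```python
import numpy as np

# Hypothetical sketch: treat a thermal camera frame as a 2D array of
# temperatures in degrees Celsius, one value per pixel.
SAFE_TEMP_C = 60.0  # assumed safety threshold, not a figure from the paper

def safe_to_grasp(thermal_frame: np.ndarray, roi: tuple) -> bool:
    """Return True only if the hottest pixel in the region of interest
    (the area in front of the palm) stays below the safety threshold."""
    return float(thermal_frame[roi].max()) < SAFE_TEMP_C

# Example: a simulated 60x80 frame at room temperature with a hot pot
# occupying the centre of the image.
frame = np.full((60, 80), 22.0)
frame[20:40, 30:50] = 180.0                   # the hot object
palm_roi = (slice(15, 45), slice(25, 55))     # assumed palm-facing region

print(safe_to_grasp(frame, palm_roi))  # False -> keep the fingers away
```

In practice such a pre-touch check would presumably be combined with the hand’s depth sensing to judge how far away the hot surface is, but the core decision reduces to a threshold test like the one above.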

  • The proposed MOTIF hand. Credit: Zhou, Lou, Liu, et al.
  • Data processing pipeline for thermal-based grasping. a) First, researchers collect images from diverse viewpoints of the object and use SAM2 to extract the object mask. b) Then they reconstruct the 3D mesh and point cloud, perform thermal-RGB data alignment, and do reprojection. Credit: arXiv (2025). DOI: 10.48550/arxiv.2506.19201
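
The caption above compresses several steps. The geometric core of the last two, aligning thermal data with geometry and reprojecting it, amounts to projecting 3D points into the thermal image so each point can pick up a temperature reading. Below is a minimal sketch of that idea using a pinhole-camera model; the intrinsics, point cloud and hot patch are invented for illustration, and the SAM2 segmentation and mesh-reconstruction steps are omitted entirely.

```python
import numpy as np

def reproject_temperatures(points_cam: np.ndarray,
                           thermal_img: np.ndarray,
                           K: np.ndarray) -> np.ndarray:
    """Assign a temperature to each 3D point (expressed in the thermal
    camera's frame) by projecting it into the thermal image with
    intrinsics K. Points falling outside the image get NaN."""
    h, w = thermal_img.shape
    temps = np.full(len(points_cam), np.nan)
    for i, (x, y, z) in enumerate(points_cam):
        if z <= 0:                                   # behind the camera
            continue
        u = int(round(K[0, 0] * x / z + K[0, 2]))    # pixel column
        v = int(round(K[1, 1] * y / z + K[1, 2]))    # pixel row
        if 0 <= u < w and 0 <= v < h:
            temps[i] = thermal_img[v, u]
    return temps

# Assumed intrinsics and data, purely for illustration.
K = np.array([[160.0, 0.0, 80.0],
              [0.0, 160.0, 60.0],
              [0.0, 0.0, 1.0]])
thermal = np.full((120, 160), 25.0)
thermal[40:80, 60:100] = 150.0                 # hot patch in the image
cloud = np.array([[0.0, 0.0, 0.5],             # lands on the hot patch
                  [0.3, 0.2, 0.5]])            # projects outside the frame
print(reproject_temperatures(cloud, thermal, K))  # [150.  nan]
```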

‘You have to feel it’

The work done by Seita, Zhou and their team was designed to make the process of testing temperature and force feel more natural—in other words, true to human experiences with these things. For example, force is something that humans can’t see, just feel. The MOTIF Hand is designed around the same sensations we use to understand force-related properties, such as an object’s weight, allowing for more life-like robotic reactions to force.

“We as humans cannot distinguish [force] as a vision; you have to feel it. But how is that possible for a [robot]?” Zhou asked. “If I don’t know whether a [container] is full of water, I just flick it. I’ll shake it, right?”

The IMU sensors built into the MOTIF Hand bring this simple test to robotics. The hand, like our own, merely needs to flick or shake an object to determine its weight.
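
The article does not detail the team’s method, but the intuition can be sketched with simulated accelerometer data: after a quick flick, liquid keeps sloshing, so the signal “rings” for longer than it does with an empty container. Everything below, the traces, the sampling rate and the threshold, is made up purely to illustrate that idea and is not the paper’s actual approach or data.

```python
import numpy as np

def residual_energy(accel: np.ndarray, flick_end: int) -> float:
    """Energy of the acceleration signal after the flick has finished;
    a sloshing liquid keeps this value noticeably higher."""
    tail = accel[flick_end:]
    return float(np.sum(tail ** 2))

# Simulated 1 kHz accelerometer traces: a 0.2 s flick followed by 0.8 s
# of settling. All numbers are invented for illustration.
t = np.arange(1000) / 1000.0
flick = np.where(t < 0.2, 9.0 * np.sin(2 * np.pi * 5 * t), 0.0)
slosh = np.where(t >= 0.2,
                 2.0 * np.exp(-3 * (t - 0.2)) * np.sin(2 * np.pi * 4 * t),
                 0.0)

empty_trace = flick + 0.05 * np.random.randn(1000)
full_trace = flick + slosh + 0.05 * np.random.randn(1000)

THRESHOLD = 5.0  # assumed decision boundary
for name, trace in [("empty", empty_trace), ("full", full_trace)]:
    e = residual_energy(trace, flick_end=200)
    print(name, "->", "liquid inside" if e > THRESHOLD else "probably empty")
```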

The MOTIF Hand was based on Carnegie Mellon’s LEAP Hand, which was open source. To further advance this sensory technology, Seita and his team have promised to make the MOTIF Hand open source as well.

“Open-sourcing research advancement is really important to advance the community,” Seita said. “The more people that use our hand, the better it is for research.”

Zhou described the MOTIF Hand’s sensory advancements as a “platform” that he hopes the entire robotics community will build on for the future.

“We should make it easy [and] accessible for more and more research teams, as long as they are interested in such a platform,” Zhou said.

More information:
Hanyang Zhou et al, The MOTIF Hand: A Robotic Hand for Multimodal Observations with Thermal, Inertial, and Force Sensors, arXiv (2025). DOI: 10.48550/arxiv.2506.19201

Journal information:
arXiv


Citation:
The MOTIF Hand: A tool advancing the capabilities of previous robot hand technology (2025, September 14)
retrieved 14 September 2025
from https://techxplore.com/news/2025-09-motif-tool-advancing-capabilities-previous.html


Hollywood Is Losing Audiences to AI Fatigue


An insurrectionist robot unleashed by a mad inventor in Fritz Lang’s Metropolis. HAL 9000 sabotaging a manned mission to Jupiter in 2001: A Space Odyssey. Skynet, the self-aware global defense network that seeks to exterminate humanity throughout the Terminator franchise.

Hollywood has never wanted for audacious depictions of artificial intelligence or the ways in which it could alter the fate of our species. But the rapid integration of AI into the studio system and our now unavoidable interactions with it have severely compromised the genre, not to mention film as a medium.

On the one hand, it’s perfectly understandable that screenwriters and studios would return to the subject of AI in recent years, particularly since it provokes such fierce debate within the industry. (A major cause of the 2023 labor strikes was the threat that AI posed to creative jobs.) Still, the novelty faded fast.

Consider M3GAN, a campy horror flick about an artificially intelligent doll who starts killing people, released just a week after the debut of ChatGPT in 2022: It was a surprise box-office smash. Last year’s sequel? A critical and commercial flop. Mission: Impossible—Dead Reckoning (2023) introduced a rogue AI called The Entity as a final adversary for Ethan Hunt and crew. Mission: Impossible—The Final Reckoning (2025), the blockbuster finale that resolved that cliff-hanger and closed out the spy saga, underperformed its predecessor, and neither film quite justified its expense.

The latest AI-themed bomb is Mercy, a crime thriller starring Chris Pratt as an LAPD detective strapped into a chair who has 90 minutes to pull enough evidence from security cameras and phone records to convince a stern judge bot (Rebecca Ferguson) that he didn’t kill his wife—or else face instant execution. Though it was released only in January, one reviewer has already declared it “the worst movie of 2026,” and judging by its mediocre ticket sales, many US moviegoers decided as much from the trailer alone. It’s almost as if nobody cared whether a fictional software program might be capable of sparing a life when real health insurance claims are being denied by algorithms already.

For those few who did see it, Mercy fell far short of its dystopian premise, failing to grapple with the ethics of such a surveillance state and its medieval-modern justice system in favor of cheap relativism. Spoiler: Pratt’s character and the AI ultimately team up to stop the real bad guys as the bot begins to show signs of unrobotic emotion and doubt, which manifest as glitches in the program. By the end, Pratt is delivering a true groaner of a we’re-not-so-different speech to the holographic Ferguson. “Human or AI, we all make mistakes,” he says. “And we learn.”

While the naive belief in AI’s progress toward enlightenment feels dated on arrival, you are also reminded of how prophetically cynical something like Paul Verhoeven’s RoboCop, now almost 40 years old, was in addressing a future of cybernetic fascism. Contrary to that kind of pitch-black, violent satire, the current trend seems to be propagandistic narratives about how AIs are scary at first but secretly good. (See also: Tron: Ares, Disney’s wildly misguided attempt to leverage an old IP for the era of large language models, another cinematic train wreck of 2025.)

In fact, the insistence on some inborn value or honor to artificial intelligence may be the driving force behind the new Time Studios web series On This Day…1776. Conceived as a blow-by-blow account of the year the American colonies declared independence from the British crown, it consists of short YouTube videos generated in part by Google DeepMind (though actual actors supply voiceovers). The project has drawn serious attention and scorn because acclaimed director Darren Aronofsky served as executive producer via his creative studio Primordial Soup, launched last year in a partnership with Google to explore the applications of AI in filmmaking. It probably doesn’t help that Aronofsky and company are valorizing the country’s founders in the same aesthetic that has defined the authoritarian meme culture of Donald Trump’s second term.




Half of Google’s software development now AI-generated | Computer Weekly


As much as half of all the code produced at Alphabet, the parent company of Google, is being generated by artificial intelligence (AI) coding agents.

The use of AI to drive operational efficiency and free up more money to invest in AI capacity was one of the points made by Anat Ashkenazi, senior vice-president and chief financial officer of Alphabet and Google, during the company’s latest quarterly earnings call.

For its fourth quarter of 2025, Alphabet reported revenue of $114bn, up 18% year over year. For the full year, it posted revenue of $403bn, a 15% increase from the previous year.

The company is seeing a huge increase in demand for Google Cloud and its AI-powered services. During its latest earnings call, in a response that suggests Alphabet does not need to expand its software developer workforce, Ashkenazi said: “We look at coding productivity. About 50% of our code is written by coding agents, which are then reviewed by our own engineers. This certainly helps our engineers do more and move faster with the current footprint.”

Ashkenazi said 60% of Alphabet’s 2025 capital expenditure (capex) was allocated to servers, with the remaining 40% directed towards datacentres and networking equipment. A similar amount looks set to be spent in 2026, with Alphabet predicting it will spend between $175bn and $185bn on servers, datacentres and networking equipment. Its latest quarterly earnings call suggests capex is primarily focused on AI infrastructure and technical innovation to meet growing demand.
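
As a back-of-the-envelope worked example, and assuming (which the call did not spell out) that the 60/40 split carries over into 2026, the forecast range would imply roughly $105bn to $111bn for servers and $70bn to $74bn for datacentres and networking:

```python
# Rough split of Alphabet's 2026 capex forecast, assuming the stated 2025
# proportions (60% servers, 40% datacentres/networking) also hold in 2026.
server_share = 0.60

for total_bn in (175, 185):  # forecast range from the earnings call, in $bn
    servers_bn = total_bn * server_share
    rest_bn = total_bn - servers_bn
    print(f"${total_bn}bn total -> ${servers_bn:.0f}bn servers, "
          f"${rest_bn:.0f}bn datacentres and networking")
```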


There is increasing concern in stock markets that the huge investments in AI infrastructure will not deliver a return on investment. In response to questions about AI capacity challenges and compute demand, Sundar Pichai, CEO of Alphabet and Google, said: “We’ve been supply-constrained, even as we’ve been ramping up our capacity. Obviously, our capex spend this year is with an eye towards the future, and you have to keep in mind, some of the time horizons are increasing in the supply chain. So, we are constantly planning for the long-term and working towards that. And, obviously, how we close the gap this year is a function of what we have done in the prior years. And so there is that time delay to keep in mind.”

The investment in AI infrastructure is needed to support demand for Google Cloud and the AI services the company provides. The quarterly filing shows Google Cloud’s annual run rate is over $70bn.

Pichai said Google Cloud has sold more than eight million paid seats of Gemini Enterprise, its AI platform, to over 2,800 companies. He also stated that over 120,000 enterprises use Google’s Gemini AI models, including major companies such as Airbus, Honeywell, Salesforce and Shopify, with existing customers increasing their spending, outpacing their initial commitments by over 30%.

“Nearly 75% of Google Cloud customers have used our vertically optimised AI, from chips, to models, to AI platforms, and enterprise AI agents, which offer superior performance, quality, security and cost-efficiency. These AI customers use 1.8 times as many products as those who do not, enabling us to diversify our product portfolio, deepen customer relationships and accelerate revenue growth,” added Pichai.

Forrester’s principal analyst, Lee Sustar, said: “Google Cloud’s quarterly revenue jump of 48% over the same period a year earlier is decisive evidence that it is a full-blown enterprise challenger to AWS [Amazon Web Services] and Microsoft Azure. While Google does not have the breadth of AWS services or the deep corporate foothold of Microsoft, its steady effort to win enterprise customers is now turbocharged by its AI-native cloud offerings. But this comes at a hefty price for parent Alphabet, which saw capital expenditure for the fourth quarter effectively double the amount of a year earlier.”




Netflix Says if the HBO Merger Makes It Too Expensive, You Can Always Cancel


There is concern that subscribers might be negatively affected if Netflix acquires Warner Bros. Discovery’s streaming and movie studios businesses. One of the biggest fears is that the merger would lead to higher prices due to less competition for Netflix.

During a US Senate hearing Tuesday, Netflix co-CEO Ted Sarandos suggested that the merger would have the opposite effect.

Sarandos was speaking at a hearing held by the US Senate Judiciary Committee’s Subcommittee on Antitrust, Competition Policy, and Consumer Rights, “Examining the Competitive Impact of the Proposed Netflix-Warner Brothers Transaction.”

Sarandos aimed to convince the subcommittee that Netflix wouldn’t become a monopoly in streaming or in movie and TV production if regulators allowed its acquisition to close. Netflix is the largest subscription video-on-demand provider by subscribers (301.63 million as of January 2025), and Warner Bros. Discovery is the third (128 million streaming subscribers, including users of HBO Max and, to a smaller degree, Discovery+).

Speaking at the hearing, Sarandos said: “Netflix and Warner Bros. both have streaming services, but they are very complementary. In fact, 80 percent of HBO Max subscribers also subscribe to Netflix. We will give consumers more content for less.”

During the hearing, Democratic senator Amy Klobuchar of Minnesota asked Sarandos how Netflix can ensure that streaming remains “affordable” after a merger, especially after Netflix issued a price hike in January 2025 despite adding more subscribers.

Sarandos said the streaming industry is still competitive. The executive claimed that previous Netflix price hikes have come with “a lot more value” for subscribers.

“We are a one-click cancel, so if the consumer says, ‘That’s too much for what I’m getting,’ they can cancel with one click,” Sarandos said.

When pressed further on pricing, the executive argued that the merger doesn’t pose “any concentration risk” and that Netflix is working with the US Department of Justice on potential guardrails against more price hikes.

Sarandos claimed that the merger would “create more value for consumers.” However, his idea of value isn’t just about how much subscribers pay to stream but about content quality. By his calculations, which he provided without further details, Netflix subscribers spend an average of 35 cents per hour of content watched, compared to 90 cents for Paramount+.

The Netflix stat is similar to one provided by MoffettNathanson in January 2025, finding that in the prior quarter, on average, Netflix generated 34 cents in subscription fees per hour of content viewed per subscriber. At the time, the research firm said Paramount+ made an average of 76 cents per hour of content viewed per subscriber.
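
Neither Sarandos nor MoffettNathanson published the working behind those figures, but the general shape of such a metric is simple: subscription dollars for a period divided by hours of content watched in the same period. The inputs below are hypothetical and chosen only to show the arithmetic, not the actual data behind either estimate.

```python
def dollars_per_hour(monthly_fee_usd: float, monthly_hours_viewed: float) -> float:
    """Subscription dollars paid per hour of content actually watched."""
    return monthly_fee_usd / monthly_hours_viewed

# Hypothetical illustration: the same monthly fee looks very different
# depending on how much a subscriber actually watches.
print(round(dollars_per_hour(17.99, 50) * 100), "cents per hour")  # ~36 cents
print(round(dollars_per_hour(17.99, 20) * 100), "cents per hour")  # ~90 cents
```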

Downplaying Monopoly Concerns

Netflix views Warner as “both a competitor and a supplier,” Sarandos said when subcommittee chair Republican senator Mike Lee of Utah asked why Netflix wants to buy WB’s film studios, per Variety. The streaming executive claimed that Netflix’s “history is about adding more and more” content and choice.

During the hearing, Sarandos argued that streaming is a competitive business and pointed to Google, Apple, and Amazon as “deep-pocketed tech companies trying to run away with the TV business.” He tried to downplay concerns that Netflix could become a monopoly by emphasizing YouTube’s high TV viewership. Nielsen’s The Gauge tracker shows which platforms Americans use most when using their TVs (as opposed to laptops, tablets, or other devices). In December, it said that YouTube, not including YouTube TV, had more TV viewership (12.7 percent) than any other streaming video-on-demand service, including second-place Netflix (9 percent). Sarandos claimed that Netflix would have 21 percent of the streaming market if it merged with HBO Max.


