

Mind readers: How large language models encode theory-of-mind



A ToM task. In Question (a), LLMs should fill in the blank with “popcorn.” In Question (b), the blank should be filled with “chocolate.” Credit: npj Artificial Intelligence (2025). DOI: 10.1038/s44387-025-00031-9

Imagine you’re watching a movie in which a character puts a chocolate bar in a box, closes the box and leaves the room. Another person, also in the room, moves the bar from the box to a desk drawer. You, as an observer, know that the treat is now in the drawer, and you also know that when the first person returns, they will look for the treat in the box because they don’t know it has been moved.

You know that because as a human, you have the ability to infer and reason about the minds of other people—in this case, the person’s lack of awareness regarding where the chocolate is. In scientific terms, this ability is described as Theory of Mind (ToM). This “mind-reading” ability allows us to predict and explain the behavior of others by considering their mental states.

We develop this capacity at about the age of four, and our brains are really good at it.

“For a human, it’s a very easy task,” says Zhaozhuo Xu, Assistant Professor of Computer Science at the School of Engineering—it barely takes seconds to process.

“And while doing so, our brains involve only a small subset of neurons, so it’s very energy efficient,” explains Denghui Zhang, Assistant Professor in Information Systems and Analytics at the School of Business.

How LLMs differ from human reasoning

Large language models, or LLMs, which the researchers study, work differently. Although they were inspired by some concepts from neuroscience, they aren’t exact mimics of the human brain. LLMs were built on artificial neural networks that loosely resemble the organization of biological neurons, but the models learn from patterns in massive amounts of text and operate using mathematical functions.

That gives LLMs a definitive advantage over humans in processing loads of information rapidly. But when it comes to efficiency, particularly with simple things, LLMs lose to humans. Regardless of the complexity of the task, they must activate most of their neural network to produce the answer. So whether you’re asking an LLM to tell you what time it is or summarize “Moby Dick,” a whale of a novel, the LLM will engage its entire network, which is resource-consuming and inefficient.

“When we, humans, evaluate a new task, we activate a very small part of our brain, but LLMs must activate pretty much all of their network to figure out something new even if it’s fairly basic,” says Zhang. “LLMs must do all the computations and then select the one thing you need. So you do a lot of redundant computations, because you compute a lot of things you don’t need. It’s very inefficient.”

New research into LLMs’ social reasoning

Working together, Zhang and Xu formed a multidisciplinary collaboration to better understand how LLMs operate and how their efficiency in social reasoning can be improved.

They found that LLMs use a small, specialized set of internal connections to handle social reasoning. They also found that LLMs’ social reasoning abilities depend strongly on how the model represents word positions, especially through a method called rotary positional encoding (RoPE). These special connections influence how the model pays attention to different words and ideas, effectively guiding where its “focus” goes during reasoning about people’s thoughts.
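The study itself isn’t reproduced here, but the RoPE mechanism it analyzes can be sketched in a few lines of NumPy. This is a minimal illustration of the "rotate-half" variant used by several open LLM implementations, with illustrative function names and shapes, not code from the paper:

```python
import numpy as np

def rope(x, base=10000.0):
    """Apply rotary positional encoding (RoPE) to a (seq_len, dim) array.

    Each feature pair is rotated by an angle that grows with the token's
    position, so the dot products that attention computes depend on the
    relative offset between tokens rather than their absolute positions.
    """
    seq_len, dim = x.shape
    half = dim // 2
    # Per-pair rotation frequencies, decreasing geometrically with index.
    freqs = base ** (-np.arange(half) * 2.0 / dim)          # (half,)
    angles = np.arange(seq_len)[:, None] * freqs[None, :]   # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    # Pair feature i with feature i + half ("rotate-half" convention)
    # and apply a standard 2-D rotation to each pair.
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)
```

Because position 0 corresponds to a zero rotation, the first token’s vector passes through unchanged, and since rotations preserve length, every token’s vector keeps its norm—only its direction encodes where it sits in the sequence.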

“In simple terms, our results suggest that LLMs use built-in patterns for tracking positions and relationships between words to form internal ‘beliefs’ and make social inferences,” Zhang says. The two collaborators outlined their findings in the study titled “How large language models encode theory-of-mind: a study on sparse parameter patterns,” published in npj Artificial Intelligence.

Looking ahead to more efficient AI

Now that researchers better understand how LLMs form their “beliefs,” they think it may be possible to make the models more efficient.

“We all know that AI is energy-expensive, so if we want to make it scalable, we have to change how it operates,” says Xu. “Our human brain is very energy efficient, so we hope this research brings us back to thinking about how we can make LLMs work more like the human brain, so that they activate only the subset of parameters in charge of a specific task. That’s an important argument we want to convey.”

More information:
Yuheng Wu et al, How large language models encode theory-of-mind: a study on sparse parameter patterns, npj Artificial Intelligence (2025). DOI: 10.1038/s44387-025-00031-9

Citation:
Mind readers: How large language models encode theory-of-mind (2025, November 11)
retrieved 11 November 2025
from https://techxplore.com/news/2025-11-mind-readers-large-language-encode.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.







SpaceLocker launches first shared satellite mission | Computer Weekly



In-orbit hosting services provider SpaceLocker claims to have reached a milestone in its history, joining the ranks of satellite operators and becoming a gateway to space through Out of the Box, a shared satellite model that offers a direct response to both economic and environmental challenges.

SpaceLocker was founded in 2022 with the aim of becoming a global reference for access to orbit.

In the long term, the company aims to operate across multiple orbital regimes, scale its mission cadence and open space to a new generation of users.

Rather than multiplying dedicated satellites, the French orbital hosting firm said it was maximising existing capacity by hosting multiple missions on a single platform. This approach, it believes, not only reduces costs, but also helps limit space debris and decrease total mass launched into orbit.

The new phase for SpaceLocker comes a year after its first in-orbit mission, and Out of the Box is its first fully owned and operated satellite. At the core of the new service is a patented “universal space port” technology, comparable to a USB port for satellites. Plug-and-play and payload-agnostic, it is designed to transform satellites into shared infrastructures capable of hosting multiple payloads simultaneously.

Offering more detail on this transition from dedicated satellites to a “space cloud”, the company said that until now, sending technology to orbit required designing or procuring an entire satellite – a long, costly and inflexible process that has remained largely unchanged for decades. In addition, it argued that currently, nearly one in five space missions is dedicated to technology demonstration, yet these opportunities remain complex and expensive to execute. By simplifying access to orbit, SpaceLocker said it was positioning itself as a key enabler of space innovation.

“We want to do for space what cloud computing did for IT: shift from ownership to shared infrastructure,” said SpaceLocker CEO and co-founder Théophile Lagraulet. “In the future, sending an instrument to orbit won’t require building a satellite. Access to space can become a standardised service.”

With Out of the Box, SpaceLocker says it has reached a key inflection point – becoming a satellite operator and building its own mission portfolio, demonstrating rapid execution in a sector known for long development cycles.

It is deploying a 16U CubeSat (~20kg) carrying five European customers – making access to space possible without building a dedicated satellite. Customers develop their payloads independently and integrate them into a standardised “container” using the company’s universal space port. SpaceLocker then manages the full orbital stack, from integration to operations. 

The company claims that such a model reduces costs “dramatically”, making missions up to three times cheaper than traditional ones while cutting time-to-orbit in half. It also significantly lowers environmental impact through resource sharing, helping to limit space debris and reduce the total mass launched into orbit.

The Out of the Box mission carries five payloads from four players across the European ecosystem, showcasing the diversity of next-generation space applications.

EDGX, which develops technologies that enable compute in orbit, will demonstrate edge computing capabilities, enabling satellites to process data onboard and reduce reliance on ground infrastructure. Fédération Open Space Makers will fly FOSM-1, a payload dedicated to amateur radio and open communication experiments, supported by CNES. Solar MEMS will operate a high-precision star tracker for satellite orientation, while Arcsec will test two advanced star trackers to demonstrate high-performance attitude determination for small satellites.





The Best Babbel Promo Codes and Deals for April 2026



I’ve been trying to become fluent in Spanish for the last decade. After spending most of my adult life surrounded by multilinguals, I often feel like I’m playing an impossible game of catch-up. Like everyone else, I’ve tried to become regimented about practicing on a phone app like Duolingo, which attempts to gamify language learning but mostly ends up with a sad, sick-looking green bird icon guilting me into practicing every time I open my phone.

Babbel aims to help people actually learn the language through practical conversation and grammar, using proven pedagogical methods and speech recognition technology. Each lesson is short, running 10 to 15 minutes, and developed by a team of over 150 linguists. Instead of learning the same simple phrases in ad-ridden games on an endless loop, take charge of your language learning this year and make that commitment a reality. No more excuses—we’ve got a Babbel promo code and a Babbel coupon to help you hit your goals. Maybe you’ll be fluent by your next vacation (or at least able to order a chopped cheese with confidence at the bodega).

Unlock Your Babbel Promo Code and Save Big in April 2026

Not only is Babbel a helpful interactive app to simplify language learning, but it also has holistic services to help introduce the language to every part of your life. These are things like Babbel videos, which do a deep dive into what makes a language so fascinating, Babbel podcasts, which are led by Babbel experts who take an inside look at local culture and break down language secrets, and Babbel magazine, which highlights stories from around the world so you can better understand the history, culture, and people from the language you’re learning (and maybe will inspire you to take a trip to practice that language IRL!).

Make sure you check back often to find the latest Babbel promo code for sitewide savings. There are often discounts on the subscription tiers, which range from three month plans to annual memberships. Plus, springtime is usually when there are significant Babbel discounts for new users. And, if you sign up for the Babbel newsletter, you can receive a link for a Babbel coupon in your inbox.

Save 60% on 6-Month Plans With the Healthcare Workers Discount

As stated, knowing another language is an invaluable life skill, and a skill that is immeasurably valuable to healthcare workers, who may be able to more easily give lifesaving care. Healthcare professionals and nurses get a Babbel discount of 60% off a six-month Babbel subscription. To claim the Babbel discount, users just need to verify their medical credentials via ID.me.

Claim Your 60% Military Discount on 6-Month Subscriptions

This Babbel discount also applies to active duty military, veterans, and their families, who are also eligible for 60% off six-month Babbel subscriptions. This Babbel military coupon is valid for National Guard, reserve members, and immediate family members of service personnel, and all you need to do is verify your status at ID.me.

Snag a 60% Teacher Discount on Your Next 6 Months

Babbel is also extending the 60% discount to the real unsung heroes, teachers. Knowing more than one language is an invaluable tool for educators to be able to talk more effectively to parents or guardians, as well as to more deeply understand their students’ cultural identities. Educators and teachers, like K-12 teachers, university professors, and other educational staff members, are eligible for 60% off a six-month Babbel subscription. And like the others, you just need to verify credentials through ID.me.

Grab Top Lifetime Subscription Deals and Save in April 2026

Everyone knows that learning a language is a lifetime process, and Babbel wants to make it even easier for you to commit to it. If you pay once, you’ll get access to all available Babbel languages forever with Lifetime deals. You’ll just need to look for the “Lifetime Subscription” Babbel promos that could potentially save you hundreds of dollars over several years. Be sure to check back often, as these rotating deals often pop up during major holiday sales. While the upfront cost is higher, you’ll get access to all 14 available languages with this Babbel promo code lifetime subscription deal.





Robotaxi Outage in China Leaves Passengers Stranded on Highways



An unknown technical problem caused a number of robotaxis owned by the Chinese tech giant Baidu to freeze on Tuesday in the middle of traffic, trapping some passengers in the vehicles for more than an hour.

In Wuhan, a city in central China where Baidu has deployed hundreds of its Apollo Go self-driving taxis, people on Chinese social media reported witnessing the cars suddenly malfunction and stop operating. Photos and videos shared online show the Baidu cars halted on busy highways, often in the fast lane.

A college student in Wuhan tells WIRED that she was stuck in a Baidu robotaxi with two friends for about 90 minutes on Tuesday. (She asked to be identified only by her last name, He, to protect her privacy.) The student says the car malfunctioned and stopped four or five times during the trip before it eventually parked in front of an intersection in eastern Wuhan. Luckily, it was not a busy road, and the group was not in immediate danger. The screen display in the car asked the passengers to remain in the car with their seatbelts on and wait for a company representative to come “in five minutes,” according to a photo He shared with WIRED.

He says it took about 30 minutes to reach a Baidu customer representative on the phone. “They kept saying it would be reported to their superior. But they didn’t explain what caused [the outage] or let us know how long we needed to wait for the staff to come,” He says. But no one ever came, and after another hour of waiting, the three passengers decided to just get out and go home by themselves (the doors weren’t locked).

On Chinese social media, other passengers also complained about being unable to reach Baidu’s customer support. “I tried every way I could think of to call for help using the options the app showed, but the phone line wouldn’t go through, and when I pressed the SOS button it told me it was unavailable. So then what exactly is the SOS for?” wrote one person in a post on RedNote alongside a video showing the button not working. She said she had to force the door open and get out of the car as traffic came to a complete stop behind her robotaxi. “Apollo Go, you really owe me an apology,” she wrote.

Baidu didn’t immediately respond to a request for comment. Local police in Wuhan issued a statement around midnight in China that said the situation was “likely caused by a system malfunction,” but the incident is still under investigation. No one was injured and all passengers have exited the vehicles, the police added. It’s unclear how many of Baidu’s robotaxis may have been impacted.

One dash cam recording posted to RedNote shows a car passing 16 Apollo Go vehicles parked on the road in the span of 90 minutes. On several occasions, the video shows the driver narrowly avoiding hitting the robotaxis by braking or changing lanes at the last minute.

Others were apparently not as fortunate. In another RedNote post, a man claimed he crashed into one of the malfunctioning Baidu vehicles. The man wrote in the caption that he was driving over 40 mph on a highway when the car in front of him suddenly changed lanes to avoid the stopped robotaxi. He couldn’t react fast enough and ended up running into the self-driving car. Photos of the man’s orange SUV being towed away show that the car’s front-right fender was completely torn off, and other parts appeared to have sustained major damage.


