Tech
BEAST-GB model combines machine learning and behavioral science to predict people’s decisions
A key objective of behavioral science research is to better understand how people make decisions in situations where outcomes are unknown or uncertain, and where choices therefore carry some degree of risk.
The ability to predict people's choices in these situations would be highly valuable, as it could inform effective initiatives aimed at prompting people to make better decisions for themselves and others in their community.
Researchers at the Technion (Israel Institute of Technology) and various institutes in the United States recently developed a new computational model, called BEAST-GB, that accurately predicts people's decisions in situations involving risk and uncertainty.
Their proposed model, outlined in a paper published in Nature Human Behaviour, combines advanced machine learning algorithms with behavioral science theory.
“Human decision research is rich in competing theories, yet none reliably and accurately predicts human choices across contexts,” Ori Plonsky, first author of the paper, told Tech Xplore.
“To see which ideas really work, we organized CPC18, a ‘choice prediction competition’ in which anyone could submit a computational model to predict people’s decisions under risk and uncertainty. We were especially interested in knowing if data-driven machine learning, theory-driven behavioral models, or, as was our guess, a hybrid that embeds behavioral theory inside ML, would excel.”
The new machine learning model developed by Plonsky and his colleagues draws from a behavioral science framework known as BEAST (Best Estimate and Sampling Tools). This is a model based on psychological theories that were previously found to predict people’s decisions with good accuracy.
“BEAST assumes that, in choice under risk and uncertainty, people mix several strategies, such as minimizing the chances of immediate regret or hedging against worst outcomes,” explained Plonsky.
“We translated each strategy into a ‘behavioral feature,’ a concise formula that captures how sensitive a decision-maker should be to that consideration in any given choice task. We then fed these theory-based features, plus purely objective task descriptors, into Extreme Gradient Boosting (a machine learning algorithm known to be highly useful in prediction tournaments)—hence the name BEAST-GB.”
With the enhancements implemented by the researchers, the BEAST-GB model could analyze behavioral data and derive the motives driving decisions, as well as the impact of these motives in different decision-making scenarios.
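To make the idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of what turning BEAST-style decision strategies into numeric "behavioral features" might look like for a simple gamble-choice task. The feature names and formulas below are illustrative simplifications; in the real BEAST-GB, theory-derived features like these, plus objective task descriptors, form the input columns of a gradient-boosted tree model (XGBoost).

```python
# Illustrative sketch only: computing toy "behavioral features" for a choice
# between two gambles. Feature definitions are hypothetical simplifications
# of the BEAST strategies described in the article.

def behavioral_features(option_a, option_b):
    """Each option is a list of (outcome, probability) pairs."""
    def expected_value(opt):
        return sum(x * p for x, p in opt)

    def worst_case(opt):
        return min(x for x, _ in opt)

    # Chance that A's realized outcome matches or beats B's best outcome:
    # a crude proxy for "minimizing the chances of immediate regret".
    best_b = max(x for x, _ in option_b)
    p_a_no_regret = sum(p for x, p in option_a if x >= best_b)

    return {
        "ev_diff": expected_value(option_a) - expected_value(option_b),
        # Hedging against worst outcomes:
        "worst_case_diff": worst_case(option_a) - worst_case(option_b),
        "p_a_no_regret": p_a_no_regret,
    }

# Option A: 50% chance of 100, else 0. Option B: 46 for sure.
feats = behavioral_features([(100, 0.5), (0, 0.5)], [(46, 1.0)])
print(feats)  # {'ev_diff': 4.0, 'worst_case_diff': -46, 'p_a_no_regret': 0.5}
```

A feature vector like this, computed for every choice task in a dataset, is what a gradient-boosting model can then learn to map onto observed choice rates.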
Notably, BEAST-GB won the CPC18 choice prediction competition, capturing 93% of the predictable variation in the competition data and 96% in follow-up tests using a dataset 40 times larger.
“BEAST-GB outperformed dozens of mainstream behavioral models and purely data-driven machine learning,” said Plonsky.
“With just 2% of the training data, it already beat a deep neural network trained on all the training data. The model even accurately predicts choices people make in new experiments it has never seen, implying it captures general human choice patterns. Finally, we used it to improve the underlying interpretable behavioral theory, so it enhances our ability to explain, not only predict, human decision making.”
This recent work highlights the promise of machine learning models that also draw from behavioral science for predicting people’s decisions and responses in real-world scenarios. In the future, BEAST-GB and other similar models could guide the design of new large-scale interventions aimed at improving people’s decisions via nudges, incentives or other behavioral science-based strategies.
Plonsky and his colleagues eventually plan to collaborate with policymakers and other parties involved in the design or implementation of behavioral science initiatives. This would allow them to test their model “in the wild,” validating its potential in real-world settings, while also yielding insight that could inform its further advancement.
“Other recent publications have suggested that human decision-making and other behaviors can be very effectively predicted using advanced data-driven machine learning methods like large language models tuned on large behavioral data,” added Plonsky.
“We now plan to continue investigating when and how BEAST-like theory can enhance such data-driven methods in predicting behavior. Specifically, we plan to extend our domain of research by including natural-language decision problems, more aligned with the real world.”
Written by Ingrid Fadelli, edited by Sadie Harley, and fact-checked and reviewed by Robert Egan.
More information:
Ori Plonsky et al, Predicting human decisions with behavioural theories and machine learning, Nature Human Behaviour (2025). DOI: 10.1038/s41562-025-02267-6.
© 2025 Science X Network
Citation: BEAST-GB model combines machine learning and behavioral science to predict people's decisions (2025, August 14), retrieved 14 August 2025 from https://techxplore.com/news/2025-08-beast-gb-combines-machine-behavioral.html
Tech
Anthropic Plots Major London Expansion
Anthropic is moving into a new London office as it seeks to expand its research and commercial footprint in Europe, setting up a scrap between the leading AI labs for talent emerging from British universities.
The company, which opened its first London office in 2023, is moving to the same neighborhood as Google DeepMind, OpenAI, Meta, Wayve, Isomorphic Labs, Synthesia, and various AI research institutions.
Anthropic’s new, 158,000-square-foot office footprint will have space enough for 800 people—four times its current head count—giving it room to potentially outscale OpenAI, which itself recently announced an expansion in London.
“Europe’s largest businesses and fastest-growing startups are choosing Claude, and we’re scaling to match,” says Pip White, head of EMEA North at Anthropic. “The UK combines ambitious enterprises and institutions that understand what’s at stake with AI safety with an exceptional pool of AI talent—we want to be where all of that comes together.”
UK government officials had reportedly attempted to coax Anthropic into expanding its presence in London after the company recently fell out with the US administration. Anthropic refused to allow its models to be used in mass surveillance and autonomous weapon systems, leading to an ongoing legal battle between the AI lab and the Pentagon.
As part of the expansion, Anthropic says it will deepen its work with the UK’s AI Security Institute, a government body that this week published a risk evaluation of its latest model, Claude Mythos Preview. According to Politico, the UK government is one of few across Europe to have been granted access to the model, which Anthropic has released to only select parties, citing concerns over the potential for its abuse by cybercriminals.
The increasing concentration of AI companies in the same London district is an important step in creating a pathway for research to translate into AI products, says Geraint Rees, vice-provost at University College London, whose campus is around the corner from Anthropic’s new office.
“This cluster didn’t emerge from a planning document. It grew because serious researchers and companies understand that proximity isn’t a nice-to-have,” he said last month, speaking at an event attended by WIRED. “That’s how the innovation system actually works. It’s not a clean, linear transfer from lab to market. It’s messier, richer, more human than that.”
Tech
LG’s High-End Soundbar System Makes My Living Room Feel Like a Home Theater
Setup was relatively quick and painless. You just have to unbox four speakers, a soundbar, and a subwoofer, attach their power cables, and plug in everything. Pairing happens through the LG ThinQ app, which allows you to set up the Sound Suite system and tune it to exactly where you’re sitting in the room using your cell phone’s microphone.
You can also set up each speaker to play music and group it with any other LG smart speakers you might have around your home, like the more affordable $250 M5 bookshelf speaker, to create a whole-home system.
Once all the components were synced, I plugged the soundbar into the C5 OLED via HDMI, and was able to easily control everything via the TV remote’s volume and mute buttons. More in-depth settings had to happen in the app, but if you’re anything like me, this won’t become a regular chore. You’ll set it how you like it once and move on. While the pairing functionality with the LG TV was nice, it’s not required: the eARC port lets the Sound Suite work perfectly with any modern TV.
The bar itself runs the show, with a black-and-white display on the far left that shows your mode and volume, among other settings. In the center of the bar and below each speaker, an LED light strip also shows the volume when you change it, which is a nice touch.
Getting Musical
Photograph: Parker Hall
The sound of the LG Sound Suite is full and cinematic, thanks in no small part to the extra dedicated speakers. Most competitors lack dedicated front left and right speakers, simply using the soundbar itself for those channels. As such, the width and depth of the soundstage were greater than those of most competitors I’ve tried, with only Samsung’s flagship HW-Q990F as a real contender. Even the Samsung lacked the lower-frequency audio quality that these LG speakers provide.
Tech
Cyber Essentials closes the MFA loophole but leaves some organisations adrift | Computer Weekly
On 27 April, version 3.3 of Cyber Essentials, the government-backed security certification scheme, takes effect, and multi-factor authentication (MFA) becomes a pass-or-fail requirement for the first time.
If a cloud service your organisation uses offers MFA and you have not enabled it, you fail. No discretion, no partial credit, no route to remediate inside the assessment cycle.
This is the right call. I want to say that clearly, because what follows is a problem with the implementation, not the policy. MFA is the single most effective control against credential-based attacks, and the scheme has needed to stop tolerating its absence for a long time. The National Cyber Security Centre (NCSC), the part of GCHQ that developed Cyber Essentials, and the certification body IASME have got this decision right.
But in the assessments we have conducted this year, I have seen two organisations that will hit a wall on 27 April, and I do not think they are unusual.
Train company could not deploy MFA
The first is a train operating company in the South East. Station operations rooms run on shared terminals where staff rotate through shifts in time-critical conditions. A transport union raised formal concerns that MFA would introduce delays at the keyboard that could affect train operations and, in their view, the safety of train movements.
The company listened and chose not to enable MFA in those environments. Under v3.2 they passed, with the relevant questions marked as non-compliant but not fatal. Under Cyber Essentials v3.3 they will fail.
Charity run by volunteers faces MFA hurdle
The second is a nationally known charity with hundreds of high street shops. The shops are staffed largely by volunteers, many of whom work a few hours a week, and staff turnover is high.
The cost and management overhead of enrolling every volunteer onto MFA, using personal phones they may not have and authenticator apps they would not keep, was considered prohibitive. So MFA was never switched on. Same story: they passed under v3.2. Under v3.3 they fail.
Neither of these organisations is ignoring security. Both made considered decisions based on how their people actually work. The problem is not that they do not want to comply. It is that the standard toolkit of MFA methods, including SMS codes, authenticator apps on personal phones, and push notifications, does not fit a six-person shared terminal that has to be available in seconds, or a volunteer workforce that changes every week.
FIDO2 could offer solutions
The frustrating part is that there is a solution, and it is already proven in healthcare, manufacturing and retail. FIDO2 authentication delivered through NFC badge-taps lets a staff member authenticate in under two seconds: tap a badge, enter a short PIN, session opens.
It satisfies the MFA requirement by combining possession of the badge with knowledge of the PIN. It is faster than typing a password. Crucially, it is compliant, because each badge is enrolled as that individual’s unique FIDO2 credential, so the Cyber Essentials requirement for unique user accounts is met. Shared keys or shared PINs would not work. Individual badges do.
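The possession-plus-knowledge logic can be illustrated with a toy challenge-response sketch. To be clear, this is not the FIDO2/WebAuthn protocol: real FIDO2 uses public-key cryptography and the CTAP standard, whereas this hypothetical example uses a shared-secret HMAC purely to show how a per-badge secret (possession) and a short PIN (knowledge) combine into two factors, with each badge tied to one individual.

```python
# Illustrative sketch only -- NOT real FIDO2/CTAP. A toy challenge-response
# showing the two factors a badge-tap combines: possession of a per-badge
# secret key, and knowledge of a short PIN.
import hashlib
import hmac
import secrets

class Badge:
    """Stands in for the secure element on one individual's NFC badge."""
    def __init__(self, key, pin):
        self._key = key
        self._pin_hash = hashlib.sha256(pin.encode()).digest()

    def respond(self, challenge, pin):
        # Knowledge factor: the badge only answers if the PIN is correct.
        if hashlib.sha256(pin.encode()).digest() != self._pin_hash:
            return None
        # Possession factor: only this badge holds this key.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

def verify_login(server_key, badge, pin):
    challenge = secrets.token_bytes(32)  # fresh challenge per login
    response = badge.respond(challenge, pin)
    expected = hmac.new(server_key, challenge, hashlib.sha256).digest()
    return response is not None and hmac.compare_digest(response, expected)

key = secrets.token_bytes(32)
alice_badge = Badge(key, pin="4821")
print(verify_login(key, alice_badge, "4821"))  # correct badge + PIN: True
print(verify_login(key, alice_badge, "0000"))  # wrong PIN: False
```

Because each badge holds its own enrolled credential, a login maps back to one named individual, which is what lets this pattern satisfy both the MFA requirement and the unique-user-accounts requirement at a shared terminal.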
Need for better guidance
v3.3 explicitly recognises FIDO2 authenticators and passkeys as valid MFA methods. The compliance path is clear. What is missing is anyone telling the organisations most affected that this path exists.
That is the gap that must close. The NCSC and IASME have made the right policy decision; the scheme would be weaker without it.
But implementation guidance for shared-terminal, shift-based and high-turnover environments is thin, and these organisations are running out of time to find their way through it. Many of them hold Cyber Essentials because it is required for government contracts or in their supply chains; losing certification has a direct commercial cost.
The answer is not to soften the requirement. The answer is to make sure no one fails for lack of information about how to meet it.
Jonathan Krause is Founder and Managing Director of Forensic Control