Tech
The Pepsi Man Is Coming to Save Samsung From Boring Design
Samsung has one of the biggest product lineups of any tech brand, yet when it comes to design, it’s consistently seen as an “also-ran.” While other companies, such as Nothing, have forged distinctive and instantly recognizable design languages, Samsung has found itself behind in the style stakes. When you’ve got Apple as one of your biggest competitors, that’s not a great position to be in.
That’s not to say there haven’t been improvements in the last decade, along with occasional flashes of promise—most notably in its collaborations with external designers, like the Bouroullec brothers, who fashioned the Serif TV for the South Korean company. But that hasn’t stopped complaints of boring and unoriginal design, both internally and externally, or an inertia even in areas where it has led, which has allowed other companies to close the gap.
Being defined by performance over personality has hardly done Samsung’s bottom line any harm—it recently regained its lead from Apple in global smartphone market share and has been the global leader in TVs for almost two decades. But, in 2025, it finally looks like there’s a clear desire from Samsung to bridge the gap between form and function by giving design the focus it has lacked at the company for far too long.
Back in April, Samsung hired Mauro Porcini, its first ever chief design officer. Porcini has spent more than 20 years building award-winning design teams at 3M and PepsiCo, most recently leading a successful global rebrand for Pepsi—the company’s first in 14 years.
For a company as big as Samsung, this hire feels late. Apple created the same position for Jony Ive a decade ago, around the same time it was reported that innovation at Samsung was being stifled beneath layers of management. With those structural issues supposedly unpicked, Samsung now has work to do—something Porcini is keen to acknowledge.
Late to the Party
“We are in a moment of change, where the way people interact with any kind of machine or electronic device is going to be radically different in the coming years,” Porcini tells me. “These machines will change the way people live, work, and connect with each other—the way people fulfil their needs. For a company like Samsung, having design at the top, involved in the way you define the future of the portfolio based on those needs—it’s more important than ever.”
The march of AI is, of course, a helpful hook on which to hang this long-overdue move, but Yves Béhar, the founder and principal designer at Fuseproject who worked with Samsung on The Frame TV, tells me the shift has been years in the making, and that Samsung initially looked outside the company to help put the wheels in motion.
“When we started working with Samsung on The Frame [released in January 2017], the CEO at the time, HS Kim, came to us and said—look, we want to transform ourselves from a consumer technology company into an experience business,” says Béhar. “So we helped them set some principles around that, and worked on getting that message out into the business—of what it means to think about experience versus tech. This is exactly what we did with The Frame TV.”
Tech
Dealing With Hearing Loss? These Over-the-Counter Hearing Aids Could Help
If you’re spending hundreds or thousands of dollars buying an OTC hearing aid, make sure you’re getting a product that offers a sustainable long-term solution to your hearing loss needs. Aside from the obvious things like sound quality, take a few minutes to look into these specs.
What size and style works best for you? Most hearing aids on the market are classified as either behind-the-ear (BTE) or in-the-ear (ITE). BTE hearing aids are probably what you think of when you picture a hearing aid, consisting of a plastic case that contains the electronics, a thin cable that goes over the ear and inside the canal, and a tiny speaker known as a “receiver,” which sends boosted audio from a person’s surroundings into their ear. By contrast, ITE models are self-contained units that look like a standard pair of wireless earbuds. In-the-ear hearing aids are popular for their incognito aesthetic, and they tend to be a lot easier to pop in and out than their behind-the-ear counterparts. Still, contemporary BTE hearing aids are significantly smaller than the ones “back in the day.” It just comes down to what fits you most comfortably.
Replaceable or rechargeable batteries? Much like wireless earbuds, most OTC hearing aids are equipped with rechargeable batteries and (usually) a portable charging case for easy transport. If you take the case’s battery life into account, you’ll find most OTC models last about a week before you need to connect to a power source. Without the case, rechargeable hearing aids offer anywhere from 10 to 24 hours of battery life per charge (but this goes down by a few hours if you’re using them to stream via Bluetooth). Replaceable batteries, such as those found on the Sony CRE-C10, can last for 70 hours or more before they need to be swapped out. Sounds great, but it means having spares on hand and wrestling with tiny cells, which can be difficult for people with dexterity problems.
Are you comfortable making adjustments? While prescription hearing aids are fitted in-office by a licensed hearing care specialist, OTC devices are self-fitting. In most cases, OTC hearing aid users are expected to be able to tune the devices to their ears, usually with the help of a smartphone app. It’s certainly nice to make your own adjustments on the fly, but it may cost you in the way of personalized care.
What’s the company’s customer support like? If only you could count on quality support from every hearing aid manufacturer! Unfortunately, OTC hearing aid companies are just that—companies. There’s no “standard” for customer service in the industry. Companies like Jabra offer patients comprehensive support, but other brands may leave you on your own.
Is there a trial run? If you’re not happy with your hearing aids, you’ll probably want to have the option to return them without writing all that money off as a sunk cost. Most states require manufacturers to provide patients with a minimum trial period, but I recommend playing it safe by seeking out this info before buying.
What about warranties? Just as important as a reasonable trial period is the inclusion of a comprehensive manufacturer’s warranty. Most brands cover manufacturing defects for up to a year, but it goes without saying that the longer the coverage period, the better the deal. No matter which OTC hearing aid you end up with, make sure the warranty covers loss, damage, and wear and tear.
Tech
Gear News of the Week: There’s Yet Another New AI Browser, and Fujifilm Debuts the X-T30 III
An increasingly popular solution for battery-powered security cameras is the inclusion of a solar panel to keep the battery topped up, enabling you to install the camera and potentially never touch it again. Both Wyze and TP-Link just revealed interesting solar-powered cameras this week. Let’s talk about Wyze first.
The Wyze Solar Cam Pan ($80) is a 2K outdoor security camera that can pan 360 degrees and tilt 70 degrees. It is IP65-rated, easy to mount, and sports a small solar panel that Wyze reckons can keep the camera running on just one hour of sunlight a day (we shall see as I test through the gray depths of a Scottish winter). The Solar Cam Pan also features AI-powered person tracking, two-way audio, color night vision, a spotlight, and a siren, though you need a subscription, starting from $3 per month, to unlock smart features and get cloud video storage.
Wyze also announced a new, impressively affordable Battery Video Doorbell ($66). We started testing Wyze cameras again recently after it beefed up its security policies, but the repeated security breaches, exposing thousands of camera feeds to other customers, may still give you pause.
Meanwhile, TP-Link is the first manufacturer to combine solar power with floodlight capability in its new Tapo C615F Kit. The similar-looking but larger Tapo C615F is another 2K camera; it pans 360 degrees, tilts 130 degrees, and, most importantly, has an adjustable 800-lumen floodlight.
TP-Link says its solar panel only needs 45 minutes of sun a day to keep the camera ticking, and it comes with a handy 13-foot cable, so you can install the solar panel in the best spot to catch those rays. The Tapo C615F ($100) is available now, and you can use the promo code 10TAPOFLDCAM to get $10 off if you’re quick. —Simon Hill
Fujifilm Updates Its X-T30 Line
Courtesy of Fujifilm
Fujifilm has released the X-T30 III, an update to the company’s entry-level, SLR-shaped mirrorless X-T30 line. The third iteration of the X-T30 pairs Fujifilm’s familiar 26-MP X-Trans APS-C sensor with the latest Fujifilm processor, the X-Processor 5. The latter means that the X-T30 III is now roughly the same as the X-M5 and X-T50 in terms of internal features. All of Fujifilm’s film simulations are available, as are the subject-recognition AF modes. Video specs also see a bump up to 6.2K 30 fps open gate, and 4K 60 fps with a 1.18X crop.
The body is nearly identical to the previous model; the size, weight, and button/dial layout are the same as on the X-T30 II. The one change is that the control dial is now a film simulation dial, with three options for custom film recipes. The X-T30 III goes on sale in November at $999 for the body, or $1,150 for the body and a new 13- to 33-mm F3.5-6.3 zoom lens (20 mm- to 50 mm-equivalent). —Scott Gilbertson
Intel’s AI Experience Stores
In time for the peak shopping season, Intel is launching a variety of “AI Experience Stores” at a few key locations around the world. We don’t know exactly what they’ll be like, but Intel says these pop-ups will include an “AI-powered shopping experience” of some kind and are based on the initial launch of the trial run store in London last year.
If it keeps that same design ethos intact, these stores will be fairly immersive experiences. There will be lots of AI-driven demos on devices from the wider Windows laptop ecosystem, presumably to help drive interest and curiosity in what PCs can do. Interestingly, it comes on the back of a significant marketing push by Microsoft for its new Windows 11 AI experiences, aimed at convincing buyers to upgrade and explaining some of the new AI features.
Here are the dates and locations for when Intel’s stores will be open. —Luke Larsen
- New York City: 1251 6th Avenue (10/29 to 11/30)
- London: 95 Oxford Street (10/30 to 11/30)
- Munich: Viktualienmarkt 6 (10/30 to 12/9)
- Paris: 14 Boulevard Poissonniere (11/4 to 11/30)
- Seoul: OPUS 407, 1318-1 Seocho-dong (10/31 to 11/30)
Tech
DeepMind introduces AI agent that learns to complete various tasks in a scalable world model
Over the past decade, deep learning has transformed how artificial intelligence (AI) agents perceive and act in digital environments, allowing them to master board games, control simulated robots and reliably tackle various other tasks. Yet most of these systems still depend on enormous amounts of direct experience—millions of trial-and-error interactions—to achieve even modest competence.
This brute-force approach limits their usefulness in the physical world, where such experimentation would be slow, costly, or unsafe.
To overcome these limitations, researchers have turned to world models—simulated environments where agents can safely practice and learn.
These world models aim to capture not just the visuals of a world, but the underlying dynamics: how objects move, collide, and respond to actions. However, while simple games like Atari and Go have served as effective testbeds, world models still fall short when it comes to representing the rich, open-ended physics of complex worlds like Minecraft or robotics environments.
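In code terms, a world model of this kind boils down to a learned transition function: given the current state and an action, it predicts the next state and a reward, so an agent can rehearse entirely inside the model. The PyTorch sketch below is a minimal illustration of that interface; the module shapes and names are invented for illustration and are not DeepMind's actual architecture.

```python
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    """Illustrative latent world model: predicts the next latent state and a
    reward from the current latent state and an action. A sketch of the
    general idea only, not DeepMind's Dreamer 4 architecture."""

    def __init__(self, latent_dim: int = 256, action_dim: int = 16):
        super().__init__()
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 512),
            nn.ELU(),
            nn.Linear(512, latent_dim),
        )
        self.reward_head = nn.Linear(latent_dim, 1)

    def step(self, latent, action):
        # One imagined environment step: no real game or robot is touched.
        next_latent = self.dynamics(torch.cat([latent, action], dim=-1))
        reward = self.reward_head(next_latent)
        return next_latent, reward

# An agent can "practice" by chaining imagined steps instead of real ones.
model = TinyWorldModel()
latent = torch.zeros(1, 256)
for _ in range(10):
    action = torch.randn(1, 16)  # stand-in for what a policy would output
    latent, reward = model.step(latent, action)
```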
Researchers at Google DeepMind recently developed Dreamer 4, a new artificial agent capable of learning complex behaviors entirely within a scalable world model, given a limited set of pre-recorded videos.
The new model, presented in a paper published on the arXiv preprint server, was the first AI agent to obtain diamonds in Minecraft without practicing in the actual game at all. This remarkable achievement highlights the possibility of using Dreamer 4 to train successful AI agents purely in imagination—with important implications for the future of robotics.
“We as humans choose actions based on a deep understanding of the world and anticipate potential outcomes in advance,” Danijar Hafner, first author of the paper, told Tech Xplore.
“This ability requires an internal model of the world and allows us to solve new problems very quickly. In contrast, previous AI agents usually learn through brute-force with vast amounts of trial-and-error. But that’s infeasible for applications such as physical robots that can easily break.”
Some of the AI agents developed at DeepMind over the past few years have already achieved tremendous success at games such as Go and Atari by training in small world models. However, the world models these agents relied on failed to capture the rich physical interactions of more complex worlds, such as the Minecraft videogame.
On the other hand, “Video models such as Veo and Sora are rapidly improving towards generating realistic videos of very diverse situations,” said Hafner.
“However, they are not interactive, and their generations are too slow, so they cannot be used as ‘neural simulators’ to train agents inside of yet. The goal of Dreamer 4 was to train successful agents purely inside of world models that can realistically simulate complex worlds.”
Hafner and his colleagues decided to use Minecraft as a test bed for their AI agent, as it is a complex video game with endless procedurally generated worlds and long-horizon tasks that require over 20,000 consecutive mouse and keyboard actions to complete.
One of these tasks is the mining of diamonds, which requires the agent to perform a long sequence of prerequisites such as chopping trees, crafting tools, and mining and smelting ores.
Notably, the researchers wanted to train their agent purely in “imagined” scenarios instead of letting it practice in the actual game. This is analogous to how smart robots will have to learn in simulation, since they could easily break when practicing directly in the physical world. Such training requires the agent to build an internal model of the Minecraft world that captures object interactions accurately enough.
The artificial agent developed by Hafner and his colleagues is based on a large transformer model that was trained to predict future observations, actions and the rewards associated with specific situations. Dreamer 4 was trained on a fixed offline dataset containing recorded Minecraft gameplay videos collected by human players.
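Reduced to a generic recipe, that kind of pretraining is sequence prediction over logged gameplay. The sketch below shows what one simplified training step of such a model could look like: a causal transformer ingests interleaved observation embeddings and actions and is penalized for mispredicting the next observation and the reward. All names, sizes, and dataset fields here are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED, SEQ_LEN, ACT_DIM = 256, 64, 16  # illustrative sizes only

class SequenceWorldModel(nn.Module):
    """Sketch of a causal transformer that predicts the next observation
    embedding and the reward from past observations and actions."""

    def __init__(self):
        super().__init__()
        self.obs_in = nn.Linear(EMBED, EMBED)
        self.act_in = nn.Linear(ACT_DIM, EMBED)
        layer = nn.TransformerEncoderLayer(d_model=EMBED, nhead=8,
                                           batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.obs_out = nn.Linear(EMBED, EMBED)
        self.reward_out = nn.Linear(EMBED, 1)

    def forward(self, obs_emb, actions):
        x = self.obs_in(obs_emb) + self.act_in(actions)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.backbone(x, mask=mask)  # causal: no peeking at the future
        return self.obs_out(h), self.reward_out(h)

# One training step on a batch of logged gameplay (teacher forcing).
model = SequenceWorldModel()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
obs = torch.randn(8, SEQ_LEN, EMBED)        # stand-in frame embeddings
actions = torch.randn(8, SEQ_LEN, ACT_DIM)  # stand-in mouse/keyboard inputs
rewards = torch.randn(8, SEQ_LEN, 1)        # stand-in reward labels

pred_obs, pred_rew = model(obs, actions)
loss = (F.mse_loss(pred_obs[:, :-1], obs[:, 1:]) +  # predict the next frame
        F.mse_loss(pred_rew, rewards))               # and the reward signal
loss.backward()
opt.step()
```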
“After completing this training, Dreamer 4 learns to select increasingly better actions in a wide range of imagined scenarios via reinforcement learning,” said Hafner.
“Training agents inside of scalable world models required pushing the frontier of generative AI. We designed an efficient transformer architecture, and a novel training objective named shortcut forcing. These advances enabled accurate predictions while also speeding up generations by over 25x compared to typical video models.”
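The “learning inside the model” half can be pictured as a generic Dreamer-style imagination loop: freeze the trained world model, roll a policy forward inside it for a short horizon, and update the policy to increase the rewards the model predicts. The sketch below shows that pattern in its barest form; it is not the paper's algorithm, and it makes no attempt at the shortcut-forcing objective Hafner describes.

```python
import torch
import torch.nn as nn

LATENT, ACT_DIM, HORIZON = 256, 16, 15  # illustrative sizes only

# Stand-ins for a *frozen*, already-trained world model.
dynamics = nn.Sequential(nn.Linear(LATENT + ACT_DIM, LATENT), nn.ELU())
reward_head = nn.Linear(LATENT, 1)
for p in list(dynamics.parameters()) + list(reward_head.parameters()):
    p.requires_grad_(False)

# The policy is the only thing being trained, and it never touches the game.
policy = nn.Sequential(nn.Linear(LATENT, 128), nn.ELU(),
                       nn.Linear(128, ACT_DIM), nn.Tanh())
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

latent = torch.zeros(32, LATENT)   # a batch of imagined starting states
imagined_return = torch.zeros(())
for _ in range(HORIZON):           # the rollout happens entirely in imagination
    action = policy(latent)
    latent = dynamics(torch.cat([latent, action], dim=-1))
    imagined_return = imagined_return + reward_head(latent).mean()

# Nudge the policy toward actions the world model predicts will pay off.
(-imagined_return).backward()
opt.step()
```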
Dreamer 4 is the first AI agent to obtain diamonds in Minecraft when trained solely on offline data, without ever practicing its skills in the actual game. This finding highlights the agent’s ability to autonomously learn how to correctly solve complex and long-horizon tasks.
“Learning purely offline is highly relevant for training robots that can easily break when practicing in the physical world,” said Hafner. “Our work introduces a promising new approach to building smart robots that do household chores and factory tasks.”
In the initial tests performed by the researchers, the Dreamer 4 agent was found to accurately predict various object interactions and game mechanics, thus developing a reliable internal world model. The world model established by the agent outperformed the models that earlier agents relied on by a significant margin.
“The model supports real-time interactions on a single GPU, making it easy for human players to explore its dream world and test its capabilities,” said Hafner. “We find that the model accurately predicts the dynamics of mining and placing blocks, crafting simple items, and even using doors, chests, and boats.”
A further advantage of Dreamer 4 is that it achieved remarkable results despite being trained on a very small amount of action data. This is essentially video footage showing the effects of pressing different keys and mouse buttons within the Minecraft videogame.
“Instead of requiring thousands of hours of gameplay recordings with actions, the world model can actually learn the majority of its knowledge from video alone,” said Hafner.
“With only a few hundred hours of action data, the world model then understands the effects of mouse movement and key presses in a general way that transfers to new situations. This is exciting because robot data is slow to record, but the internet contains a lot of videos of humans interacting with the world, from which Dreamer 4 could learn in the future.”
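That two-stage recipe, learning dynamics from unlabeled video first and then grounding actions with a much smaller labeled set, can be sketched roughly as follows. The function names, losses, and data shapes are invented for illustration; the paper's actual procedure differs in its details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED, ACT_DIM = 128, 16  # illustrative sizes only

# A shared dynamics backbone used in both stages.
backbone = nn.GRU(EMBED, EMBED, batch_first=True)
next_frame_head = nn.Linear(EMBED, EMBED)
action_embed = nn.Linear(ACT_DIM, EMBED)

def video_only_loss(frames):
    """Stage 1: learn dynamics from unlabeled video by predicting the next
    frame embedding from the frames that came before it."""
    hidden, _ = backbone(frames)
    return F.mse_loss(next_frame_head(hidden[:, :-1]), frames[:, 1:])

def action_grounding_loss(frames, actions):
    """Stage 2: with a much smaller action-labeled set, condition the same
    backbone on mouse/keyboard inputs so it learns what those inputs do."""
    hidden, _ = backbone(frames + action_embed(actions))
    return F.mse_loss(next_frame_head(hidden[:, :-1]), frames[:, 1:])

# Plenty of unlabeled video, only a little labeled gameplay (random stand-ins).
video = torch.randn(16, 32, EMBED)
labeled_frames = torch.randn(4, 32, EMBED)
labeled_actions = torch.randn(4, 32, ACT_DIM)

loss = video_only_loss(video) + action_grounding_loss(labeled_frames,
                                                      labeled_actions)
loss.backward()
```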
This recent work by Hafner and his colleagues at DeepMind could contribute to the advancement of robotics systems, simplifying the training of the algorithms that allow them to reliably complete manual tasks in the real world.
Meanwhile, the researchers plan to further improve Dreamer 4’s world model by integrating a long-term memory component. This would ensure that the simulated worlds in which the agent is trained remain consistent over long periods of time.
“Incorporating language understanding would also bring us closer towards agents that collaborate with humans and perform tasks for them,” added Hafner.
“Finally, training the world model on general internet videos would equip the agent with common sense knowledge of the physical world and allow us to train robots in diverse imagined scenarios.”
Written by Ingrid Fadelli, edited by Sadie Harley, and fact-checked and reviewed by Robert Egan.
More information:
Danijar Hafner et al., “Training Agents Inside of Scalable World Models,” arXiv (2025). DOI: 10.48550/arxiv.2509.24527