Sodium-based battery design maintains performance at room and subzero temperatures
All-solid-state batteries are a safe, powerful way to run EVs and electronics and to store electricity from the energy grid, but the lithium used to build them is rare, expensive and can be environmentally devastating to extract.
Sodium is an inexpensive, plentiful, less destructive alternative, but the all-solid-state batteries it creates currently don’t work as well at room temperature.
“It’s not a matter of sodium versus lithium. We need both. When we think about tomorrow’s energy storage solutions, we should imagine the same gigafactory can produce products based on both lithium and sodium chemistries,” said Y. Shirley Meng, Liew Family Professor in Molecular Engineering at the UChicago Pritzker School of Molecular Engineering (UChicago PME). “This new research gets us closer to that ultimate goal while advancing basic science along the way.”
A paper from Meng’s lab, published this week in Joule, helps rectify that problem. The research raises the benchmark for sodium-based all-solid-state batteries, demonstrating thick cathodes that retain performance from room temperature down to subzero conditions.
The research helps put sodium on a more level playing field with lithium in electrochemical performance, said first author Sam Oh of the A*STAR Institute of Materials Research and Engineering in Singapore, who was a visiting scholar at Meng’s Laboratory for Energy Storage and Conversion during the research.
How they accomplished that goal represents an advance in pure science.
“The breakthrough that we have is that we are actually stabilizing a metastable structure that has not been reported,” Oh said. “This metastable structure of sodium hydridoborate has a very high ionic conductivity, at least one order of magnitude higher than the one reported in the literature, and three to four orders of magnitude higher than the precursor itself.”

Established technique, new field
The team heated sodium hydridoborate to the point where a metastable phase starts to crystallize, then rapidly cooled it to kinetically trap that crystal structure. It’s a well-established technique, but one that had not previously been applied to solid electrolytes, Oh said.
That familiarity could, down the road, help turn this lab innovation into a real-world product.
“Since this technique is established, we are better able to scale up in the future,” Oh said. “If you are proposing something new or if there’s a need to change or establish processes, then industry will be more reluctant to accept it.”
Pairing that metastable phase with an O3-type cathode coated in a chloride-based solid electrolyte creates thick, high-areal-loading cathodes that push this design beyond previous sodium batteries. Unlike thin-cathode strategies, a thick cathode packs in less inactive material and more cathode “meat.”
“The thicker the cathode is, the theoretical energy density of the battery—the amount of energy being held within a specific area—improves,” Oh said.
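As a back-of-envelope illustration of that point (this is a general electrochemistry relation, not a formula from the paper), the theoretical areal energy density of a cathode scales directly with its active-material loading:

```latex
% General relation, not taken from the Joule paper: areal energy density
% grows with active-material areal loading m, specific capacity q, and
% average cell voltage V, while inactive overheads per unit area are fixed.
\[
E_{\mathrm{areal}} \;=\; m \cdot q \cdot \bar{V},
\qquad
\frac{\mathrm{mg}}{\mathrm{cm}^2} \cdot \frac{\mathrm{mAh}}{\mathrm{mg}} \cdot \mathrm{V}
\;=\; \frac{\mathrm{mWh}}{\mathrm{cm}^2}.
\]
```

Thickening the cathode raises the loading m while fixed per-area overheads such as current collectors and separators stay constant, which is why areal loading is the lever Oh highlights.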
The current research advances sodium as a viable alternative for batteries, a vital step toward easing the scarcity and environmental toll of lithium extraction. It is one of many steps still ahead.
“It’s still a long journey, but what we have done with this research will help open up this opportunity,” Oh said.
More information:
Jin An Sam Oh et al, Metastable sodium closo-hydridoborates for all-solid-state batteries with thick cathodes, Joule (2025). DOI: 10.1016/j.joule.2025.102130
Teaching robots to map large environments
A robot searching for workers trapped in a partially collapsed mine shaft must rapidly generate a map of the scene and identify its location within that scene as it navigates the treacherous terrain.
Researchers have recently started building powerful machine-learning models to perform this complex task using only images from the robot’s onboard cameras, but even the best models can only process a few images at a time. In a real-world disaster where every second counts, a search-and-rescue robot would need to quickly traverse large areas and process thousands of images to complete its mission.
To overcome this problem, MIT researchers drew on ideas from both recent artificial intelligence vision models and classical computer vision to develop a new system that can process an arbitrary number of images. Their system accurately generates 3D maps of complicated scenes like a crowded office corridor in a matter of seconds.
The AI-driven system incrementally creates and aligns smaller submaps of the scene, which it stitches together to reconstruct a full 3D map while estimating the robot’s position in real time.
Unlike many other approaches, their technique does not require calibrated cameras or an expert to tune a complex system implementation. The simpler nature of their approach, coupled with the speed and quality of the 3D reconstructions, would make it easier to scale up for real-world applications.
Beyond helping search-and-rescue robots navigate, this method could be used to make extended reality applications for wearable devices like VR headsets or enable industrial robots to quickly find and move goods inside a warehouse.
“For robots to accomplish increasingly complex tasks, they need much more complex map representations of the world around them. But at the same time, we don’t want to make it harder to implement these maps in practice. We’ve shown that it is possible to generate an accurate 3D reconstruction in a matter of seconds with a tool that works out of the box,” says Dominic Maggio, an MIT graduate student and lead author of a paper on this method.
Maggio is joined on the paper by postdoc Hyungtae Lim and senior author Luca Carlone, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), principal investigator in the Laboratory for Information and Decision Systems (LIDS), and director of the MIT SPARK Laboratory. The research will be presented at the Conference on Neural Information Processing Systems.
Mapping out a solution
For years, researchers have been grappling with an essential element of robotic navigation called simultaneous localization and mapping (SLAM). In SLAM, a robot recreates a map of its environment while orienting itself within the space.
Traditional optimization methods for this task tend to fail in challenging scenes, or they require the robot’s onboard cameras to be calibrated beforehand. To avoid these pitfalls, researchers train machine-learning models to learn this task from data.
While they are simpler to implement, even the best models can only process about 60 camera images at a time, making them infeasible for applications where a robot needs to move quickly through a varied environment while processing thousands of images.
To solve this problem, the MIT researchers designed a system that generates smaller submaps of the scene instead of the entire map. Their method “glues” these submaps together into one overall 3D reconstruction. The model is still only processing a few images at a time, but the system can recreate larger scenes much faster by stitching smaller submaps together.
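In outline, the approach might look like the sketch below. The names here are illustrative; the article doesn’t describe the system’s actual interfaces, so `reconstruct` stands in for the learned model that turns a short burst of frames into a point cloud, and `align` for the step that estimates how one submap maps into the previous one.

```python
import numpy as np

def stitch_submaps(frames, reconstruct, align, chunk=60, overlap=10):
    """Hypothetical sketch of chunked submap stitching (not the paper's API).

    reconstruct: list of frames -> (N, 3) point cloud for that chunk.
    align: (current_submap, previous_submap) -> 4x4 homogeneous transform
           mapping current-submap coordinates into previous-submap coordinates,
           estimated from the frames the two chunks share.
    """
    step = chunk - overlap                       # consecutive chunks overlap
    world_points, prev, T = [], None, np.eye(4)  # T: this submap -> global frame
    for start in range(0, len(frames), step):
        submap = reconstruct(frames[start:start + chunk])
        if prev is not None:
            T = T @ align(submap, prev)          # chain into the global frame
        homog = np.c_[submap, np.ones(len(submap))]
        world_points.append((homog @ T.T)[:, :3])
        prev = submap
    return np.vstack(world_points)
```

The model still only ever sees `chunk` frames at a time, but the stitched map can cover arbitrarily long trajectories.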
“This seemed like a very simple solution, but when I first tried it, I was surprised that it didn’t work that well,” Maggio says.
Searching for an explanation, he dug into computer vision research papers from the 1980s and 1990s. Through this analysis, Maggio realized that errors in the way the machine-learning models process images made aligning submaps a more complex problem.
Traditional methods align submaps by applying rotations and translations until they line up. But these new models can introduce ambiguity into the submaps themselves, which makes them harder to align. For instance, a 3D submap of one side of a room might have walls that are slightly bent or stretched. Simply rotating and translating these deformed submaps to align them doesn’t work.
“We need to make sure all the submaps are deformed in a consistent way so we can align them well with each other,” Carlone explains.
A more flexible approach
Borrowing ideas from classical computer vision, the researchers developed a more flexible, mathematical technique that can represent all the deformations in these submaps. By applying mathematical transformations to each submap, this more flexible method can align them in a way that addresses the ambiguity.
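The article doesn’t say which transformation family the team settled on, but a classical computer-vision example of relaxing pure rotation-plus-translation is the Umeyama similarity fit, which adds a uniform scale factor that can absorb a global stretch between two submaps. A minimal sketch:

```python
import numpy as np

def umeyama(src, dst):
    """Least-squares similarity transform (Umeyama, 1991): scale s, rotation R,
    translation t minimizing ||dst - (s * src @ R.T + t)||.
    src, dst: (N, 3) arrays of matched points from two submaps."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    U, D, Vt = np.linalg.svd(dc.T @ sc)           # SVD of the cross-covariance
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                              # rule out reflections
    R = U @ S @ Vt
    s = (D * np.diag(S)).sum() / (sc ** 2).sum()  # optimal uniform scale
    t = mu_d - s * R @ mu_s
    return s, R, t

# A purely rigid fit would have to treat a uniform 10% stretch as error;
# the similarity fit recovers it exactly on synthetic data:
rng = np.random.default_rng(0)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
a = rng.normal(size=(100, 3))                     # points seen in submap A
b = 1.1 * a @ Rz.T + np.array([0.5, -2.0, 1.0])   # same points, stretched, in B
s, R, t = umeyama(a, b)
print(np.isclose(s, 1.1), np.allclose(R, Rz))     # True True
```

The deformations the MIT system handles are richer than a single global scale, but the principle is the same: estimate a transform family wide enough to make the submaps consistent before gluing them together.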
Based on input images, the system outputs a 3D reconstruction of the scene and estimates of the camera locations, which the robot would use to localize itself in the space.
“Once Dominic had the intuition to bridge these two worlds — learning-based approaches and traditional optimization methods — the implementation was fairly straightforward,” Carlone says. “Coming up with something this effective and simple has potential for a lot of applications.”
Their system was faster and had less reconstruction error than other methods, without requiring special cameras or additional tools to process data. The researchers generated close-to-real-time 3D reconstructions of complex scenes, like the inside of the MIT Chapel, using only short videos captured on a cell phone.
The average error in these 3D reconstructions was less than 5 centimeters.
In the future, the researchers want to make their method more reliable for especially complicated scenes and work toward implementing it on real robots in challenging settings.
“Knowing about traditional geometry pays off. If you understand deeply what is going on in the model, you can get much better results and make things much more scalable,” Carlone says.
This work is supported, in part, by the U.S. National Science Foundation, U.S. Office of Naval Research, and the National Research Foundation of Korea. Carlone, currently on sabbatical as an Amazon Scholar, completed this work before he joined Amazon.
Could a ‘gray swan’ event bring down the AI revolution? Here are 3 risks we should be preparing for
The term “black swan” refers to a shocking event on nobody’s radar until it actually happens. This has become a byword in risk analysis since a book called “The Black Swan” by Nassim Nicholas Taleb was published in 2007. A frequently cited example is the 9/11 attacks.
Fewer people have heard of “gray swans”. Derived from Taleb’s work, gray swans are rare but more foreseeable events. That is, things we know could have a massive impact but that we don’t (or won’t) adequately prepare for.
COVID was a good example: precedents for a global pandemic existed, but the world was caught off guard anyway.
Although he sometimes uses the term, Taleb doesn’t appear to be a big fan of gray swans. He’s previously expressed frustration that his concepts are often misused, which can lead to sloppy thinking about the deeper issues of truly unforeseeable risks.
But it’s hard to deny there is a spectrum of predictability, and it’s easier to see some major shocks coming. Perhaps nowhere is this more obvious than in the world of artificial intelligence (AI).
Putting our eggs in one basket
Increasingly, the future of the global economy and human thriving has become tied to a single technological story: the AI revolution. It has turned philosophical questions about risk into a multitrillion-dollar dilemma about how we align ourselves with possible futures.
US tech company Nvidia, which dominates the market for AI chips, recently surpassed US$5 trillion (about A$7.7 trillion) in market value. The “Magnificent Seven” US tech stocks—Amazon, Alphabet (Google), Apple, Meta, Microsoft, Nvidia and Tesla—now make up about 40% of the S&P 500 stock index.
The impact of a collapse for these companies—and a stock market bust—would be devastating at a global level, not just financially but also in terms of dashed hopes for progress.
AI’s gray swans
There are three broad categories of risk—beyond the economic realm—that could bring the AI euphoria to an abrupt halt. They’re gray swans because we can see them coming but arguably don’t (or won’t) prepare for them.
1. Security and terror shocks
AI’s ability to generate code, malicious plans and convincing fake media makes it a force multiplier for bad actors. Cheap, open models could help design drone swarms, toxins or cyber attacks. Deepfakes could spoof military commands or spread panic through fake broadcasts.
Arguably, the closest of these risks to a “white swan”—a foreseeable risk with relatively predictable consequences—stems from China’s aggression toward Taiwan.
The world’s biggest AI firms depend heavily on Taiwan’s semiconductor industry for the manufacture of advanced chips. Any conflict or blockade would freeze global progress overnight.
2. Legal shocks
Some AI firms have already been sued for allegedly using text and images scraped from the internet to train their models.
One of the best-known examples is the ongoing case of The New York Times versus OpenAI, but there are many similar disputes around the world.
If a major court were to rule that such use counts as commercial exploitation, it could unleash enormous damages claims from publishers, artists and brands.
A few landmark legal rulings could force major AI companies to press pause on developing their models further—effectively halting the AI build-out.
3. One breakthrough too many: innovation shocks
Innovation is usually celebrated, but for companies investing in AI, it could be fatal. New AI technology that autonomously manipulates markets (or even news that one is already doing so) would make current financial security systems obsolete.
And an advanced, open-source, free AI model could easily vaporize the profits of today’s industry leaders. We got a glimpse of this possibility in January’s DeepSeek dip, when details about a relatively cheaper, more efficient AI model developed in China caused US tech stocks to plummet.
Why we struggle to prepare for gray swans
Risk analysts, particularly in finance, often talk in terms of historical data. Statistics can give a reassuring illusion of consistency and control. But the future doesn’t always behave like the past.
The wise among us apply reason to carefully confirmed facts and are skeptical of market narratives.
Deeper causes are psychological: our minds encode things efficiently, often relying on one symbol to represent very complex phenomena.
It takes us a long time to remodel our representations of the world and accept that a looming risk is worth acting on—as we’ve seen with the world’s slow response to climate change.
How can we deal with gray swans?
Staying aware of risks is important. But what matters most isn’t prediction. We need to design for a deeper sort of resilience that Taleb calls “antifragility”.
Taleb argues systems should be built to withstand—or even benefit from—shocks, rather than rely on perfect foresight.
For policymakers, this means ensuring regulation, supply chains and institutions are built to survive a range of major shocks. For individuals, it means diversifying our bets, keeping options open and resisting the illusion that history can tell us everything.
Above all, the biggest problem with the AI boom is its speed. It is reshaping the global risk landscape faster than we can chart its gray swans. Some may collide and cause spectacular destruction before we can react.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The Razer Blade 14 Is Still One of the Best Compact Gaming Laptops
The OLED looks great, and one of the big benefits of OLED is HDR in gaming, thanks to the incredible contrast that comes from being able to turn individual pixels off. OLED isn’t known for brightness, but that has lately improved on laptops and external monitors. The OLED display on the Lenovo Legion 7i Gen 10, for example, can be cranked up to over 1,000 nits, creating an impressive HDR effect. The Razer Blade 14, however, maxes out at 620 nits in HDR and 377 nits in SDR. Because of that, I could hardly tell HDR was even turned on. It’s still a pretty screen, and OLED has other benefits over IPS panels, including faster response times, less motion blur, and higher contrast.
Unfortunately, the Razer Blade 14’s OLED panel is not as colorful as the one I tested on the Razer Blade 16, with a color accuracy of 1.3 and 86 percent coverage of the AdobeRGB color space. Also, the 120-Hz refresh rate is standard for OLED laptops, but you can get 240-Hz speeds on laptops that use IPS, like the Alienware 16X Aurora, which happens to be a much cheaper device.
The Razer Blade 14’s biggest competition is the ROG Zephyrus G14. I haven’t tested the latest model yet, but it’s a laptop we’ve liked for years, and it often goes on sale for less than the Blade 14. The only real difference is that the Blade 14 uses a more powerful AMD processor, the Ryzen AI 9 365. Not only does it perform better in anything CPU-intensive, such as certain games and creative applications, but it’s also a more efficient chip.
That leads to some improved battery life—at least, better than your average gaming laptop. I got 10 hours and 19 minutes in a local video playback test, which is about the most you can expect from the device. On the other hand, Asus offers higher-powered configurations of the Zephyrus G14, including one with the more powerful Ryzen AI 9 HX.
The RTX 5070 Takes Charge
Bad news: The RAM is no longer user-upgradeable on the Razer Blade 14, so you’ll have to configure it up front with what you need. My review unit had 32 GB, but you can also choose either 16 GB or 64 GB. Because it’s soldered, the memory speeds are faster. As for internal storage, you still get one open M.2 slot to expand space if you need it, supporting up to 4 TB.