Tech
Quantum computer chips clear major manufacturing hurdle
UNSW Sydney nano-tech startup Diraq has shown its quantum chips aren’t just lab-perfect prototypes—they also hold up in real-world production, maintaining the 99% accuracy needed to make quantum computers viable.
Diraq, a pioneer of silicon-based quantum computing, achieved this feat by teaming up with the European nanoelectronics institute Interuniversity Microelectronics Center (imec). Together they demonstrated the chips worked just as reliably coming off a semiconductor chip fabrication line as they do in the experimental conditions of a research lab at UNSW.
UNSW Engineering Professor Andrew Dzurak, who is the founder and CEO of Diraq, said up until now it hadn’t been proven that the processors’ lab-based fidelity—meaning accuracy in the quantum computing world—could be translated to a manufacturing setting.
“Now it’s clear that Diraq’s chips are fully compatible with manufacturing processes that have been around for decades.”
In a paper published in Nature, the teams report that Diraq-designed, imec-fabricated devices achieved over 99% fidelity in operations involving two quantum bits—or “qubits.”
The result is a crucial step toward Diraq’s quantum processors achieving utility scale, the point at which a quantum computer’s commercial value exceeds its operational cost. This is the key metric set out in the Quantum Benchmarking Initiative, a program run by the United States’ Defense Advanced Research Projects Agency (DARPA) to gauge whether Diraq and 17 other companies can reach this goal.
Utility-scale quantum computers are expected to be able to solve problems that are out of reach of the most advanced high-performance computers available today. But breaching the utility-scale threshold requires storing and manipulating quantum information in millions of qubits to overcome the errors associated with the fragile quantum state.
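To see why the 99% figure and the push toward millions of error-corrected qubits go together, a rough back-of-the-envelope calculation helps. The sketch below is not taken from the Nature paper; it simply assumes independent gate errors and illustrative gate counts to show how quickly uncorrected faults accumulate.

```python
# Back-of-the-envelope sketch, assuming independent errors per gate; the gate
# counts are illustrative, not figures from the Nature paper.
gate_fidelity = 0.99

for n_gates in (10, 100, 1_000, 10_000):
    p_no_error = gate_fidelity ** n_gates
    print(f"{n_gates:>6} gates -> P(no error) = {p_no_error:.2e}")

# Useful quantum algorithms need far more gates than an uncorrected 99% process
# can survive, which is why high per-gate fidelity must be paired with error
# correction spread across many physical qubits.
```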
“Achieving utility scale in quantum computing hinges on finding a commercially viable way to produce high-fidelity quantum bits at scale,” said Prof. Dzurak.
“Diraq’s collaboration with imec makes it clear that silicon-based quantum computers can be built by leveraging the mature semiconductor industry, which opens a cost-effective pathway to chips containing millions of qubits while still maximizing fidelity.”
Silicon is emerging as the front-runner among materials being explored for quantum computers—it can pack millions of qubits onto a single chip and works seamlessly with today’s trillion-dollar microchip industry, making use of the methods that put billions of transistors onto modern computer chips.
Diraq has previously shown that qubits fabricated in an academic laboratory can achieve high fidelity when performing two-qubit logic gates, the basic building block of future quantum computers. However, it was unclear whether this fidelity could be reproduced in qubits manufactured in a semiconductor foundry environment.
“Our new findings demonstrate that Diraq’s silicon qubits can be fabricated using processes that are widely used in semiconductor foundries, meeting the threshold for fault tolerance in a way that is cost-effective and industry-compatible,” Prof. Dzurak said.
Diraq and imec previously showed that qubits manufactured using CMOS processes—the same technology used to build everyday computer chips—could perform single-qubit operations with 99.9% accuracy. But more complex operations using two qubits that are critical to achieving utility scale had not yet been demonstrated.
“This latest achievement clears the way for the development of a fully fault-tolerant, functional quantum computer that is more cost effective than any other qubit platform,” Prof. Dzurak said.
More information:
Paul Steinacker et al, Industry-compatible silicon spin-qubit unit cells exceeding 99% fidelity, Nature (2025). DOI: 10.1038/s41586-025-09531-9. www.nature.com/articles/s41586-025-09531-9
Citation:
Quantum computer chips clear major manufacturing hurdle (2025, September 24)
retrieved 24 September 2025
from https://techxplore.com/news/2025-09-quantum-chips-major-hurdle.html
Tech
New algorithm enhances Doppler resolution of unmanned vehicle radars
A research team has developed an extrapolation-based Doppler resolution enhancement algorithm for frequency modulated continuous wave (FMCW) radars. The algorithm improves system performance, offering an advance over existing ultra-high-resolution techniques.
The findings are published in the Journal of Electrical Engineering & Technology. The team was led by Sang-dong Kim and Bong-seok Kim, affiliated with the DGIST Division of Mobility Technology, in collaboration with a team led by Professor Youngdoo Choi, affiliated with the Republic of Korea Naval Academy (ROKNA).
Improving radar accuracy without extra hardware
This research introduces a technology that improves radar detection accuracy without the need for additional complex computations or hardware. The technology is expected to contribute to enhancing radar system performance on various intelligent unmanned platforms such as unmanned aerial vehicles (UAVs), unmanned ships, and autonomous vehicles.
Conventional radar systems analyze the Doppler effect to determine the velocity of a target, but the fast Fourier transform (FFT)-based approach has limited resolution (i.e., accuracy of velocity discrimination). To address this, the joint DGIST–ROKNA research team applied a signal extrapolation technique and proposed a new algorithm that enhances Doppler resolution without extending the observation time.
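To make the idea concrete, the sketch below applies a generic linear-prediction (autoregressive) extrapolation to a simulated slow-time signal and compares the resulting spectrum with a plain FFT of the original window. It illustrates the general principle only, not the team's published algorithm, and every parameter is an assumption.

```python
# Illustrative sketch of the general principle only -- the team's published
# extrapolation algorithm is not reproduced here, and every parameter below
# (PRF, sample count, AR model order, target Doppler frequencies) is assumed.
import numpy as np

def ar_coeffs(x, order):
    """Least-squares forward linear predictor: x[n] ~ sum_k a[k] * x[n-1-k]."""
    rows = np.column_stack([x[order - 1 - k: len(x) - 1 - k] for k in range(order)])
    a, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    return a

def extrapolate(x, a, n_extra):
    """Extend the observed slow-time sequence by running the predictor forward."""
    y = list(x)
    for _ in range(n_extra):
        y.append(sum(a[k] * y[-1 - k] for k in range(len(a))))
    return np.array(y)

prf = 1000.0                       # pulse repetition frequency, Hz (assumed)
n = 64                             # slow-time samples in the observation window
t = np.arange(n) / prf
# Two targets whose Doppler frequencies (100 Hz, 112 Hz) sit closer together
# than the conventional FFT resolution of prf / n = 15.6 Hz.
x = (np.exp(2j * np.pi * 100 * t) + np.exp(2j * np.pi * 112 * t)
     + 0.01 * (np.random.randn(n) + 1j * np.random.randn(n)))

nfft = 4096
conventional = np.abs(np.fft.fft(x, nfft))              # limited by the short window
x_ext = extrapolate(x, ar_coeffs(x, order=16), 3 * n)   # 4x longer synthetic window
enhanced = np.abs(np.fft.fft(x_ext, nfft))

freqs = np.fft.fftfreq(nfft, d=1 / prf)

def rel(spectrum, f):
    """Spectrum magnitude at frequency f, relative to the spectrum's peak."""
    return spectrum[np.argmin(np.abs(freqs - f))] / spectrum.max()

# A deeper dip at the 106 Hz midpoint means the two targets are better separated.
for name, spec in (("conventional FFT", conventional), ("extrapolated FFT", enhanced)):
    print(f"{name}: 100 Hz {rel(spec, 100):.2f}  106 Hz {rel(spec, 106):.2f}  "
          f"112 Hz {rel(spec, 112):.2f}")
```

In this toy run the extrapolated spectrum shows a much deeper dip between the two Doppler peaks than the plain FFT of the 64-sample window, even though no additional observation time was used.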
Performance gains and real-world applications
The proposed method successfully reduces the root mean square error of velocity estimation by up to 33% and decreases the target miss rate by up to 68%, representing a substantial improvement over the conventional approach. Notably, the proposed method maintains the same computational complexity level as the conventional FFT method, thereby simultaneously achieving fast processing speed and high efficiency.
This technology can effectively solve the problem of signal overlap between targets moving at similar velocities, particularly when UAVs or radar systems detect multiple objects simultaneously. It can therefore greatly enhance the ability to distinguish closely spaced targets and improve detection accuracy, marking a new milestone in the advancement of high-resolution target detection technology.
Additionally, the technology is highly regarded for its industrial applicability because it requires no additional hardware resources and features a simple computational structure that enables real-time implementation.
Sang-dong Kim, principal researcher at the Division of Mobility Technology (with a concurrent appointment in the interdisciplinary engineering major), said, "This study demonstrates an improvement in both the efficiency and precision of radar signal processing, enabling more accurate target detection without the need for additional equipment. It is expected to evolve into a key technology for defense, autonomous driving, and unmanned systems."
More information:
Youngdoo Choi et al, Doppler Resolution Enhancement Algorithm Based on Extrapolation for FMCW Radar, Journal of Electrical Engineering & Technology (2025). DOI: 10.1007/s42835-025-02453-6
Citation:
New algorithm enhances Doppler resolution of unmanned vehicle radars (2025, November 11)
retrieved 11 November 2025
from https://techxplore.com/news/2025-11-algorithm-doppler-resolution-unmanned-vehicle.html
Tech
Spray 3D concrete printing simulator boosts strength and design
Concrete 3D printing reduces both time and cost by eliminating traditional formwork, the temporary mold for casting. Yet most of today’s systems rely on extrusion-based methods, which deposit material very close to a nozzle layer by layer. This makes it impossible to print around reinforcement bars (rebars) without risk of collision, limiting both design flexibility and structural integrity of builds.
Kenji Shimada and researchers in his Computational Engineering and Robotics Laboratory (CERLAB) at Carnegie Mellon University are breaking through that limitation with a new simulation tool for spray-based concrete 3D printing.
“Spray-based concrete 3D printing is a new process with complicated physical phenomena,” said Shimada, a professor of mechanical engineering. “In this method, a modified shotcrete mixture is sprayed from a nozzle to build up on a surface, even around rebar.”
The ability to print freely around reinforcement is especially important in places like Japan and California, where earthquakes are an imminent threat and structural strength is critical.
“To make this technology viable, we must be able to predict exactly how the concrete will spray and dry into the final shape,” Shimada explained. “That’s why we developed a simulator for concrete spray 3D printing.”
The new simulator can model the viscoelastic behaviors of shotcrete mixtures, including drip, particle rebound, spread, and solidification time. This way, contractors can assess multiple printing paths based on a CAD design with the simulator to evaluate whether spray 3D printing is a feasible fabrication technique for their structure.
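As a rough illustration of that workflow, and nothing more, the toy sketch below scores a few candidate nozzle-speed profiles against a target height profile using a deliberately simple rule in which deposited height scales inversely with nozzle speed. It is not the CERLAB simulator; every name and constant is made up for illustration.

```python
# Toy sketch, not the CERLAB simulator: deposited height scales inversely with
# nozzle speed and is smeared by a Gaussian kernel to mimic spray spread. All
# names and constants (deposition_rate, spread_sigma, candidate speeds) are
# illustrative assumptions.
import numpy as np

def simulate_profile(speeds, deposition_rate=20.0, spread_sigma=2.0):
    """Predict deposited height per grid cell for one pass at the given speeds."""
    height = deposition_rate / np.asarray(speeds)          # slower nozzle -> thicker layer
    kernel = np.exp(-0.5 * (np.arange(-6, 7) / spread_sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(height, kernel, mode="same")        # lateral spreading of the spray

def score(speeds, target):
    """Root-mean-square error between predicted and target height profiles."""
    return float(np.sqrt(np.mean((simulate_profile(speeds) - target) ** 2)))

cells = 100
target = np.full(cells, 50.0)                              # e.g. a 50 mm-thick wall segment
candidates = {
    "constant slow": np.full(cells, 0.40),                 # nozzle speeds in m/s (assumed)
    "constant fast": np.full(cells, 0.80),
    "ramped":        np.linspace(0.35, 0.55, cells),
}

for name, speeds in candidates.items():
    print(f"{name:>13}: RMS height error = {score(speeds, target):.2f} mm")
```

A real simulator replaces the one-line deposition rule with the viscoelastic behaviors described above, but the path-comparison loop is the same idea: score each candidate trajectory before committing concrete to it.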
To validate their model, the team traveled to Tokyo, Japan, where Shimizu Corporation already operates spray 3D printing robots. The first test focused on the simulator's ability to predict shape based on the speed of the nozzle's movement; the simulator predicted the height of the sprayed concrete with 90.75% accuracy. The second test showed that the simulator could predict printing over rebar with 92.3% and 97.9% accuracy for width and thickness, respectively.
According to Soji Yamakawa, a research scientist in Shimada’s lab and the lead author of the team’s research paper published in IEEE Robotics and Automation Letters, a simulation of this kind would typically take hours, if not days, to run.
“By making wild assumptions, we were able to successfully simplify a super complex physics simulation into a combination of efficient algorithms and data structures and still achieved highly realistic output,” Yamakawa said.
Future work will aim to increase accuracy by identifying environmental parameters like humidity, optimizing performance, and adding plastering simulation to create smoother finished products.
“There are still so many applications and technologies that we can develop with robotics,” said Kyshalee Vazquez-Santiago, a co-author of the paper and a mechanical engineering Ph.D. candidate leading the Mobile Manipulators research group within CERLAB.
“Even in concrete 3D printing, we are working with an entirely new type of application and approach that has so many advantages but leaves so much room for further development.”
More information:
Soji Yamakawa et al, Concrete Spray 3D Printing Simulator for Nozzle Trajectory Planning, IEEE Robotics and Automation Letters (2025). DOI: 10.1109/lra.2025.3615038
Citation:
Spray 3D concrete printing simulator boosts strength and design (2025, November 11)
retrieved 11 November 2025
from https://techxplore.com/news/2025-11-spray-3d-concrete-simulator-boosts.html
Tech
Mind readers: How large language models encode theory-of-mind
Imagine you’re watching a movie in which a character puts a chocolate bar in a box, closes the box and leaves the room. Another person, also in the room, moves the bar from the box to a desk drawer. You, as an observer, know that the treat is now in the drawer, and you also know that when the first person returns, they will look for the treat in the box because they don’t know it has been moved.
You know that because as a human, you have the cognitive capacity to infer and reason about the minds of other people—in this case, the person’s lack of awareness regarding where the chocolate is. In scientific terms, this ability is described as Theory of Mind (ToM). This “mind-reading” ability allows us to predict and explain the behavior of others by considering their mental states.
We develop this capacity at about the age of four, and our brains are really good at it.
“For a human brain, it’s a very easy task,” says Zhaozhuo Xu, Assistant Professor of Computer Science at the School of Engineering; the task barely takes seconds to process.
“And while doing so, our brains involve only a small subset of neurons, so it’s very energy efficient,” explains Denghui Zhang, Assistant Professor in Information Systems and Analytics at the School of Business.
How LLMs differ from human reasoning
Large language models or LLMs, which the researchers study, work differently. Although they were inspired by some concepts from neuroscience and cognitive science, they aren’t exact mimics of the human brain. LLMs were built on artificial neural networks that loosely resemble the organization of biological neurons, but the models learn from patterns in massive amounts of text and operate using mathematical functions.
That gives LLMs a definitive advantage over humans in processing loads of information rapidly. But when it comes to efficiency, particularly with simple things, LLMs lose to humans. Regardless of the complexity of the task, they must activate most of their neural network to produce the answer. So whether you’re asking an LLM to tell you what time it is or summarize “Moby Dick,” a whale of a novel, the LLM will engage its entire network, which is resource-consuming and inefficient.
“When we, humans, evaluate a new task, we activate a very small part of our brain, but LLMs must activate pretty much all of their network to figure out something new even if it’s fairly basic,” says Zhang. “LLMs must do all the computations and then select the one thing you need. So you do a lot of redundant computations, because you compute a lot of things you don’t need. It’s very inefficient.”
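The cost gap Zhang describes can be made concrete with a small sketch: a dense layer touches every weight for every input, whereas a hypothetical layer that activates only 5% of its units does a twentieth of the arithmetic. The numbers below are illustrative assumptions and do not describe how any production LLM is implemented.

```python
# Illustrative sketch (not how any production LLM works): compare the
# multiply-accumulate cost of a dense layer with a hypothetical variant that
# activates only 5% of its output units for a given input.
import numpy as np

d_in, d_out = 4096, 4096
W = np.random.randn(d_out, d_in).astype(np.float32)
x = np.random.randn(d_in).astype(np.float32)

# Dense pass: every row of W participates, regardless of the task.
y_dense = W @ x
dense_macs = d_out * d_in

# "Sparse" pass: only a small, task-relevant subset of units is computed.
active = np.random.choice(d_out, size=d_out // 20, replace=False)  # 5% of units (assumed)
y_sparse = W[active] @ x
sparse_macs = len(active) * d_in

print(f"dense : {dense_macs:,} multiply-accumulates")
print(f"sparse: {sparse_macs:,} multiply-accumulates "
      f"({sparse_macs / dense_macs:.0%} of the dense cost)")
```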
New research into LLMs’ social reasoning
Working together, Zhang and Xu formed a multidisciplinary collaboration to better understand how LLMs operate and how their efficiency in social reasoning can be improved.
They found that LLMs use a small, specialized set of internal connections to handle social reasoning. They also found that LLMs’ social reasoning abilities depend strongly on how the model represents word positions, especially through a method called rotary positional encoding (RoPE). These special connections influence how the model pays attention to different words and ideas, effectively guiding where its “focus” goes during reasoning about people’s thoughts.
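RoPE itself is a standard, published technique. The minimal sketch below, written for illustration rather than drawn from the study's code, shows how it rotates query and key features by position-dependent angles so that attention scores depend only on the relative offset between tokens, the positional machinery the study links to belief tracking.

```python
# Minimal sketch of rotary positional encoding (RoPE) in the common
# "rotate-half" formulation; shapes and the base of 10000 follow standard
# practice and are not taken from the study's code.
import numpy as np

def rope(x, positions, base=10000.0):
    """Rotate feature pairs of a (seq_len, dim) array by position-dependent angles."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)        # one rotation frequency per pair
    angles = positions[:, None] * freqs[None, :]     # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,      # each pair (x1_i, x2_i) is rotated
                           x1 * sin + x2 * cos], axis=-1)

rng = np.random.default_rng(0)
dim = 64
q = rng.standard_normal((1, dim))
k = rng.standard_normal((1, dim))

# The query-key score depends only on the relative offset between positions:
# shifting both positions by the same amount leaves the score unchanged.
score_a = rope(q, np.array([3]))  @ rope(k, np.array([7])).T    # offset 4
score_b = rope(q, np.array([13])) @ rope(k, np.array([17])).T   # offset 4
print(np.allclose(score_a, score_b))   # True
```

The final check prints True because only the relative offset between the two positions enters the score, which is what makes RoPE a natural substrate for tracking relationships between words.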
“In simple terms, our results suggest that LLMs use built-in patterns for tracking positions and relationships between words to form internal ‘beliefs’ and make social inferences,” Zhang says. The two collaborators outlined their findings in the study titled “How large language models encode theory-of-mind: a study on sparse parameter patterns,” published in npj Artificial Intelligence.
Looking ahead to more efficient AI
Now that researchers better understand how LLMs form their “beliefs,” they think it may be possible to make the models more efficient.
“We all know that AI is energy-expensive, so if we want to make it scalable, we have to change how it operates,” says Xu. “Our human brain is very energy efficient, so we hope this research brings us back to thinking about how we can make LLMs work more like the human brain, so that they activate only a subset of parameters in charge of a specific task. That’s an important argument we want to convey.”
More information:
Yuheng Wu et al, How large language models encode theory-of-mind: a study on sparse parameter patterns, npj Artificial Intelligence (2025). DOI: 10.1038/s44387-025-00031-9
Citation:
Mind readers: How large language models encode theory-of-mind (2025, November 11)
retrieved 11 November 2025
from https://techxplore.com/news/2025-11-mind-readers-large-language-encode.html