Tech
Microsoft starts including PQC algorithms in cyber foundations | Computer Weekly

Two years after the debut of its Quantum Safe Programme (QSP), Microsoft is moving steadily through the process of incorporating post-quantum cryptography (PQC) algorithms into some of the foundational components underpinning the security of its product suite.
The computing giant said that to keep its systems and servers resilient when future quantum computers eventually break current encryption protocols for good, it needs its core services to be quantum-ready before 2029.
This is a self-imposed deadline for early adoption of quantum-safe technology that, for now, sits well ahead of most government targets for take-up – the UK’s National Cyber Security Centre (NCSC) says Britain’s key sectors and organisations should be planning to switch over to PQC by 2035.
Outlining the progress made to date, Microsoft Azure chief technology officer Mark Russinovich and Michal Braverman-Blumenstyk, Microsoft corporate vice president, CTO of Microsoft Security and Israel R&D Centre managing director, said that while scalable quantum computing remains out of reach for now, the time to prepare for it is now.
“Migration to PQC is not a flip-the-switch moment; it’s a multiyear transformation that requires immediate planning and coordinated execution to avoid a last-minute scramble,” they said.
“It is also an opportunity for every organisation to address legacy technology and practices and implement improved cryptographic standards.”
They added: “By acting now, organisations can upgrade to modern cryptographic architectures that are inherently quantum safe, upgrade existing systems with the latest standards in cryptography, and embrace crypto-agility to modernise their cryptographic standards and practices and prepare for scalable quantum computing.”
The overall QSP strategy, as previously outlined, centres on three core pillars: updating Microsoft’s own and third-party services, supply chain and ecosystem to be quantum safe; supporting its customers, partners and ecosystems in this goal; and promoting global research, standards and solutions around quantum security.
Redmond has already conducted an enterprise-wide inventory to identify the potential risks and has been partnering with industry leaders over the past couple of years to address some of the more critical dependencies, invest in research, and work together on new hardware and firmware.
Where we stand today
To date, Microsoft has integrated PQC algorithms into components such as SymCrypt, the main cryptographic library used by Windows, Azure and Office 365. This library now supports the Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM, formerly known as CRYSTALS-Kyber) and the Module-Lattice-Based Digital Signature Algorithm (ML-DSA, formerly known as CRYSTALS-Dilithium), both of which were among the quantum-safe algorithms taken forward by the US National Institute of Standards and Technology (NIST) a year ago.
Addressing the threat of Harvest Now Decrypt Later (HNDL) cyber attacks, in which threat actors exfiltrate data today and hold it in reserve until they can crack the code, Microsoft is also ramping up the introduction of quantum-safe key exchange mechanisms in SymCrypt, enabling transport layer security (TLS) hybrid key exchange – per the latest IETF draft – and enhancing TLS 1.3 to support hybrid and pure post-quantum key exchange methods. These capabilities will come to the Windows TLS stack before long, said Russinovich and Braverman-Blumenstyk.
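The core idea of hybrid key exchange is that both sides derive the session key from a classical shared secret (e.g. ECDH) and a post-quantum KEM shared secret (e.g. ML-KEM) together, so the session stays safe as long as either primitive remains unbroken. The sketch below illustrates only that combining step; the function name is illustrative, the inputs are random stand-ins rather than real X25519 or ML-KEM outputs, and this is not Microsoft's or the IETF draft's exact construction.

```python
import hashlib
import os

def hybrid_shared_secret(classical_secret: bytes, pq_secret: bytes,
                         transcript: bytes = b"") -> bytes:
    """Derive one session key from both shared secrets.

    An attacker must recover BOTH inputs (plus the handshake
    transcript binding) to reconstruct the session key, which is
    the security rationale behind hybrid TLS key exchange.
    """
    return hashlib.sha256(classical_secret + pq_secret + transcript).digest()

# Stand-ins for the secrets each side would actually compute:
classical = os.urandom(32)     # e.g. an ECDH (X25519) shared secret
post_quantum = os.urandom(32)  # e.g. an ML-KEM decapsulated secret

session_key = hybrid_shared_secret(classical, post_quantum)
assert len(session_key) == 32
```

Both endpoints compute the same inputs during the handshake, so they derive the same session key; real deployments use a proper KDF (e.g. HKDF) rather than a bare hash.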
Beyond SymCrypt, Microsoft is also updating components such as its Entra authentication, key and secret management, and signing services, and plans to move towards integrating PQC into Windows, Azure, Office 365, and its data, networking and AI services to ensure the safety of the broader Microsoft services ecosystem.
Alignment to government plans
Microsoft’s overall QSP strategy currently aligns chiefly with US government requirements and timelines concerning quantum safety – including those laid down by agencies such as the Cybersecurity and Infrastructure Security Agency (CISA), NIST, and the National Security Agency (NSA).
However, it is closely monitoring quantum-safe initiatives emanating from Australia, Canada, the European Union (EU), Japan and the UK.
Tech
Watch Our Livestream Replay: Back to School in the Age of AI

Everyone has a stake in how tech is shaping education today. From the tech moguls and venture capitalists who are starting “microschools” and building ed-tech tools to policymakers who are writing bills to safeguard kids online and teachers who are getting creative about using AI for school.
WIRED explored all this and more in our recent back-to-school digital edition, which was the topic of our subscriber-only livestream on Thursday, August 28, 2025, hosted by WIRED’s features director, Reyhan Harmanci, with writers Charley Locke and Julia Black. Watch the livestream replay below.
Check out past livestreams on the launch of GPT-5, essential features in ChatGPT, advice for getting started with Claude, and more.
Tech
Robot regret: New research helps robots make safer decisions around humans

Imagine for a moment that you’re in an auto factory. A robot and a human are working next to each other on the production line. The robot is busy rapidly assembling car doors while the human runs quality control, inspecting the doors for damage and making sure they come together as they should.
Robots and humans can make formidable teams in manufacturing, health care and numerous other industries. While the robot might be quicker and more effective at monotonous, repetitive tasks like assembling large auto parts, the person can excel at certain tasks that are more complex or require more dexterity.
But there can be a dark side to these robot-human interactions. People are prone to making mistakes and acting unpredictably, which can create unexpected situations that robots aren’t prepared to handle. The results can be tragic.
New and emerging research could change the way robots handle the uncertainty that comes hand-in-hand with human interactions. Morteza Lahijanian, an associate professor in CU Boulder’s Ann and H.J. Smead Department of Aerospace Engineering Sciences, develops processes that let robots make safer decisions around humans while still trying to complete their tasks efficiently.
In a new study presented at the International Joint Conference on Artificial Intelligence in August 2025, Lahijanian and graduate students Karan Muvvala and Qi Heng Ho devised new algorithms that help robots create the best possible outcomes from their actions in situations that carry some uncertainty and risk.
“How do we go from very structured environments where there is no human, where the robots are doing everything by themselves, to unstructured environments where there are a lot of uncertainties and other agents?” Lahijanian asked.
“If you’re a robot, you have to be able to interact with others. You have to put yourself out there and take a risk and see what happens. But how do you make that decision, and how much risk do you want to tolerate?”
Similar to humans, robots have mental models that they use to make decisions. When working with a human, a robot will try to predict the person’s actions and respond accordingly. The robot is optimized for completing a task—assembling an auto part, for example—but ideally, it will also take other factors into consideration.
In the new study, the research team drew upon game theory, a mathematical concept that originated in economics, to develop the new algorithms for robots. Game theory analyzes how companies, governments and individuals make decisions in a system where other “players” are also making choices that affect the ultimate outcome.
In robotics, game theory conceptualizes a robot as being one of numerous players in a game that it’s trying to win. For a robot, “winning” is completing a task successfully—but winning is never guaranteed when there’s a human in the mix, and keeping the human safe is also a top priority.
So instead of trying to guarantee a robot will always win, the researchers proposed the concept of a robot finding an “admissible strategy.” Using such a strategy, a robot will accomplish as much of its task as possible while also minimizing any harm, including to a human.
“In choosing a strategy, you don’t want the robot to seem very adversarial,” said Lahijanian. “In order to give that softness to the robot, we look at the notion of regret. Is the robot going to regret its action in the future? And in optimizing for the best action at the moment, you try to take an action that you won’t regret.”
Let’s go back to the auto factory where the robot and human are working side by side. Using the researchers’ algorithms, a robot can take matters into its own hands when the person makes mistakes or is uncooperative. It will first try to fix the mistakes without endangering the person; if that doesn’t work, the robot could, for example, pick up what it’s working on and take it to a safer area to finish its task.
Much like a chess champion who thinks several turns ahead about an opponent’s possible moves, a robot will try to anticipate what a person will do and stay several steps ahead of them, Lahijanian said.
But the goal is not to attempt the impossible and perfectly predict a person’s actions. Instead, the goal is to create robots that put people’s safety first.
“If you want to have collaboration between a human and a robot, the robot has to adjust itself to the human. We don’t want humans to adjust themselves to the robot,” he said. “You can have a human who is a novice and doesn’t know what they’re doing, or you can have a human who is an expert. But as a robot, you don’t know which kind of human you’re going to get. So you need to have a strategy for all possible cases.”
And when robots can work safely alongside humans, they can enhance people’s lives and provide real and tangible benefits to society.
As more industries embrace robots and artificial intelligence, there are many lingering questions about what AI will ultimately be capable of doing, whether it will be able to take over the jobs that people have historically done, and what that could mean for humanity. But there are upsides to robots being able to take on certain types of jobs. They could work in fields with labor shortages, such as health care for older populations, and physically challenging jobs that may take a toll on workers’ health.
Lahijanian also believes that, when they’re used correctly, robots and AI can enhance human talents and expand what we’re capable of doing.
“Human-robot collaboration is about combining complementary strengths: humans contribute intelligence, judgment, and flexibility, while robots offer precision, strength, and reliability,” he said.
“Together, they can achieve more than either could alone, safely and efficiently.”
Citation:
Robot regret: New research helps robots make safer decisions around humans (2025, August 28)
retrieved 28 August 2025
from https://techxplore.com/news/2025-08-robot-robots-safer-decisions-humans.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
Tech
Warehouse automation hasn’t made workers safer—it’s just reshuffled the risk, say researchers

Rapid advancements in robotics are changing the face of the world’s warehouses, as dangerous and physically taxing tasks are being reassigned en masse from humans to machines. Automation and digitization are nothing new in the logistics sector, or any sector heavily reliant on manual labor. Bosses prize automation because it can bring up to two- to four-fold gains in productivity. But workers can also benefit from the putative improvements in safety that come from shifting dangerous tasks onto non-human shoulders.
At least, that’s the story employers such as Amazon have—largely successfully—promoted to the public.
In a recent study, Brad N. Greenwood, Dean’s Distinguished Professor at the Costello College of Business at George Mason University, investigated this question: Does automation make warehouse jobs safer? His co-authors include Gordon Burtch of Boston University and Kiron Ravindran of IE University. Their findings, which appear in ILR Review, reveal that the answer depends on how safety is defined.
The researchers distinguish between two types of injuries: severe and non-severe. Severe injuries include broken bones, traumatic falls, and other incidents that cause employees to miss work. Non-severe injuries include sprains, strains, and repetitive motion problems, often leading to reassignment or light-duty work, but not missing work.
The findings showed that robots do seem to reduce severe injuries. In robotic fulfillment centers (FC), tasks like heavy lifting and long walks are handled by machines, reducing workers’ exposure to physical hazards. The researchers found a meaningful drop in the number of severe injuries in these facilities.
However, the overall picture is not so clear. In the same robotic warehouses, the researchers observed a sharp increase in non-severe injuries, especially during high-demand periods such as Amazon Prime Day and the winter holidays. The robotic fulfillment centers experienced a 40% decrease in severe injuries but a 77% increase in non-severe injuries compared to traditional centers.
To better understand their results, the researchers also analyzed thousands of online posts from Amazon warehouse workers.
“There was an immediate and obvious discrepancy in worker opinion, based on whether their fulfillment center was roboticized or not,” says Greenwood.
Humans working alongside robots described their daily experience as “not physically exhausting” and “better than working at a legacy FC.” However, they also reported being expected to meet much higher performance metrics than their counterparts in non-automated FCs—amounting to a two-to-three-times higher “pick rate” in some cases. The faster pace of the human/robot dance was accompanied by a far more repetitive work routine that induced burnout in some workers, while causing others to “zone out.”
This dual reality—robots reducing some injuries while exacerbating others—has serious implications. For employers, simply introducing automation is not enough. Without careful job design, task rotation, and realistic performance goals, the shift to robotics can create new health and safety risks.
“Companies have bottom-line reasons to take this issue seriously. Beyond simple issues of liability, there is a cost to the firm of workers being unable to perform their duties,” says Greenwood.
Traditional safety metrics often focus on injuries that result in lost workdays. But as the nature of work changes, this approach may miss more subtle forms of harm. Chronic, repetitive injuries may not lead to time off, but they still decrease worker well-being and performance.
Looking ahead, Greenwood and his colleagues plan to explore how these trends play out over longer timeframes and in other industries. As robots become more common in fields like manufacturing, retail, and health care, similar patterns may emerge. The researchers hope their findings will help inform both corporate and public policy, ensuring that the future of work is not only more efficient but also humane.
“That isn’t to deny that warehouse robotics benefits workers,” Greenwood explains. “But we need to think more carefully about how to use them, and what that means for the humans they work with.”
More information:
Gordon Burtch et al, Lucy and the Chocolate Factory: Warehouse Robotics and Worker Safety, ILR Review (2025). DOI: 10.1177/00197939251333754
Citation:
Warehouse automation hasn’t made workers safer—it’s just reshuffled the risk, say researchers (2025, August 28)
retrieved 28 August 2025
from https://techxplore.com/news/2025-08-warehouse-automation-hasnt-workers-safer.html