Tech
Security pros should prepare for tough questions on AI in 2026 | Computer Weekly
For the last couple of years, many organisations have comforted themselves with a single slide or paragraph that reads along the lines of: “We use artificial intelligence [AI] responsibly.” That line might have been enough to get through informal supplier due diligence in 2023, but it will not survive the next serious round of tenders.
Enterprise buyers, particularly in government, defence and critical national infrastructure (CNI), are now using AI heavily themselves. They understand the risk language. They are making connections between AI, data protection, operational resilience and supply chain exposure. Their procurement teams will no longer ask whether you use AI. They will ask how you govern it.
The AI question is changing
In practical terms, the questions in requests for proposals (RFPs) and invitations to tender (ITTs) are already shifting.
Instead of the soft “Do you use AI in your services?”, you can expect wording more like:
“Please describe your controls for generative AI, including data sovereignty, human oversight, model accountability and compliance with relevant data protection, security and intellectual property obligations.”
Underneath that line sit a number of very specific concerns.
Where is client or citizen data going when you use tools such as ChatGPT, Claude or other hosted models?
Which jurisdictions does that data transit or reside in?
How is AI-assisted output checked by humans before it influences a critical decision, a piece of advice, or a safety-related activity?
Who owns and can reuse the prompts and outputs, and how is confidential or classified material protected in that process?
The generic boilerplate no longer answers any of those points. In fact, it advertises that there is no structured governance at all.
The uncomfortable reality is that, if you strip away the marketing language, most professional services organisations are using AI in a very familiar pattern.
Individual staff have adopted tools to speed up drafting, analysis or coding. Teams share tips informally. Some groups have written local guidance on what is acceptable. A few policies have been updated to mention AI.
What is often missing is evidence
Very few organisations can say with certainty which client engagements involved AI assistance, what categories of data were used in prompts, which models or providers were involved, where those providers processed and stored the information, and how review and approval of AI output was recorded.
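As an illustration only, the evidence gap described above can be made concrete as a per-engagement AI usage register. The field names and values below are assumptions for the sketch, not drawn from any standard or from the article:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUsageRecord:
    """One entry in a hypothetical per-engagement AI usage register."""
    engagement_id: str             # which client engagement involved AI assistance
    data_categories: list[str]     # categories of data used in prompts
    model_provider: str            # which model or hosted provider was involved
    processing_regions: list[str]  # where the provider processed/stored the data
    reviewed_by: str               # who reviewed and approved the AI output
    review_date: date              # when that review was recorded
    approved: bool = False         # whether the output was signed off

# Example entry: a single engagement logged with review evidence
record = AIUsageRecord(
    engagement_id="ENG-2026-014",
    data_categories=["client-confidential", "no personal data"],
    model_provider="hosted LLM",
    processing_regions=["EU"],
    reviewed_by="engagement lead",
    review_date=date(2026, 3, 1),
    approved=True,
)
```

Even a register this simple would let an organisation answer the tender questions above with evidence rather than assertion.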
From a governance, risk and compliance (GRC) perspective, that is a problem. It touches data protection, information security, records management, professional indemnity, and in some sectors safety and mission assurance. It also follows you into every future tender, because buyers are increasingly asking about past AI related incidents, near misses and lessons learned.
Why this matters so much in government, defence and CNI
In central and local government, policing and justice, AI is increasingly influencing decisions that affect citizens directly. That might be in triaging cases, prioritising inspections, supporting investigations or shaping policy analysis.
When AI is involved in those processes, public bodies must be able to show lawful basis, transparency, fairness and accountability. That means understanding where AI is used, how it is supervised, and how outputs are challenged or overridden. Suppliers into that space are expected to demonstrate the same discipline.
In the defence and wider national security supply chain, the stakes are even higher. AI is already appearing in logistics optimisation, predictive maintenance, intelligence fusion, training environments and decision support. Here the questions are not just about privacy or intellectual property. They are about reliability under stress, robustness against manipulation, and assurance that sensitive operational data is not leaking into systems outside sovereign or approved control.
CNI operators have a similar challenge. Many are exploring AI for anomaly detection in OT environments, demand forecasting, and automated response. A failure or misfire here can quickly turn into a service outage, safety incident or environmental impact. Regulators will expect operators and their suppliers to treat AI as an element of operational risk, not a novelty tool.
In all of these sectors, the organisations that cannot explain their AI governance will quietly fall down the scoring matrix.
Turning AI governance into a commercial advantage
The good news is that this picture can be turned around. AI governance, done properly, is not about slowing down or banning innovation. It is about putting enough structure around AI use that you can explain it, defend it and scale it.
A practical starting point is an AI procurement readiness assessment. At Advent IM, we describe this in very simple terms: can you answer the questions your next major client is going to ask?
That involves mapping where AI is used across your services, identifying which workflows touch client or citizen data, understanding which third party models or platforms are involved, and documenting how humans supervise, approve or override AI outputs. It also means looking at how AI fits into your existing incident response, data breach handling and risk registers.
From there, you can develop a short, evidence-based narrative that fits neatly into RFP and ITT responses, backed by policies, process descriptions and example logs. Instead of hand-waving about responsible AI, you can present a clear story about how AI is governed as part of your wider security and GRC framework.
ISO 42001 as the backbone for AI governance
ISO/IEC 42001, the new standard for AI management systems, gives this work structure. It provides a framework for managing AI across its lifecycle, from design and acquisition through to operation, monitoring and retirement.
For organisations that already operate an information security management system (ISMS), quality management system or privacy information management system, 42001 should not feel alien. It can be integrated with existing ISO 27001, 9001 and 27701 arrangements. Roles such as senior information risk owner (SIRO), information asset owner (IAO), data protection officer, heads of service and system owners simply gain clearer responsibilities for AI related activities.
Aligning with 42001 also signals to clients, regulators and insurers that AI is not being treated informally. It shows that there are defined roles, documented processes, risk assessments, monitoring and continual improvement around AI. Over time, that alignment can be taken further into formal certification for those organisations where it makes commercial sense.
Bringing people, process and assurance together
Policies and frameworks are only part of the picture. The real test is whether people across the organisation understand what is permitted, what is prohibited, and when they need to ask for help.
AI security and governance training is therefore critical. Staff need to understand how to handle prompts that contain personal or sensitive data, how to recognise when AI outputs might be biased or incomplete, and how to record their own oversight. Managers need to know how to approve use cases, sign off risk assessments and respond to incidents involving AI.
Bringing all of this together gives you something very simple but very powerful. When the next RFP or ITT lands with a page of questions about AI, you will not be scrambling for ad hoc answers. You will be able to describe an AI management system that is aligned to recognised standards, integrated with your existing security and GRC practices, and backed by training and evidence.
In a crowded services market, that may be the difference between being seen as an interesting supplier and being trusted with high value, sensitive work.
Anthropic Supply-Chain-Risk Designation Halted by Judge
Anthropic won a preliminary injunction barring the US Department of Defense from labeling it a supply-chain risk, potentially clearing the way for customers to resume working with the company. The ruling on Thursday by Rita Lin, a federal district judge in San Francisco, is a symbolic setback for the Pentagon and a significant boost for the generative AI company as it tries to preserve its business and reputation.
“Defendants’ designation of Anthropic as a ‘supply chain risk’ is likely both contrary to law and arbitrary and capricious,” Lin wrote in justifying the temporary relief. “The Department of War provides no legitimate basis to infer from Anthropic’s forthright insistence on usage restrictions that it might become a saboteur.”
Anthropic and the Pentagon did not immediately respond to requests to comment on the ruling.
The Department of Defense, which under Trump calls itself the Department of War, has relied on Anthropic’s Claude AI tools for writing sensitive documents and analyzing classified data over the past couple of years. But this month, it began pulling the plug on Claude after determining that Anthropic could not be trusted. Pentagon officials cited numerous instances in which Anthropic allegedly placed or sought to put usage restrictions on its technology that the Trump administration found unnecessary.
The administration ultimately issued several directives, including designating the company a supply-chain risk, which have had the effect of slowly halting Claude usage across the federal government and hurting Anthropic’s sales and public reputation. The company filed two lawsuits challenging the sanctions as unconstitutional. In a hearing on Tuesday, Lin said the government had appeared to illegally “cripple” and “punish” Anthropic.
Lin’s ruling on Thursday “restores the status quo” to February 27, before the directives were issued. “It does not bar any defendant from taking any lawful action that would have been available to it” on that date, she wrote. “For example, this order does not require the Department of War to use Anthropic’s products or services and does not prevent the Department of War from transitioning to other artificial intelligence providers, so long as those actions are consistent with applicable regulations, statutes, and constitutional provisions.”
The ruling suggests the Pentagon and other federal agencies are still free to cancel deals with Anthropic and ask contractors that integrate Claude into their own tools to stop doing so, but without citing the supply-chain-risk designation as the basis.
The immediate impact is unclear because Lin’s order won’t take effect for a week. And a federal appeals court in Washington, DC, has yet to rule on the second lawsuit Anthropic filed, which focuses on a different law under which the company was also barred from providing software to the military.
But Anthropic could use Lin’s ruling to demonstrate to some customers concerned about working with an industry pariah that the law may be on its side in the long run. Lin has not set a schedule to make a final ruling.
How Trump’s Plot to Grab Iran’s Nuclear Fuel Would Actually Work
President Donald Trump and top defense officials are reportedly weighing whether to send ground troops to Iran in order to retrieve the country’s highly enriched uranium. However, the administration has shared little information about which troops would be deployed, how they would retrieve the nuclear material, or where the material would go next.
“People are going to have to go and get it,” Secretary of State Marco Rubio said at a congressional briefing earlier this month, referring to the possible operation.
There are some indications that an operation is close on the horizon. On Tuesday, The Wall Street Journal reported that the Pentagon has imminent plans to deploy 3,000 brigade combat troops to the Middle East. (At the time of writing, the order has not been made.) The troops would come from the Army’s 82nd Airborne Division, which specializes in “joint forcible entry operations.” On Wednesday, Iran’s government rejected Trump’s 15-point plan to end the war, and White House press secretary Karoline Leavitt said that the president “is prepared to unleash hell” in Iran if a peace deal is not reached—a plan some lawmakers have reportedly expressed concern about.
Drawing from publicly available intelligence and their own experience, two experts outlined the likely contours of a ground operation targeting nuclear sites. They tell WIRED that any version of a ground operation would be incredibly complicated and pose a huge risk to the lives of American troops.
“I personally think a ground operation using special forces supported by a larger force is extremely, extremely risky and ultimately infeasible,” Spencer Faragasso, a senior research fellow at the Institute for Science and International Security, tells WIRED.
Nuclear Ambitions
Any version of the operation would likely take several weeks and involve simultaneous actions at multiple target locations that aren’t in close proximity to each other, the experts say. Jonathan Hackett, a former operations specialist for the Marines and the Defense Intelligence Agency, tells WIRED that as many as 10 locations could be targeted: the Isfahan, Arak, and Darkhovin research reactors; the Natanz, Fordow, and Parchin enrichment facilities; the Saghand, Chine, and Yazd mines; and the Bushehr power plant.
According to the International Atomic Energy Agency, Isfahan likely has the majority of the country’s 60 percent highly enriched uranium, which may be able to support a self-sustaining nuclear chain reaction, though weapon-grade material generally consists of 90 percent enriched uranium. Hackett says that the other two enrichment facilities may also have 60 percent highly enriched uranium, and that the power plant and all three research reactors may have 20 percent enriched uranium. Faragasso emphasizes that any such supplies deserve careful attention.
Hackett says that eight of the 10 sites—with the exception of Isfahan, which is likely intact underground, and “Pickaxe Mountain,” a relatively new enrichment facility near Natanz—were mostly or partially buried after last June’s air raids. Just before the war, Faragasso says, Iran backfilled the tunnel entrances to the Isfahan facility with dirt.
The riskiest version of a ground operation would involve American troops physically retrieving nuclear material. Hackett says that this material would be stored in the form of uranium hexafluoride gas inside “large cement vats.” Faragasso adds that it’s unclear how many of these vats may have been broken or damaged. At damaged sites, troops would have to bring excavators and heavy equipment capable of moving immense amounts of dirt to retrieve them.
A comparatively less risky version of the operation would still necessitate ground troops, according to Hackett. However, it would primarily use air strikes to entomb nuclear material inside the facilities. Ensuring that nuclear material is inaccessible in the short to medium term, Faragasso says, would entail destroying the entrances to underground facilities and ideally collapsing the facilities’ underground roofs.
Softening the Area
Hackett tells WIRED that based on his experience and all publicly available information, Trump’s negotiations with Iran are “probably a ruse” that buys time to move troops into place.
Hackett says that an operation would most likely begin with aerial bombardments in the areas surrounding the target sites. The ground forces that follow, he says, would likely come from the 82nd Airborne Division or the 11th or 31st Marine Expeditionary Units (MEU). The 11th MEU, a “rapid-response” force, and the 31st MEU, the only Marine unit continuously deployed abroad in strategic areas, have reportedly both been deployed to the Middle East.
Amazon’s Spring Sale Is So-So, but Cadence Capsules Are a Bright Spot
The WIRED Reviews Team has been covering Amazon’s Big Spring Sale since it began on Wednesday, and the overall deals have been … not great, honestly. So far, we’ve found decent markdowns on vacuums, smart bird feeders, and even an air fryer we love, but I just saw that Cadence Capsules, those colorful magnetic containers you may have seen on your social media pages, are 20 percent off. (For reference, the last time I saw them on sale, they were a measly 9 percent off.)
If you’re not familiar, they allow you to decant your full-sized personal care products you use at home—from shampoo and sunscreen to serums and pills—into a labeled, modular system of hexagonal containers that are leak-proof, dishwasher safe, and stick together magnetically in your bag or on a countertop. No more jumbled, travel-sized toiletries and leaky, mismatched bottles and tubes.
Cadence Capsules have garnered some grumbling online for being overly heavy or leaking, but I’ve been using them regularly for about a year—I discuss decanting your daily-use products in my guide to How to Pack Your Beauty Routine for Travel—and haven’t experienced any leaks. They do add weight if you’re trying to travel super-light, and because they’re magnetic, they will also stick to other metal items in your toiletry bag, like bobby pins or other hair accessories. This can be annoying, especially if you’re already feeling chaotic or in a hurry.
Otherwise, Capsules are modular, convenient, and make you feel supremely organized—magnetic, interchangeable inserts for the lids come with permanent labels like “shampoo,” “conditioner,” “cleanser,” and “moisturizer.” Maybe you love this; maybe you don’t. But at least if you buy on Amazon, you can choose which label genre you get (Haircare, Bodycare, Skincare, Daily Routine). If this just isn’t your jam, the Cadence website offers a set of seven that allows you to customize the color and lid label of each Capsule, but that set is not currently on sale.