Balancing IT security with AI and cloud innovation | Computer Weekly

Organisations increasingly rely on cloud services to drive innovation and operational efficiency, and as more artificial intelligence (AI) workloads use public cloud-based AI acceleration, organisations’ AI strategies are linked to the security and availability of these services.

However, as John Bruce, chief information security officer (CISO) at Quorum Cyber, points out, CISOs face the persistent challenge of managing cloud providers’ service level agreements (SLAs) that do not align with the enterprise’s security and availability requirements (see box: A strategic framework for SLA gap management).

Aditya Sood, vice-president of security engineering and AI strategy at Aryaka, says that while SLAs typically cover metrics like uptime, support response times and service performance, they often overlook critical elements such as data protection, breach response and regulatory compliance.

This, he says, creates a responsibility gap, where assumptions about who is accountable can lead to serious blind spots. For instance, a customer might assume that the cloud provider’s SLA guarantees data protection, only to realise that their own misconfigurations or weak identity management practices have led to a data breach.  

“Organisations may mistakenly believe their provider handles more than it does, increasing the risk of non-compliance, security incidents and operational disruptions,” he says.

Sood recommends that IT decision-makers ensure they take into account the nuances between SLA commitments and shared security responsibilities. He believes this is vital for organisations to make the most of cloud services without undermining resilience or regulatory obligations. 

In Bruce’s experience, misalignment of an SLA with corporate IT requirements is more common than many leaders realise. “Whether it’s a cutting-edge AI platform from a startup, specialised software as a service (SaaS) with limited security guarantees, or even established cloud providers whose standard SLAs fall short of regulatory requirements, the gap between what providers offer and what enterprises need can be substantial,” he says.

According to Bruce, the modern cloud ecosystem presents a complex landscape. He says: “While major cloud providers like AWS [Amazon Web Services], [Microsoft] Azure and Google Cloud have matured their security offerings and SLAs considerably, the broader ecosystem includes thousands of specialised providers.”

Bruce notes that while many offer innovative capabilities that can provide significant competitive advantages, their SLAs often reflect their size, maturity, or focus areas rather than enterprise security requirements. 

For instance, IT decision-makers can face what Bruce calls an innovation paradox: a promising AI or machine learning (ML) platform offers breakthrough capabilities but provides only basic security guarantees and 99.5% uptime commitments when the organisation requires 99.99% availability.
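To put those commitments in context, 99.5% availability allows roughly 44 hours of downtime a year, while 99.99% allows less than an hour. The short sketch below is purely illustrative arithmetic, not anything from Bruce; real SLAs define their own measurement windows, exclusions and service credits.

```python
# Rough downtime budgets implied by common SLA uptime tiers (illustrative only).
MINUTES_PER_YEAR = 365 * 24 * 60

for uptime in (0.995, 0.999, 0.9999):
    downtime_minutes = MINUTES_PER_YEAR * (1 - uptime)
    print(f"{uptime:.2%} uptime allows ~{downtime_minutes:,.0f} minutes "
          f"(~{downtime_minutes / 60:.1f} hours) of downtime per year")
```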

While an SLA captures the cloud provider’s commitment to “the security of the cloud” – the uptime, resilience and core security of the underlying infrastructure – in Sood’s experience it explicitly does not cover the customer’s responsibilities for security in the cloud.

He says that even if a provider’s SLA promises 99.99% uptime for its infrastructure, a customer’s misconfigurations, weak identity management or unpatched applications can still lead to data breaches or service outages, effectively nullifying the perceived security and uptime benefits of the provider’s SLA. 


Another factor to consider is what Bruce calls the “compliance gap”. This is when the SaaS provider offers essential functionality, but its data residency, encryption or audit logging capabilities do not meet the regulatory requirements of the organisation. 

Then there is the case of a service provider that cannot scale to meet enterprise IT requirements. This “scale mismatch”, as Bruce calls it, occurs where a specialised software house provides unique industry-specific tools, but its incident response procedures and security monitoring do not meet enterprise standards. 

Sood recommends using a shared responsibility model (SRM), which plays a central role in defining how security and operational duties are split between cloud providers and their customers. The SRM directly shapes the security and availability the enterprise actually experiences, making diligent customer-side security practices crucial for realising the full value of any cloud SLA.
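As a simplified, hypothetical illustration of how such a split can be recorded (the duties and owners below are assumptions for the sketch, not any provider’s published model), a customer could keep a small register of which responsibilities the provider’s SLA actually covers and which remain in-house:

```python
# Hypothetical, simplified shared-responsibility register for a cloud service.
# The real split varies by provider and by service model (IaaS, PaaS, SaaS);
# always confirm against the provider's documentation and the contract itself.
SHARED_RESPONSIBILITY = {
    "physical datacentre security": "provider",
    "hypervisor and host patching": "provider",
    "network infrastructure uptime": "provider",
    "guest OS and application patching": "customer",
    "identity and access management": "customer",
    "data classification and encryption settings": "customer",
    "service configuration (e.g. storage access policies)": "customer",
    "incident response coordination": "shared",
}

def customer_owned(duties: dict) -> list:
    """Duties the provider's SLA alone will not cover."""
    return [duty for duty, owner in duties.items() if owner in ("customer", "shared")]

for duty in customer_owned(SHARED_RESPONSIBILITY):
    print(f"Not covered by the provider SLA alone: {duty}")
```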

Public cloud lock-in

Beyond managing how responsibility for IT security is coordinated, IT leaders should also be wary of the extent to which they use the value-added services provided in a public cloud platform.

Bill McCluggage, former director of IT strategy and policy in the Cabinet Office and deputy government CIO from 2009 to 2012, says fewer than 1% of customers switch cloud providers annually because, in his view, the system is rigged against switching.

For instance, egress fees to transfer data out of a public provider’s datacentre are opaque. McCluggage says that egress fees combined with proprietary application programming interfaces (APIs) and binding enterprise agreements often make the cost of switching public cloud providers too high.

“Beyond just stifling competition, this lock-in also undermines the UK government’s ambition to become an AI powerhouse. With AI workloads increasingly dependent on high-performance cloud infrastructure, continuing to rely on just two dominant hyperscalers risks concentrating capability, control and innovation in the hands of a few,” he says.

According to McCluggage, customers using certain public cloud services can face “economic entrapment”. As an example, Microsoft’s recent price increase for Office 365 Personal and Family subscriptions in the UK – from £59.99 to £84.99 – was justified by the addition of AI-powered Copilot features.

“Customers can avoid the hike by choosing the ‘Classic’ subscription,” says McCluggage, pointing out that Microsoft has made this subscription much harder for people to find. “Most individuals – and organisations – won’t know they have a choice until it’s too late. This isn’t value creation,” he adds.

Being realistic about contract terms

The cloud ecosystem will continue to evolve, with new providers offering compelling capabilities alongside varying security guarantees. Quorum Cyber’s Bruce warns that attempting to eliminate all SLA gaps would mean forgoing potentially transformative technologies. Instead, he says, successful CISOs need to develop frameworks for making informed risk decisions that enable innovation while maintaining appropriate controls. 

“By taking a structured approach to SLA gap management, organisations can access innovative cloud services while maintaining strong security postures and regulatory compliance,” says Bruce, for whom the key is moving beyond simple accept/reject decisions to sophisticated risk management that enables business objectives while protecting against genuine threats. 

Organisations that develop mature approaches to SLA gap management will be best positioned to take advantage of these innovations while maintaining appropriate risk management standards. 

Every technology decision involves risk trade-offs. Should IT make the most of new cloud and AI innovation, even if it may not fully meet corporate IT standards, or go with established public cloud providers, with the potential of being locked in and facing the opaque egress fees that McCluggage refers to? 

Aryaka’s Sood urges IT decision-makers to adopt proactive governance, risk and compliance (GRC) by updating the organisation’s internal security policies and procedures to account for the new cloud service and its specific risk profile. “Map the provider’s security controls and your compensating controls directly to relevant regulatory requirements,” he says.
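One lightweight way to act on that advice is to keep a machine-readable register mapping each regulatory obligation to the provider controls and compensating controls that address it, flagging anything left unmapped. The sketch below is hypothetical: the requirements and controls are illustrative placeholders, not taken from Sood.

```python
# Hypothetical control-mapping register; requirement and control names are
# illustrative placeholders, not a real compliance framework.
from dataclasses import dataclass, field

@dataclass
class RequirementMapping:
    requirement: str                                   # e.g. a data-protection obligation
    provider_controls: list = field(default_factory=list)
    compensating_controls: list = field(default_factory=list)

    def has_gap(self) -> bool:
        """True if neither the provider nor a compensating control addresses it."""
        return not (self.provider_controls or self.compensating_controls)

register = [
    RequirementMapping("Encryption of personal data at rest",
                       provider_controls=["Provider-managed disk encryption"]),
    RequirementMapping("Breach notification within 72 hours",
                       compensating_controls=["Internal IR runbook", "24/7 SOC monitoring"]),
    RequirementMapping("Data residency in the UK/EU"),   # unmapped, so flagged below
]

for item in register:
    status = "GAP - needs mitigation or formal risk acceptance" if item.has_gap() else "covered"
    print(f"{item.requirement}: {status}")
```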

Sood also suggests that IT leaders ensure documentation of the organisation’s risk assessments, mitigation strategies and any formal risk acceptance decisions is meticulously managed.  

By adopting these strategies, IT and security leaders can confidently embrace innovative cloud technologies, minimising inherent risks and ensuring a strong compliance posture, even when faced with SLAs that don’t initially meet all desired criteria.

With such measures and policies in place, IT decision-makers understand the risk and their mitigation strategies, which should put them in a better place to select the best AI and cloud innovations for their organisations. “The question isn’t whether to accept risk, but how to manage it intelligently in pursuit of business objectives,” says Bruce.



New design tackles integer factorization problems through digital probabilistic computing


Figure outlining the overall system, including the V-MTJ chip and the ASIC along with their respective printed circuit boards. Credit: Duffee et al.

Probabilistic Ising machines (PIMs) are advanced and specialized computing systems that could tackle computationally hard problems, such as optimization or integer factorization tasks, more efficiently than classical systems. To solve problems, PIMs rely on probabilistic bits (p-bits): networks of interacting units of digital information whose values randomly fluctuate between 0 and 1, but that can be biased to converge on desired solutions.
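In the formulation commonly used in the probabilistic-computing literature, a single p-bit outputs 1 with a probability set by a sigmoid of its input bias, so it fluctuates like a coin flip when unbiased but can be steered towards 0 or 1. A minimal software sketch of that general behaviour (an illustration of the concept, not the team's hardware):

```python
# Minimal p-bit sketch: output is 1 with probability sigmoid(bias).
import math
import random

def p_bit(bias: float) -> int:
    """Return 0 or 1; positive bias steers towards 1, negative towards 0."""
    prob_one = 1.0 / (1.0 + math.exp(-bias))
    return 1 if random.random() < prob_one else 0

print(sum(p_bit(0.0) for _ in range(10_000)))   # unbiased: roughly 5,000 ones
print(sum(p_bit(4.0) for _ in range(10_000)))   # strongly biased: roughly 9,800 ones
```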

A class of PIMs that is being intensively investigated uses magnetic tunnel junctions to inject randomness into a digital transistor-based circuit. While these systems have been found to be promising for the rapid resolution of various domain-specific and advanced problems, their large-scale design and reliable fabrication have so far proved challenging. This is primarily because their upscaling requires the precise control of small magnetic moments and often also entails the use of large circuits that convert digital signals into analog voltages, along with other additional components.

Researchers at Northwestern University and other institutes recently developed a new application-specific integrated circuit (ASIC) that could be used to create better performing probabilistic computers. In a paper published in Nature Electronics, they presented a probabilistic computer based on the new circuit and showed that it could perform integer factorization tasks.

“We were interested in exploring how one could build a scalable probabilistic computer by custom-designing an ASIC using foundry CMOS technology,” Pedram Khalili Amiri, senior author of the paper, told Tech Xplore.

“Our intuition was that by taking advantage of the digital CMOS platform and the high transistor densities available in today’s semiconductor technology, one could eventually build very large-scale probabilistic computers that can tackle problems related to, for example, combinatorial optimization. As a first step, we decided to try out these ideas, and develop the computing architecture and design approach, using a less advanced (130 nm) foundry node.”

When reviewing previous literature in the field and experimenting with probabilistic computing architectures, Amiri and his colleagues realized that, despite its numerous advantages, CMOS technology does not appear to be well-suited for creating random bit sequences. Notably, the creation of these random sequences is central to the functioning of probabilistic computers.

To overcome this limitation of CMOS technology, the researchers adapted voltage-controlled magnetic tunnel junctions (V-MTJs), hardware components that they introduced in their earlier work and had previously applied to the creation of magnetic random-access memory (MRAM) devices. They changed some elements of these devices so that they would serve as compact sources of randomness (i.e., entropy).

“Our probabilistic computer consists of an array of bistable probabilistic elements (called probabilistic bits or p-bits),” explained Amiri. “The interactions between these p-bits can be programmed so that the p-bit network (called a probabilistic Ising machine or PIM) collectively searches through the solution space of a problem. Our p-bits are implemented using digital CMOS circuitry on our ASIC and use bit sequences read from an adjacent V-MTJ chip to provide the required randomness. The energy minimum of the PIM is designed to correspond to the solution of the computing problem of interest.”
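A toy software analogue of such a network, assuming the standard Glauber-style p-bit update rule from the literature rather than the team's specific circuit design, couples a handful of p-bits and lets clocked updates drift towards the low-energy states of a trivial problem:

```python
# Toy probabilistic Ising machine: three coupled p-bits settle into a ground state.
# Illustrative sketch only; couplings, sizes and update details are placeholders.
import math
import random

J = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]                      # ferromagnetic couplings: bits "want" to agree
h = [0.0, 0.0, 0.0]                  # no external bias
beta = 2.0                           # inverse temperature; higher means less random

m = [random.choice([-1, 1]) for _ in range(3)]   # bipolar p-bit states

def energy(state):
    pair = -sum(J[i][j] * state[i] * state[j] for i in range(3) for j in range(i + 1, 3))
    return pair - sum(h[i] * state[i] for i in range(3))

for step in range(200):              # clocked, sequential p-bit updates
    for i in range(3):
        local_field = sum(J[i][j] * m[j] for j in range(3)) + h[i]
        prob_up = 1.0 / (1.0 + math.exp(-2.0 * beta * local_field))
        m[i] = 1 if random.random() < prob_up else -1

print(m, energy(m))                  # expect all +1 or all -1, the two ground states
```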

Figure showing the ASIC that was used in this experiment. Credit: Duffee et al.

The new probabilistic architecture developed by Amiri and his colleagues could theoretically be used to efficiently tackle many real-world problems, including various optimization tasks. As part of their study, however, the team specifically applied their architecture to integer factorization tasks, which are known to be very challenging to solve computationally.

“This was a good place to start, mainly because there is only one correct solution to be found in the entire energy landscape, and because it is easy to check whether we found the right factors or not,” said Amiri. “The same approach, however, can be applied to many other computing problems.”
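To see why factorization is a convenient benchmark, one can sketch the idea in plain software: treat the squared difference between N and the product of two candidate factors as an energy that is zero only at the correct factorization, and search for its minimum. The crude simulated-annealing sketch below illustrates the single-minimum, easy-to-verify property only; it is not the paper's Ising encoding or hardware method.

```python
# Illustrative annealing search for factors of N using energy (N - p*q)**2.
# Not the paper's method; it only shows why factorization is a handy test case.
import math
import random

N = 2491                     # = 47 * 53, a small demonstration target
BITS = 6                     # candidate factors are 6-bit odd integers

def decode(bits):
    return int("".join(map(str, bits)), 2) | 1   # force odd to skip trivial factors

def energy(bp, bq):
    return (N - decode(bp) * decode(bq)) ** 2

bp = [random.randint(0, 1) for _ in range(BITS)]
bq = [random.randint(0, 1) for _ in range(BITS)]
temperature = 1e7

for step in range(200_000):
    cand_p, cand_q = bp[:], bq[:]
    random.choice((cand_p, cand_q))[random.randrange(BITS)] ^= 1   # flip one bit
    delta = energy(cand_p, cand_q) - energy(bp, bq)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        bp, bq = cand_p, cand_q
    temperature *= 0.99995
    if energy(bp, bq) == 0:           # easy to verify: just multiply the factors
        break

print(decode(bp), decode(bq), decode(bp) * decode(bq) == N)   # usually 47 53 True
```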

Two central advantages of the architecture developed by this research team are that it is digital and synchronous. This is in contrast with most other PIMs introduced in earlier works.

“This means that the probabilistic computer works with a clock that determines a well-defined time interval upon which p-bits can update and does not require area-consuming circuits such as digital-to-analog converters,” said Amiri. “In addition, the use of V-MTJs, which are currently implemented in a separate chip from the ASIC but can eventually be integrated within it, saves area and can provide high-throughput random bit sequences to the p-bits.”

V-MTJs, the components that Amiri and his colleagues used to create their architecture, were found to be inherently more robust against device-to-device variations when used to generate random bits compared with other spintronic random bit generators used in the past. The team’s initial findings were highly encouraging, highlighting the promise of their approach for creating probabilistic computers.

Notably, although it relies on V-MTJs, the new approach is also compatible with established CMOS manufacturing processes and digital design strategies. In the future, it could contribute to the large-scale fabrication of PIMs that could solve a wide range of real-world optimization problems faster and more efficiently.

“Our next step will be to adapt this design to implement problems other than factorization,” added Amiri. “For example, we have a chip in the works that is tailored to other optimization problems with real-world significance. In addition, we plan to integrate the V-MTJs directly on the CMOS in a more advanced foundry node, which would allow us to make the probabilistic computer even more compact.”

Written by Ingrid Fadelli, edited by Gaby Clark, and fact-checked and reviewed by Robert Egan.

More information:
Christian Duffee et al, An integrated-circuit-based probabilistic computer that uses voltage-controlled magnetic tunnel junctions as its entropy source, Nature Electronics (2025). DOI: 10.1038/s41928-025-01439-6. On arXiv: DOI: 10.48550/arxiv.2412.08017

© 2025 Science X Network

Trump says Murdochs interested in investing in TikTok’s US arm



US President Donald Trump said on Sunday that media mogul Rupert Murdoch and his eldest son Lachlan could be among the investors who will take control of TikTok in the United States.

The United States has forcefully sought to take TikTok’s US operations out of the hands of Chinese parent company ByteDance for national security reasons.

Since returning to power in January, Trump has repeatedly delayed implementation of the ban while a deal has been sought.

He has negotiated with Beijing to sell the platform’s US operations to a consortium of investors he describes as “patriots,” including his ally Larry Ellison, boss of tech giant Oracle, and entrepreneur Michael Dell.

On Sunday, he added more names to that list.

“I hate to tell you this, but a man named Lachlan is involved… Lachlan Murdoch, I believe,” Trump said in an interview with Fox News.

“And Rupert is, is probably going to be in the group. I think they’re going to be in the group. Couple of others, really great people, very prominent people.”

Earlier this month, right-wing media mogul Rupert Murdoch’s children reached a settlement in their long-running legal dispute over control of the media empire, cementing his eldest son Lachlan’s leadership.

Lachlan Murdoch officially took control of Fox News and News Corp as part of the deal.

The elder Murdoch built a conservative media empire spanning the United States, Britain and Australia.

On Saturday, the White House said the board of the new company that would control TikTok’s US operations would be dominated by American citizens, and that a deal could be signed “in the coming days.”

© 2025 AFP

A Cyberattack on Jaguar Land Rover Is Causing a Supply Chain Disaster


Almost immediately after the cyberattack, a group on Telegram called Scattered Lapsus$ Hunters claimed responsibility for the hack. The group name implies a potential collaboration between three loose hacking collectives—Scattered Spider, Lapsus$, and Shiny Hunters—that have been behind some of the most high-profile cyberattacks in recent years. They are often made up of young, English-speaking cybercriminals who target major businesses.

Building vehicles is a hugely complex process. Hundreds of different companies provide parts, materials, electronics, and more to vehicle manufacturers, and these expansive supply chain networks often rely upon “just-in-time” manufacturing. That means they order parts and services to be delivered in the specific quantities that are needed and exactly when they need them—large stockpiles of parts are unlikely to be held by auto makers.

“The supplier networks that are supplying into these manufacturing plants, they’re all set up for efficiency—economic efficiency, and also logistic efficiency,” says Siraj Ahmed Shaikh, a professor in systems security at Swansea University. “There’s a very carefully orchestrated supply chain,” Shaikh adds, speaking about automotive manufacturing generally. “There’s a critical dependency for those suppliers supplying into this kind of an operation. As soon as there is a disruption at this kind of facility, then all the suppliers get affected.”

One company that makes glass sun roofs has started laying off workers, according to a report in the Telegraph. Meanwhile, another firm told the BBC it has laid off around 40 people so far. French automotive company OPmobility, which employs 38,000 people across 150 sites, told WIRED it is making some changes and monitoring the events. “OPmobility is reconfiguring its production at certain sites as a consequence of the shutdown of its production by one of its customers based in the United Kingdom and depending on the evolution of the situation,” a spokesperson for the firm says.

While it is unclear which specific JLR systems have been impacted by the hackers and what systems JLR took offline proactively, many were likely taken offline to stop the attack from getting worse. “It’s very challenging to ensure containment while you still have connections between various systems,” says Orla Cox, head of EMEA cybersecurity communications at FTI Consulting, which responds to cyberattacks and works on investigations. “Oftentimes as well, there will be dependencies on different systems: You take one down, then it means that it has a knock on effect on another.”

Whenever there’s a hack in any part of a supply chain—whether that is a manufacturer at the top of the pyramid or a firm further down the pipeline—digital connections between companies may be severed to stop attackers from spreading from one network to the next. Connections via VPNs or APIs may be stopped, Cox says. “Some may even take stronger measures such as blocking domains and IP addresses. Then things like email are no longer usable between the two organizations.”

The complexity of digital and physical supply chains, spanning dozens of businesses and just-in-time production systems, means bringing everything back online and up to full working speed may take time. MacColl, the RUSI researcher, says cybersecurity issues often fail to be debated at the highest level of British politics—but adds this time could be different due to the scale of the disruption. “This incident has the potential to cut through because of the job losses and the fact that MPs in constituencies affected by this will be getting calls,” he says. That breakthrough has already begun.


