Government faces questions about why US AWS outage disrupted UK tax office and banking firms | Computer Weekly

The UK government is being pressed to explain why a major, multi-hour Amazon Web Services (AWS) outage in the US disrupted UK-based organisations, including HM Revenue & Customs (HMRC) and Lloyds Banking Group.

The outage, which AWS confirmed started just before 8am UK time on 20 October, originated in AWS’s US-East-1 datacentre region in Northern Virginia, and caused large-scale disruption to a host of companies across the world, including in the UK.

The US-East-1 region is renowned for being Amazon’s first and flagship cloud region, as well as its largest, and is often the place where the public cloud giant rolls out new services to customers first.

For this reason, it is not unheard of for service issues with the US-East-1 region to blight overseas users of the firm’s cloud technologies.
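
One way such cross-region dependencies creep in is through AWS’s “global” service endpoints, several of which, including parts of IAM and STS, are served from US-East-1. As a minimal sketch, assuming the boto3 Python SDK, the snippet below pins a security token service (STS) client to AWS’s London region; older SDK defaults sent these calls to the global sts.amazonaws.com endpoint, which is hosted in US-East-1.

    import boto3

    # Older SDK defaults sent STS calls to the global endpoint
    # (sts.amazonaws.com), hosted in US-East-1; that is one way a nominally
    # UK-hosted workload picks up a hidden dependency on that region.
    # Pinning the client to London uses the regional endpoint instead.
    sts = boto3.client(
        "sts",
        region_name="eu-west-2",  # AWS London region
        endpoint_url="https://sts.eu-west-2.amazonaws.com",
    )
    print(sts.get_caller_identity()["Account"])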

But with concerns mounting in the UK (and other geographies) about the public and private sector’s over-reliance on US-based big tech platforms, the outage has led to renewed calls for greater transparency about the resiliency of the nation’s hosting arrangements.

“The narrative of bigger is better and biggest is best has been shown for the lie it always has been,” Owen Sayers, an independent security architect and data protection specialist with a long history of working in the public sector, told Computer Weekly. “The proponents of hyperscale cloud will always say they have the best engineers, the most staff and the greatest pool of resources, but bigger is not always better – and certainly not when countries rely on those commodity global services for their own national security, safety and operations.

“Nationally important services must be recognised as best delivered under national control, and as a minimum, the government should be knocking on AWS’s door today and asking if they can in fact deliver a service that guarantees UK uptime,” he said. “Because the evidence from this week’s outage suggests that they cannot.”

Government use of cloud under scrutiny

AWS has vowed to publish a “post-event summary” detailing the causes of the outage and the steps it took to bring services back online.

In the meantime, and in line with Sayers’ recommendations, HM Treasury is already being asked to account for why it has not used powers conferred on it earlier this year to ensure suppliers like AWS are up to the job of delivering resilient cloud services to organisations in the financial services sector.

The chair of the Treasury Select Committee, Meg Hillier, published a letter she has written to the economic secretary, Lucy Rigby, that appears to have been penned during the AWS outage.

The letter asks Rigby to clarify why, despite having had the power to do so since January 2025, the Treasury has apparently so far neglected to add AWS to its Critical Third Parties (CTP) list of suppliers.

This designation, introduced through changes made to the Financial Services and Markets Act 2000 in November 2024, is intended to bring critical third-party suppliers to the sector within the supervisory scope of the UK’s financial regulators. The idea is that such oversight will help manage risks to the stability and resilience of the UK financial system that might arise when a third-party supplier suffers service disruption, as happened on 20 October with AWS.

As stated in Hillier’s letter, it appears the Treasury is yet to call any suppliers into the scope of the CTP regime, including AWS, which is known to be a supplier to a large number of UK financial services institutions.

“In light of today’s major outage at Amazon Web Services … why has HM Treasury not designated Amazon Web Services or any other major technology firm as a CTP for the purposes of the Critical Third Parties Regime,” asked Hillier, in the letter. “[And] how soon can we expect firms to be brought into this regime?”

Hillier also asked HM Treasury for clarification about whether it is concerned that “seemingly key parts of our IT infrastructure are hosted abroad”, given that the outage originated in a US-based AWS datacentre region but impacted the activities of Lloyds Bank and HMRC.

On the latter point, Hillier asked: “What work is HM Treasury doing with HMRC to look at what went wrong, and how this may be prevented in future?”

Computer Weekly contacted HM Treasury for details of its response to Hillier’s letter, and to seek clarification on whether it has plans to imminently add AWS to the CTP list. It also asked if the Treasury has concerns about parts of the UK’s banking infrastructure being hosted overseas, in the wake of the outage.

A spokesperson for the government department did not directly answer the questions posed by Computer Weekly, but did provide the following statement in response:

“We know the threat cyber attackers present, which is why we are working with regulators to establish a Critical Third-Party regime, so we can hold firms providing these services to the same high standards as other financial services institutions,” the Treasury statement read.

UK reliance on overseas clouds

Hillier’s question to the Treasury about whether it has any concerns about key parts of the UK’s IT infrastructure being hosted overseas is being echoed by other UK cloud market watchers and stakeholders in the wake of the outage.

“We should be asking the obvious question: why are so many critical UK institutions, from HMRC to major banks, dependent on a datacentre on the east coast of the US?” said Mark Boost, CEO of London-based cloud services provider Civo. 

“Sovereignty means having control when incidents like this happen – but too much of ours is currently outsourced to foreign cloud providers. The AWS outage is yet another reminder that when you put all your eggs in one basket, you’re gambling with critical infrastructure.

“When a single point of failure can take down HMRC, it becomes clear that our reliance on a handful of US tech giants has left core public services dangerously exposed,” he said.

AWS has operated a UK datacentre region since 2016, a key selling point being that it allows UK-based organisations to access locally hosted versions of its public cloud services.

This adds further weight to Boost and Hillier’s line of questioning about why a US outage impacted UK-based organisations when, presumably, these organisations should be relying on the UK region to access AWS services.

When Computer Weekly put this question to AWS, citing the disruption caused to HMRC during the outage as an example, a company spokesperson advised the publication to put the question directly to the government tax agency.

Shared responsibility model

That response (or lack thereof) potentially speaks to the notion of the “shared responsibility model” that AWS subscribes to, whereby the organisation considers security, compliance and the resilience of its customers’ cloud environments to be something of a shared burden.

As detailed on the company’s Shared Responsibility Model reference web page, this setup is designed to “relieve” AWS customers of the operational burden of running their own cloud infrastructure, but they remain responsible for whatever data they choose to host in it.

“Customers should carefully consider the services they choose [to host in AWS] as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations,” said AWS.

“The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment.”
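
In practice, that division of labour means surviving a regional outage is largely the customer’s architecture problem. The following is a minimal sketch of one common pattern, a client-side fallback from a primary region to a replicated copy; the bucket names and region order are hypothetical, not drawn from any of the organisations above.

    import boto3
    from botocore.exceptions import BotoCoreError, ClientError

    # Hypothetical buckets holding replicated copies of the same data.
    REPLICAS = [
        ("eu-west-2", "records-london"),  # primary: AWS London region
        ("eu-west-1", "records-dublin"),  # fallback: replicated copy
    ]

    def fetch(key: str) -> bytes:
        """Read an object from the first reachable region."""
        for region, bucket in REPLICAS:
            try:
                s3 = boto3.client("s3", region_name=region)
                return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            except (BotoCoreError, ClientError):
                continue  # region unreachable or errored; try the replica
        raise RuntimeError("all configured regions unavailable")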

Speaking to Computer Weekly, Brent Ellis, principal analyst at IT market watcher Forrester, said the fact the outage originated in the AWS US-East-1 region and impacted UK organisations suggests “at least some part” of the HMRC and Lloyds setups had a dependency on that region.

“That would have been an architecture choice by those companies, but not necessarily a fault of AWS,” said Ellis. “That dependency could also have been introduced by a nested SaaS [software as a service] component for the organisations involved.

“Generally, I think this shows how complex and interconnected modern cloud-based infrastructure is, and that is a problem from a resilience perspective, especially if you do not have visibility into the nested dependencies that underlie your business technology stack.”
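
Building that visibility can start crudely. As a hedged sketch rather than a complete answer, the snippet below resolves a third-party endpoint and looks it up in Amazon’s published IP ranges, which are labelled by region, to reveal whether a SaaS dependency ultimately lives in US-East-1; the vendor hostname is hypothetical.

    import ipaddress
    import json
    import socket
    import urllib.request

    # Amazon publishes its IP ranges, tagged by region, at this URL.
    RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

    def aws_region_for(hostname: str):
        """Return the AWS region hosting a hostname, or None if not on AWS."""
        ip = ipaddress.ip_address(socket.gethostbyname(hostname))
        prefixes = json.load(urllib.request.urlopen(RANGES_URL))["prefixes"]
        for p in prefixes:
            if ip in ipaddress.ip_network(p["ip_prefix"]):
                return p["region"]  # e.g. "us-east-1"
        return None

    # Hypothetical vendor endpoint; substitute your own dependencies.
    print(aws_region_for("api.example-vendor.com"))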

Regulatory intervention

Because of the impact such dependencies can have, Ellis is of the view that the AWS outage may prompt calls for regulatory intervention to prevent a repeat, in a similar vein to what Hillier and her colleagues on the Treasury Select Committee are calling for. “I do think it gives fodder to the greater push for sovereign cloud,” he said. “It also will probably spur regulation to increase visibility into dependencies and fault domains for critical sectors like finance.”

What users of hyperscale cloud services, such as AWS, need to know is what services and capabilities within their chosen suppliers’ extended portfolios are hosted in the UK, and how resilient they are, added Sayers.

To highlight why this is important, he cited the findings of a series of investigations into Microsoft’s cloud hosting arrangements in the Scottish policing sector that he worked with Computer Weekly to make public.

That work resulted in an initial disclosure from Microsoft that it could not guarantee the sovereignty of UK policing data stored and processed in its M365 platform.

This was later followed up with further revelations that policing data hosted in the Microsoft cloud could be processed in more than 100 countries, without users explicitly knowing about it.

“We already know Microsoft do not have a UK-based capability for all their services, but we need to know exactly what the [overseas hyperscalers] can deliver in the country and how resilient that actually is,” said Sayers. “We need to properly understand their points of failure and how they can be engineered around.”
 
Some of the hyperscalers have sought to evade answering questions on this point, claiming the information is commercially sensitive, he continued. “That’s not a defence we can tolerate anymore,” said Sayers. “These services are increasingly friable, increasingly complex and increasingly hidden from our view. If we are to rely on them, we need to know they are reliable, and if they aren’t then we need to pivot – at least for critical services.”

Customer-created issues

Ellis’s Forrester colleague Dario Maisto, a senior analyst, told Computer Weekly that AWS is aware that customer-created, cross-region architectural dependencies are part of a “bigger sovereignty problem” facing its European customer base.

“[AWS] is about to launch a perfect replica of its services [in Europe] under the AWS EU [European Union] sovereign cloud offer, with the first isolated [sovereign] region in Germany,” he said.

“In fact, the only way a client can be sure that its data and workloads do not suffer from any dependency from infrastructure abroad is physical and logical isolation of the cloud regions the client uses [so that it] must not be possible at all that the client is able to make any choice that creates a dependency on foreign infrastructure.”

Achieving this outcome, continued Maisto, means the isolated region must host all of the services the customer needs and be the only one the client can access. “A data boundary or a commitment to the market cannot guarantee what only a precise architectural construct of the client’s cloud environment can grant,” he added.
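
Short of a fully isolated sovereign region, organisations can approximate that guardrail today. The sketch below, offered as an illustration rather than AWS guidance, builds a service control policy that denies API requests to any region other than London using the aws:RequestedRegion condition key; the list of exempted global services is an assumption that would need tuning for a real estate.

    import json

    # Deny calls to every region except London; global services such as IAM
    # and STS are exempted here as an assumption, since some of them are not
    # served from region-specific endpoints.
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyRequestsOutsideUK",
            "Effect": "Deny",
            "NotAction": ["iam:*", "sts:*", "route53:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": "eu-west-2"}
            },
        }],
    }
    print(json.dumps(scp, indent=2))  # attach via AWS Organizations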

AWS is far from the only cloud provider to suffer an outage, and any cloud company an enterprise entrusts its data to could suffer a similar fate at some point.

However, Civo’s Boost said the incident highlights not only why enterprises should look to diversify their pool of cloud providers, but also why governments and regulators need to take a closer look at how much of the world’s infrastructure runs on a relatively small number of hyperscale cloud platforms.

“The more concentrated our infrastructure becomes, the more fragile and externally governed it is,” he said. “If Europe is serious about digital sovereignty, it needs to accelerate its shift towards domestically governed and diversified infrastructure. Governments and regulators have a responsibility to create the conditions for real competition. That means rethinking procurement, funding sovereign alternatives and making resilience a baseline requirement.”



The Disney-OpenAI Deal Redefines the AI Copyright War

On Thursday, Disney and OpenAI announced a deal that might have seemed unthinkable not so long ago. Starting next year, OpenAI will be able to use Disney characters like Mickey Mouse, Ariel, and Yoda in its Sora video-generation model. Disney will take a $1 billion stake in OpenAI, and its employees will get access to the firm’s APIs and ChatGPT. None of this makes much sense—unless Disney was fighting a battle it couldn’t win.

Disney has always been a notoriously aggressive litigant around its intellectual property. Alongside fellow IP powerhouse Universal, it sued Midjourney in June over outputs that allegedly infringed on classic film and TV characters. The night before the OpenAI deal was announced, Disney reportedly sent a cease-and-desist letter to Google alleging copyright infractions on a “massive scale.”

On the surface, there appears to be some dissonance in Disney embracing OpenAI while poking its rivals. But it’s more than likely that Hollywood is heading down the same path as media publishers when it comes to AI, signing licensing agreements where it can and using litigation when it can’t. (WIRED is owned by Condé Nast, which inked a deal with OpenAI in August 2024.)

“I think that AI companies and copyright holders are beginning to understand and become reconciled to the fact that neither side is going to score an absolute victory,” says Matthew Sag, a professor of law and artificial intelligence at Emory University. While many of these cases are still working their way through the courts, so far it seems like model inputs—the training data that these models learn from—are covered by fair use. But this deal is about outputs—what the model returns based on your prompt—where IP owners like Disney have a much stronger case.

Coming to an output agreement resolves a host of messy, potentially unsolvable issues. Even if a company tells an AI model not to produce, say, Elsa at a Wendy’s drive-through, the model might know enough about Elsa to do so anyway—or a user might be able to prompt their way into making Elsa without asking for the character by name. It’s a tension that legal scholars call the “Snoopy problem,” but in this case you might as well call it the Disney problem.

“Faced with this increasingly clear reality, it makes sense for consumer-facing AI companies and entertainment giants like Disney to think about licensing arrangements,” says Sag.




Cursor Launches an AI Coding Tool For Designers

Cursor, the wildly popular AI coding startup, is launching a new feature that lets people design the look and feel of web applications with AI. The tool, Visual Editor, is essentially a vibe-coding product for designers, giving them access to the same fine-grained controls they’d expect from professional design software. But in addition to making changes manually, the tool lets them request edits from Cursor’s AI agent using natural language.

Cursor is best known for its AI coding platform, but with Visual Editor, the startup wants to capture other parts of the software creation process. “The core that we care about, professional developers, never changes,” Cursor’s head of design, Ryo Lu, tells WIRED. “But in reality, developers are not by themselves. They work with a lot of people, and anyone making software should be able to find something useful out of Cursor.”

Cursor is one of the fastest growing AI startups of all time. Since its 2023 debut, the company says it has surpassed $1 billion in annual recurring revenue and counts tens of thousands of companies, including Nvidia, Salesforce, and PwC, as customers. In November, the startup closed a $2.3 billion funding round that brought its valuation to nearly $30 billion.

Cursor was an early leader in the AI coding market, but it’s now facing more pressure than ever from larger competitors like OpenAI, Anthropic, and Google. The startup has historically licensed AI models from these companies, but now its rivals are investing heavily in AI coding products of their own. Anthropic’s Claude Code, for example, grew even faster than Cursor, reaching $1 billion in annual recurring revenue just six months after launch. In response, Cursor has started developing and deploying its own AI models.

Traditionally, building software applications has required many different teams working together across a wide range of products and tools. By integrating design capabilities directly into its coding environment, Cursor wants to show that it can bring these functions together into a single platform.

“Before, designers used to live in their own world of pixels and frames, and they don’t really translate to code. So teams had to build processes to hand off tasks back and forth between developers and designers, but there was a lot of friction,” says Lu. “We kind of melded the design world and the coding world together into one interface with one AI agent.”

AI-Powered Web Design

In a demo at WIRED’s San Francisco headquarters, Cursor’s product engineering lead Jason Ginsberg showcased how Visual Editor could modify the aesthetics of a webpage.

A traditional design panel on the right lets users adjust fonts, add buttons, create menus, or change backgrounds. On the left, a chat interface accepts natural-language requests, such as “make this button’s background color red.” Cursor’s agent then applies those changes directly into the code base.

Earlier this year, Cursor released its own web browser that works directly within its coding environment. The company argues the browser creates a better feedback loop when developing products, allowing engineers and designers to view requests from real users and access Chrome-style developer tools.




AT&T Gives the Smart Home a Second Try With Help From Google and Abode

AT&T is taking a second crack at the smart home. After sunsetting its Digital Life service in 2022—powered by the now-defunct 3G network—the company is launching a new smart-home security platform called Connected Life, this time in partnership with smart-home players Google and Abode.

Previously available as a pilot program in select markets, AT&T Connected Life is rolling out nationwide starting today. The vision behind it is to simplify smart-home setup. Instead of buying various smart-home devices and using multiple apps to connect them, you can buy one of two kits directly from AT&T’s Connected Life website—the Starter Kit ($11 per month for 36 months) or the Advanced Kit ($19 per month for 36 months). You can also pay upfront for the kits at $399 and $699, respectively.

Each includes Google Nest smart-home products and security sensors, with the Advanced Kit offering more sensors, a security keypad, and a Nest Cam security camera. (Google confirmed the Nest products on offer are not the latest devices the company launched recently.) You’ll use the Connected Life app and the Google Home app to set everything up, though you can also get help from a technician if you don’t want to DIY.

Google says the platform leverages Google Home’s application programming interface (API) to integrate Google’s smart-home devices into the Connected Life app, and after setup, users can rely solely on the Connected Life app to view livestreams and manage devices.

There are two subscription tiers: Essential ($11 per month) and Professional ($22 per month). Both offer access to features like 30-day event video history and intelligent alerts, though the Professional plan adds a US-based monitoring service from Abode that can dispatch police and medical services during emergencies. The system is designed so that you can pause professional monitoring when you don’t need it, rather than being locked into a contract.

AT&T is touting the Cellular Backup feature in Connected Life: If your home internet goes offline, this feature will keep your smart-home devices running by routing data through your smartphone (via the hot spot), and there’s a battery backup for the hub in case power goes out. This was a cornerstone feature of AT&T’s old Digital Life service, but cellular backup is now a staple in many smart-home security systems, like those from SimpliSafe or ADT.

You need to be an AT&T customer to use the Connected Life platform, though it doesn’t matter if you have a wireless mobile plan or home internet. This means the potential customer base for these new smart-home services is massive; AT&T has 119 million wireless mobile customers and is the largest provider of fiber home internet in the US, with more than 10 million customers.


