UK government confirms Foreign Office cyber attack | Computer Weekly


The UK government has admitted that IT systems at the Foreign, Commonwealth and Development Office (FCDO) were hacked in October, but insists the attack had a “low risk” of personal data being compromised.

During a round of broadcast interviews today (19 December 2025), trade minister Chris Bryant said it was “not clear” who perpetrated the attack, although the first report on the hack, revealed in The Sun, attributed it to a China-based threat actor known as Storm 1849.

The same group was blamed for targeting vulnerabilities in Cisco equipment that led to a National Cyber Security Centre (NCSC) warning in September for organisations using Cisco’s Adaptive Security Appliance family of unified threat management systems. Users were told to replace any devices reaching end-of-life support, with the NCSC noting the significant risks that ageing or obsolete hardware can pose.

Bryant said some of the reports about the FCDO hack were “speculation”, but that the government had managed to “close the hole” quickly, and that security experts were confident there was a “low risk” of any individual being affected. The Sun report claimed hackers accessed confidential data and documents, possibly including thousands of visa details.

The Storm 1849 attack campaign on Cisco equipment was dubbed ArcaneDoor, and targeted two zero-day vulnerabilities. One was a high-severity denial-of-service vulnerability capable of remote code execution; the other was a high-severity persistent local code execution vulnerability.

While government IT systems always face scrutiny over cyber security, the hack will provide further fuel for critics of plans to introduce a national digital ID scheme, many of whom have already raised concerns about the potential risks of gathering citizen identity data.

The development also comes a day after ITV News broadcast a report on the cyber security issues found in One Login – the government single sign-on system that will be at the heart of the digital ID plan – which were first revealed by Computer Weekly in April.

Damaging year

2025 has been a notably damaging year for cyber attacks, with high-profile ransomware campaigns affecting Jaguar Land Rover (JLR), the Co-op and Marks & Spencer.

The Office for National Statistics attributed a November decline in the UK’s economy partly to the impact of the JLR attack, which stopped car production at the manufacturer and had a knock-on impact across the automotive supply chain.

Last month, four London councils – Kensington and Chelsea; Hackney; Westminster; and Hammersmith and Fulham – suffered cyber attacks, disrupting services and prompting an NCSC investigation. Westminster has since admitted that potentially sensitive data was copied from its systems during the hack. Three of the local authorities operate a shared IT service.




DHS Demanded Google Surrender Data on Canadian’s Activity, Location Over Anti-ICE Posts


The Department of Homeland Security tried to obtain a Canadian man’s location information, activity logs, and other identifying information from Google after he criticized the Trump administration online following the killings of Renee Good and Alex Pretti by federal immigration agents in Minneapolis early this year.

Lawyers for the man, who has not been named, are alarmed in part because they say that the man has not entered the United States in more than a decade. “I don’t know what the government knows about our client’s residence, but it’s clear that the government isn’t stopping to find out,” says Michael Perloff, a senior staff attorney at the American Civil Liberties Union of the District of Columbia who is representing the man in a lawsuit against Markwayne Mullin, the secretary of DHS, over the summons. The lawsuit alleges that DHS violated the customs law that gives the agency the power to request records from businesses and other parties.

Perloff argues that the government is using the fact that big tech companies are based in the US to request information it would not otherwise be able to get. “It’s using that geographic fact to get information that otherwise would be totally outside of its jurisdiction,” he says. “I mean, we’re talking about the physical movements of a person who lives in Canada.”

DHS and Google did not immediately respond to a request for comment.

The demand for the man’s location data was included in a request DHS issued to Google called a customs summons, which is supposed to be used to investigate issues related to importing goods and collecting customs duties.

“It says right in the statute, it’s for records and testimony about the correctness of an entry, the liability of a person for duties, taxes, and fees, you know, compliance with basic customs laws,” says Chris Duncan, a former assistant chief counsel for US Customs and Border Protection who now works as a private-practice attorney representing importers and exporters. “And that’s all it was ever envisioned to be used for.”

A customs summons is a type of administrative subpoena and is not reviewed by a judge or grand jury before being sent out. According to the complaint, Google alerted the man about the request on February 9, despite an ask included in the summons “not to disclose the existence of this summons for an indefinite period of time.”

Through his attorneys, the man told WIRED he initially mistook the notification for a joke or scam before realizing it was real.

The summons, which is included in the complaint, does not give a specific reason why the man was under investigation beyond citing the Tariff Act of 1930. The man’s lawyers contend that he did not export or import anything from the United States between September 1, 2025, and February 4, 2026, the time frame the government requested information about.

Instead, the man’s lawyers allege, the summons was filed in response to the man’s online activities, including posts that he made condemning immigration enforcement agents after the killings of Good and Pretti in January.

The man tells WIRED that watching members of the Trump administration “smear these two souls as terrorists was absolutely disgusting and enraging. People were being asked to disbelieve our own eyes so that the men responsible for killing two good Americans would go free.”

The man says of his online activity, “I felt I needed to do something that would stand out and be seen by despairing Americans to show them they had support and that they were not alone.”




Do Lightsaber Blades Have Mass?


When you think of Star Wars, you think of lightsabers. Right? What could be better, from a movie-making standpoint, than a futuristic sword that lets you create awesome fencing duels like in old-time Errol Flynn swashbucklers? (So much better than watching Stormtroopers fire their blasters into walls and ceilings and anything else except their targets.)

Lightsabers come in a cosmic rainbow of hues (color-coded blue or green for good guys, red for bad) and a variety of shapes. There’s even a double-bladed version in Phantom Menace. (I don’t want to start a nerd fight—yet—but the best lightsaber battle in the canon has to be the “Duel of the Fates” in that movie, thanks to the skills and scariness of Darth Maul actor Ray Park.)

So … exactly what are lightsabers? Of course, they aren’t real, so nobody really knows how they work. Even the characters in the movies seem a little confused about it. In Phantom Menace, Anakin calls it a “laser sword.” Yeah, he was a kid, but both Din Djarin (the Mandalorian) and Luke Skywalker also refer to it as a laser sword—though I suspect Luke was being sarcastic.

Anyway, that’s just wrong: It can’t be a laser. For starters, laser beams are invisible from the side, so you wouldn’t see a thing unless you staged the duels in a disco with fog machines to scatter the beams. Second, the beams go on forever; they don’t have an end. Third, laser beams can’t clank together like swords—they’d just pass through each other when you try to parry.

But what is it then? We can greatly narrow the possibilities by asking if the blade has mass. If it’s some kind of light (as you’d think from the name “lightsaber”), then the answer is no—light, or electromagnetic radiation, has no mass. If we can determine that it has mass, then it’s not light.

This is a question we can answer, by analyzing how lightsabers move when you wave them around. In other words, it’s time for some physics!

Mass and Motion

Don’t confuse mass and weight. Mass is a measure of how much “stuff” like protons, neutrons, and electrons are in an object, and weight is the amount of gravitational force acting on an object. Here we want to see what impact the mass of a lightsaber would have on its motion. But let’s start with something simpler.

Instead of a lightsaber, say we have a “lightball” made of the same buzzy substance. Since it’s symmetrical, we can describe its motion without worrying about rotation. If we want to move this ball back and forth, we call on Newton’s second law of motion. This says the acceleration (a) of an object depends on its mass (m) and the amount of force (F) applied to it.
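Written out (this is the standard form of the second law, using the symbols just defined):

\[ \vec{F} = m\vec{a} \qquad \text{so} \qquad \vec{a} = \frac{\vec{F}}{m} \]

So for the same applied force, more mass means less acceleration. That reluctance to change speed is exactly the fingerprint we can look for when someone waves a lightsaber blade around on screen.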




How a cloud-native architecture handles persistent storage | Computer Weekly


Cloud-native, or containerised, applications are now mainstream. As many as 82% of enterprises now have Kubernetes in production, according to the Cloud Native Computing Foundation (CNCF). That is up from 66% in 2023. And a full 98% of organisations have at least some cloud-native applications, the industry body says.

But moving applications to cloud-native environments does not just mean creating new code. It also means adapting infrastructure. Compute, networking and data storage all need to work with container environments. By no means can all systems do this out of the box, especially when it comes to on-premise hardware.

At the same time, enterprise IT architects need to consider the requirements of legacy applications and virtual machines (VMs) that are not being updated.  And enterprises will want to make the most efficient use of their storage hardware, regardless of their application environments.

Moving to containers means adapting a technology that was not designed for persistent storage to handle business-critical data. 

Stateless states

Containerised applications started out as stateless, or ephemeral. The designers never intended containers to hold persistent data. They expected that microservices or containerised applications would use no non-volatile storage and discard the contents of memory, and even their settings, once they had completed their tasks. 

Instead, containerised applications rely on an external data store, usually a database or cache. 

There are advantages to this approach. These include simpler deployment, easier scaling, fault tolerance and recovery, and application portability. But many business applications, if not most, need persistent data. 

“Most business applications require storage. In reality, unless you’re converting Fahrenheit to Celsius and back, you’re storing something somewhere,” says Dan Ciruli, vice-president and general manager for cloud native at Nutanix. 

And the need to work with persistent data is all the more important, as enterprises look to containers as an alternative to conventional virtual machines.

But this means rethinking the way applications work. And it requires IT architects to update their storage systems to support modernised, cloud-native applications. This can be done directly, where array manufacturers support containers, or through a control plane such as Nutanix or Everpure’s Portworx.

Almost inevitably, changes are being driven by AI, as enterprises look to support its data-heavy workloads in modern, cloud-native environments. But there are other drivers, too, including a trend to move virtualised applications to containers and the need for cost controls.

“Kubernetes might be over a decade old, but it’s continuing to evolve as AI transforms the way we handle data. Already, Kubernetes has moved beyond the days when it was built only for ephemeral, stateless applications,” says Michael Cade, global field chief technology officer at Veeam Software.

“Today, stateful applications such as databases, machine learning pipelines and streaming systems are now being treated as first-class citizens [in containerised environments] and have been given the specialised tools they need to thrive.” 

Storage connections

Connecting storage to Kubernetes, though, relies on support from both application developers and hardware suppliers. 

The main way to connect storage to container environments is through the container storage interface (CSI). CSI needs to be supported directly by the storage provider, be that the hardware manufacturer, a cloud service, or a software-defined storage (SDS) supplier. 

As the CNCF’s Kubernetes page notes: “CSI was developed as a standard for exposing arbitrary block and file storage systems to containerised workloads on container orchestration systems like Kubernetes.” CSI allows third-party storage providers to write, and deploy, plug-ins for storage without changing the core Kubernetes code. 

SDS technologies, for their part, also use CSI drivers, but run on commodity hardware or hyper-converged infrastructure rather than dedicated storage arrays. The category also includes open source options, such as OpenEBS, Longhorn and Ceph. 

“Every environment needs a storage back end, with a CSI driver that connects it to Kubernetes. It’s up to the storage provider to provide the CSI driver,” says Nigel Poulton, an author and independent expert in Kubernetes and containers. 

“Most CSI drivers create at least one StorageClass that maps to a tier of storage and its capabilities. For example, a CSI driver might create a StorageClass called ‘fast-replicated’ that maps to high-speed flash storage automatically replicated to a remote location. Any application using this class automatically gets that tier and set of capabilities,” he adds. 

This level of abstraction is highly useful for application developers, as they no longer have to worry about the physical capabilities of the storage system. That is handled by the CSI drivers. 
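As a rough illustration of how that abstraction looks in practice, here is a minimal sketch, using the official Kubernetes Python client, of a “fast-replicated” StorageClass of the kind Poulton describes, plus a volume claim that consumes it. The provisioner name csi.example.com and the parameter keys are hypothetical placeholders; a real cluster would use the driver name and parameters published by its storage supplier.

```python
# Minimal sketch: a StorageClass backed by a (hypothetical) CSI driver,
# and a PersistentVolumeClaim that consumes it by name only.
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig

# The StorageClass maps a named tier to a CSI provisioner and its parameters.
storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="fast-replicated"),
    provisioner="csi.example.com",  # hypothetical CSI driver name
    parameters={"tier": "flash", "replication": "remote"},  # driver-specific
    reclaim_policy="Delete",
    volume_binding_mode="WaitForFirstConsumer",
)
client.StorageV1Api().create_storage_class(storage_class)

# The developer's side of the contract: name the class, request capacity.
# Everything about the physical tier is the CSI driver's problem.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="fast-replicated",
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)
```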

“The CSI drivers enable us to give access to storage from the containerised application, but [for firms to] still administer the storage the way they do the storage that’s running under their VMs,” says Nutanix’s Ciruli. “And that’s a big advantage.” He also sees customers installing Kubernetes on bare metal clusters. 

This also maintains separation between the Kubernetes workloads and the underlying storage hardware. On paper at least, enterprises can move their containerised applications to a different platform or supplier, or new storage hardware, without rewriting code and with minimal disruption. 

In practice, large-scale moves of Kubernetes applications between platforms are still relatively rare. Enterprises tend to develop applications to run on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, or local hardware, depending on their business requirements.  

Application portability, supported by CSI, is useful insurance, even if there are enough differences between platforms to suggest caution.

“We really don’t need to become an expert in how EBS [Elastic Block Store] works versus Azure disk, or local SSD [solid-state drives] and how that works,” says Greg Muscarella, general manager for Portworx at Everpure. “If you have to manage those things, it becomes somewhat complex. Companies tend to focus on a single cloud environment.”

Few organisations, he suggests, have code where they could “push a button and move it to a different cloud”, not least because of differences between storage architectures from both hardware suppliers and cloud providers. However, enterprises are moving more applications to cloud-native environments. And this increasingly includes databases and applications that previously ran in conventional virtual machines.

New platforms 

One of the most significant trends in application modernisation is to move both virtual machines and database-driven applications to containers. Cost, avoiding supplier lock-in and the need to consolidate on fewer platforms are all drivers. 

“The line between ‘containerised’ and ‘virtualised’ is blurring,” suggests Veeam’s Cade. “For a long time, containers and VMs were seen as two separate siloes. But as stateful applications have developed, and since VMs are essentially a typical stateful workload, we’re seeing a significant rise in businesses running them directly within Kubernetes using platforms such as Red Hat OpenShift Virtualization.” 

Poulton agrees. He sees more organisations moving virtualised workloads to containers, via tools such as KubeVirt. But although organisations are porting over virtualised applications and databases, IT architects need to be sure that all the application’s requirements are met by the storage layer. 

“Databases have much more demanding requirements, including ordered startup, replication, automated failover and backup,” he cautions. “The two biggest changes are ensuring a CSI driver exists for the storage system and potentially deploying an operator.” 

A Kubernetes operator encodes a database’s specific operational requirements, and sometimes its storage requirements, too. Operator support is essential to allow databases to deliver enterprise workloads over Kubernetes. Again, the operator supports the modern application goal of separating the code from the storage array or cloud storage service. 

Percona, for example, provides operators for MySQL, PostgreSQL and MongoDB, as well as Everest. “The operators are basically the game changers,” says Kate Obiidykhata, the company’s general manager for cloud native. “They encode the human DBA knowledge into the software, and you have all those most important resilience components, backup, failover, replication and upgrades automated.”

Operators, she adds, help enterprises to adopt hybrid architectures or multicloud strategies, allowing data portability without the need to rewrite applications. But workloads that operate on VMs will not automatically run on containers, she says. Firms will need to plan, and test, their deployments with care. 

“There are specific playbooks that you should apply and methodologies that are obviously different from the classic database setup on VMs,” says Obiidykhata. “But it’s all doable, and many companies are now running those databases on Kubernetes. They just have a different playbook to mitigate those issues.”
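To make the operator pattern concrete, the sketch below asks an operator for a replicated, backed-up database via Kubernetes’ CustomObjectsApi. The group, kind and spec fields (dbs.example.com, ExampleCluster and so on) are invented for illustration; a real operator, such as Percona’s, defines its own custom resources with their own schemas.

```python
# Minimal sketch: declaring a database through a hypothetical operator's
# custom resource. The operator turns this declarative spec into the
# underlying StatefulSets, PVCs (via a StorageClass), backups and failover.
from kubernetes import client, config

config.load_kube_config()

cluster = {
    "apiVersion": "dbs.example.com/v1",   # hypothetical API group
    "kind": "ExampleCluster",             # hypothetical kind
    "metadata": {"name": "orders-db"},
    "spec": {
        "replicas": 3,
        "storage": {"storageClassName": "fast-replicated", "size": "100Gi"},
        "backup": {"enabled": True, "schedule": "0 2 * * *"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="dbs.example.com",
    version="v1",
    namespace="default",
    plural="exampleclusters",
    body=cluster,
)
```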

Firms also need to factor in how they run their ported applications in production. Development, understandably, attracts much of the attention. But how systems run from “day two” onwards is critical. This includes storage provisioning and tiering, as well as backup, recovery and security.
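As one concrete “day two” example, CSI’s snapshot support can drive backup and recovery. The sketch below requests a volume snapshot of the claim from the earlier example, assuming the cluster runs the external snapshot controller and the CSI driver supports snapshots; the class name “fast-replicated-snapshots” is hypothetical.

```python
# Minimal sketch: a CSI VolumeSnapshot of an existing PVC, using the
# standard snapshot.storage.k8s.io API group.
from kubernetes import client, config

config.load_kube_config()

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "orders-db-data-snap"},
    "spec": {
        "volumeSnapshotClassName": "fast-replicated-snapshots",  # hypothetical
        "source": {"persistentVolumeClaimName": "orders-db-data"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```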

The CSI drivers take care of much of the hard work, but enterprises are likely to invest in new hardware, or even storage from suppliers focused on cloud-native environments, to ease the migration to containers.

“This is usually by deploying new storage architectures, either via new storage products from existing vendors, but increasingly by engaging with new vendors,” says Poulton. Enterprises, he adds, might still be running older hardware systems, but they are unlikely to use them for Kubernetes. 


