In a serious setback to the cyber criminal underground, an Interpol-led operation spanning 72 countries and territories has successfully neutralised more than 45,000 malicious IP addresses and servers, seized over 200 devices, and seen 94 people taken into custody, with well over 100 others still under investigation.
Dubbed Operation Synergia III, the action – which unfolded over a six-month period starting in mid-July 2025 – targeted the infrastructure used in cyber fraud, phishing, malware and ransomware campaigns.
Interpol hailed a major cross-border collaborative effort that saw data transformed into actionable intelligence, enabling it to provide tactical operational assistance to police forces all over the world, including in the UK. Technical support was provided by private sector cyber companies including Group-IB, Trend Micro, and S2W.
“Cyber crime in 2026 is more sophisticated and destructive than ever before, but Operation Synergia III stands as a powerful testament to what global cooperation can achieve,” said Interpol Cybercrime Directorate director Neal Jetton.
“Interpol remains at the forefront of this fight, uniting law enforcement agencies and private sector experts to dismantle criminal networks, disrupt emerging threats and protect victims around the world.”
Group-IB CEO Dmitry Volkov added: “Cyber criminal groups rely on complex infrastructure to scale phishing and malware operations globally.
“Operation Synergia III demonstrates how close cooperation between law enforcement agencies and private-sector partners can significantly disrupt these networks. By sharing intelligence on malicious infrastructure and attacker tactics, Group-IB remains committed to supporting global efforts to dismantle cybercrime operations and protect organizations and individuals worldwide.”
Many investigations conducted under the auspices of Operation Synergia III are still in progress and cannot yet be publicly discussed. However, Interpol shared some details of a few cases.
In Macau, China, for example, law enforcement identified and targeted 33,000 fraudulent websites, many related to the gambling industry for which Macau is world-famous, but also to financial services and government bodies. The websites were used to siphon money and personal data from scam victims.
Meanwhile, in Togo in West Africa, authorities arrested 10 people suspected of operating a fraud ring from a residential property, specialising in crimes ranging from hacking social media accounts to romance scams and sextortion. In Bangladesh, police arrested 40 people and seized over 130 devices used in credit card fraud, identity theft, and loan and job scams.
Robert McArdle, director of cyber crime research at Trend Micro’s TrendAI, said: “Behind every malicious server or phishing kit sits a wider criminal ecosystem that needs to be mapped and understood before arrests become possible.
“Our support for investigations such as Tycoon2FA, and contributions to operations like this one led by Interpol, demonstrate how actionable threat intelligence can help authorities identify infrastructure, connect actors and disrupt cyber criminal networks at scale.”
Latest iteration of a serial operation
As its name suggests, Operation Synergia III is the third in a series of Interpol actions against organised cyber crime.
The previous action, Operation Synergia II, unfolded in 2024 and similarly resulted in the sinkholing of thousands of malicious IP addresses and servers, and at least 40 known arrests.
Operation Synergia II was similarly globe-trotting, with known actions taking place in Hong Kong, Mongolia, Macau, Madagascar and Estonia.
The first action in the series, in late 2023, targeted the command and control (C2) server infrastructure so beloved of cyber criminal gangs.
The two USB-C ports are on the left side, alongside HDMI and a USB-A port. The second USB-A port, a microSD card slot, and a headphone jack are on the right. It’s a decent assortment of ports overall, though I wish Acer had split the USB-C ports up so the laptop could have a charging port on either side.
Acer is using a top-notch 16-inch OLED touchscreen display on the Swift 16 AI. It has a resolution of 2880 x 1800, a refresh rate of 120 Hz, and color saturation as close to perfect as I’ve seen. Like most OLED laptops, it has a glossy, highly reflective display that maxes out at 315 nits of brightness, according to my testing. It’s nowhere near as bright as IPS or mini-LED displays, but the trade-off in brightness is to achieve that unbeatable contrast that only OLED can deliver.
A Risky Touchpad
Photograph: Luke Larsen
The full-size keyboard and oversized touchpad are definitely the most notable elements of this laptop. The first thing you notice is the touchpad, which is certainly the largest I’ve ever seen. You might think it looks a bit silly, but I always like it when companies leave as little wasted space on a product as possible. I really wanted to like this touchpad, but unfortunately, it may deter many people from buying this laptop.
On large laptops like the Swift 16 AI, which have a number pad to the right of the keyboard, the touchpad is typically below the keyboard, making it visually off-center. While it’s functional, this arrangement looks odd, and some 16-inch laptops get around this by omitting the number pad entirely. That’s what you see on the MacBook Pro, the Dell XPS 16, and most gaming laptops these days, too.
Rather than removing the number pad, Acer expanded the touchpad and centered it. This makes good use of the space below the keyboard, preserves the number pad, and solves the aesthetic annoyance that typically plagues full-size laptops.
Robotically assembled building blocks could be a more environmentally friendly method for erecting large-scale structures than some existing construction techniques, according to a new study by MIT researchers.
The team conducted a feasibility study to evaluate the efficiency of constructing a simple building using “voxels,” which are modular 3D subunits that assemble into complex, durable structures.
After studying the performance of multiple voxels, the researchers developed three new designs intended to streamline building construction. They also produced a robotic assembler and a user-friendly interface for generating voxel-based building layouts and feeding instructions to the robots.
Their results indicate this voxel-based robotic assembly system could reduce embodied carbon — all of the carbon emitted during the lifecycle of building materials — by as much as 82 percent, compared with popular techniques like 3D concrete printing, precast modular concrete, and steel framing. The system would also be competitive in terms of cost and construction time. However, the choice of materials used to manufacture the voxels does play a major role in their carbon footprint and cost.
While scalability, durability, long-term robustness, and important considerations like fire resistance remain to be explored before such a system could be widely deployed, the researchers say these initial results highlight the potential of this approach for automated, on-site construction.
“I’m particularly excited about how the robotic assembly of discrete lattices can enable a practical way to apply digital fabrication to the built environment in a way that can let us build much more efficiently and sustainably,” says Miana Smith, a graduate student in the Center for Bits and Atoms (CBA) at MIT and lead author of the study.
She is joined on the paper by Paul Richard, a graduate student at École Polytechnique Fédérale de Lausanne in Switzerland and former visiting researcher at MIT; Alfonso Parra Rubio, a CBA graduate student; and senior author Neil Gershenfeld, an MIT professor and the director of the CBA. The research appears in Automation in Construction.
Designing better building blocks
Over the past several years, researchers in the Center for Bits and Atoms have been developing voxels, which are lattice-structured building blocks that can be assembled into objects with high strength and stiffness, like airplane wings, wind turbine blades, and space structures.
“Here, we are taking aerospace principles and applying them to buildings. Why don’t we make buildings as efficiently as we make airplanes?” says Gershenfeld, whose lab has previously worked on voxel assembly with NASA, Airbus, and Boeing.
To explore the feasibility of voxel-based assembly strategies for buildings, the researchers first evaluated the mechanical performance and sustainability of eight existing voxel designs, including a cuboctahedron made from glass-reinforced nylon and a Kelvin lattice made from steel.
Based on those evaluations, they developed a set of three voxels using a new geometry that could be more easily assembled robotically into a larger structure. The new design, based on a high-strength and high-stiffness octet lattice, mechanically self-aligns into rigid structures.
“The interlocking nature of these voxels means we can get nice mechanical properties without needing to have a lot of connectors in the system, so the construction process can run a lot faster,” Smith says.
To accelerate construction, they designed a robotic assembly system based on inchworm-like robots that crawl across a voxel structure by anchoring and extending their bodies. These Modular Inchworm Lattice Assembler robots, or MILAbots, use grippers on each end to place voxel building blocks and engage the snap-fit connections.
“The robots can assemble the voxels by dropping them into place and then stepping on them to have the pieces interlock. We can do precise maneuvers based on the mechanical relationship between the robots and the voxels,” Smith explains.
The team studied the embodied carbon needed to fabricate their new voxel designs using three materials: plastic, plywood, and steel. Then they evaluated the throughput and cost of using the robotic assembly system to build a simple, one-story building. The researchers compared these estimates with the performance of other construction methods.
Potential environmental benefits
They found that most existing voxels, and especially those made from plastics, performed poorly compared to existing methods in terms of sustainability, but the steel and wood voxels they designed offered significant environmental benefits.
For instance, utilizing their steel voxels would generate only 36 percent of the embodied carbon required for 3D concrete printing and 52 percent of the embodied carbon of precast concrete. The plywood voxels had the lowest carbon footprint, requiring about 17 percent and 24 percent of the embodied carbon of those methods, respectively.
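Those percentages are straightforward ratios of embodied carbon. A quick Python sketch, assuming hypothetical absolute figures chosen only so the reported steel ratios hold (the study reports percentages, not these baselines):

```python
# Illustrative embodied-carbon figures in kg CO2e. The absolute numbers are
# hypothetical; only the ratios mirror those reported in the study
# (steel voxels at 36% of 3D concrete printing and 52% of precast concrete).
carbon = {
    "3d_concrete_printing": 100.0,
    "precast_concrete": 69.23,  # chosen so the reported steel ratios hold
    "steel_voxels": 36.0,
    "plywood_voxels": 17.0,
}

def pct_of(voxel: str, method: str) -> int:
    """Embodied carbon of a voxel build as a percentage of a reference method."""
    return round(100 * carbon[voxel] / carbon[method])
```

Any real comparison would of course start from measured per-material values rather than back-calculated baselines.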
“There is still a potential viable option for a plastics-based voxel approach, we just have to be a bit more strategic about which types of plastics, infills, and geometries we use,” Smith says.
In addition, projected on-site assembly time for the steel and wood voxel approaches averaged 99 hours, whereas existing construction methods averaged 155 hours.
These speed benefits rely on the distributed nature of voxel-based assembly. While one MILAbot working alone is far slower than existing techniques, with a team of 20 robots working in parallel, the system catches up to or surpasses existing automation methods at a lower cost.
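The crossover is easy to see with a back-of-the-envelope model that assumes work divides evenly across robots and each robot places voxels at a constant rate; the 99- and 155-hour averages come from the article, everything else here is an idealisation:

```python
def wall_clock_hours(total_robot_hours: float, n_robots: int) -> float:
    """Idealised wall-clock time when work divides evenly across robots."""
    return total_robot_hours / n_robots

# Working back from the reported ~99-hour average for a 20-robot team,
# a lone robot would need roughly 1,980 robot-hours under this model,
# far slower than the ~155-hour average of existing methods.
team_hours = 99.0
total_work = team_hours * 20           # 1,980 robot-hours of placement work
solo = wall_clock_hours(total_work, 1)
team = wall_clock_hours(total_work, 20)
```

In reality parallel assembly would not scale perfectly linearly (robots contend for space and anchor points), so the true single-robot figure would differ, but the shape of the trade-off is the same.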
“One benefit of this method is how incremental it is. You can start building, and if it turns out you need a new room, you can just add onto the structure. It is also reversible, so if your use changes, you can disassemble the voxels and change the structure,” Gershenfeld says.
The researchers also developed an interface that enables users to input or hand-design a voxelized structure. The automatic system determines the paths the MILAbots should follow for construction and sends commands to the assemblers.
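The paper’s planner is not described in implementation detail, but routing a climbing robot over a lattice is at heart a graph search across placed voxels. A toy sketch, assuming a simplified 2D grid in place of real 3D lattice connectivity (the function name, grid and coordinates are all invented for illustration):

```python
from collections import deque

def milabot_path(occupied, start, goal):
    """Shortest route between lattice cells, stepping only on placed voxels.

    `occupied` is a set of (x, y) cells a robot may anchor to, a simplified
    2D stand-in for real 3D lattice connectivity.
    """
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        cell, path = frontier.popleft()
        if cell == goal:
            return path
        x, y = cell
        for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if step in occupied and step not in seen:
                seen.add(step)
                frontier.append((step, path + [step]))
    return None  # goal not reachable over the placed structure

# A four-voxel L-shaped structure: the robot walks along it, corner included.
placed = {(0, 0), (1, 0), (2, 0), (2, 1)}
route = milabot_path(placed, (0, 0), (2, 1))
```

A production planner would also have to account for build order (a robot cannot cross voxels that have not been placed yet) and for multiple robots sharing the structure.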
The next step in this project will be a larger testbed in Bhutan, using the “super fab lab” that CBA helped set up there to replicate the robots to test construction for a planned sustainable city, Gershenfeld says.
Additional areas of future work include studying the stability of voxel structures under lateral loads, improving the design tool to account for the physics of the system, enhancing the MILAbots, and evaluating voxels that have integrated sheeting, insulation, or electrical and plumbing routing.
“Our work helps support why doing this type of distributed robot assembly might be a practical way to bring digital fabrication into building construction,” Smith says.
This work was funded, in part, by the MIT Center for Bits and Atoms Consortia.
When containers started out, they were meant to be ephemeral – stateless, disposable and data-light. But that’s all changed. As Gartner notes, use cases for containers have evolved to include analytics and artificial intelligence (AI) processing, and by 2028, it predicts 15% of on-premise production workloads will run in containers. That’s a 300% increase since 2022.
Now, while containers themselves retain all the benefits of ephemerality – rapidly reproducing, then dying back just as quickly to account for workload spikes – the storage attached to them cannot live by the same rules.
As enterprises move from proofs of concept to running a big chunk of production workloads in containers, the storage layer has become a pivot point. While the early days were focused on simple web scaling, containers have now moved into the realm of mission-critical databases, massive data science pipelines, and the power-hungry world of generative AI (GenAI).
The challenge lies in navigating key choices such as file versus block versus object storage, CSI versus container-native storage, and whether to go for a dedicated container storage platform.
Containerisation is lightweight virtualisation
Containerisation is a lightweight form of virtualisation. Unlike traditional virtual machines (VMs) that require a hypervisor and a full guest operating system (OS), containers share the host server’s OS. This makes them lighter, faster to scale and more portable. They are built on microservices principles that break monolithic applications into discrete, application programming interface (API)-linked components in a way that aligns with DevOps methodologies.
While several orchestrators exist (for example, Docker Swarm and OpenShift), Kubernetes is the market leader. It manages clusters of nodes, on which pods run the containers. Clusters are groups of nodes managed by a control plane, which comprises the API server, a scheduler for pod placement, a controller to maintain the desired state, and etcd, which stores cluster state and configuration.
As originally conceived, container storage was ephemeral, and data vanished when a pod was deleted. So, to support enterprise applications, Kubernetes developed persistent volumes (PV), which are attached to a cluster and decouple storage from compute to allow applications to remain portable while maintaining access to data.
CSI vs container-native storage
Container Storage Interface (CSI) is a standard that allows storage suppliers – more than 130 drivers are available – to expose their systems to Kubernetes. CSI allows Kubernetes to trigger advanced data services such as snapshots, cloning and automated provisioning across block, file and object storage in on-premise and cloud environments.
CSI is essentially a “broker”. It is an industry-standard API that acts as a middleman, allowing Kubernetes to talk to external storage arrays. For example, when a developer requests storage via a persistent volume claim (PVC), the CSI driver tells the external storage box to carve out a piece of capacity and plug it into the container. The advantage is that you get to use the expensive, reliable enterprise storage you already own, but the storage is still “outside” the cluster, and if you move containers to a different cloud or datacentre, that external hardware might not be there.
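In practice, that developer request is a short declarative manifest. A minimal sketch of one expressed as a Python dict (the claim name `db-data` and storage class `fast-array` are invented; a real cluster would use whatever class its installed CSI driver registers):

```python
# A PersistentVolumeClaim asking a CSI-backed storage class for 100 GiB of
# block-style, single-node (ReadWriteOnce) storage. When this is applied to
# the cluster, the CSI driver behind "fast-array" provisions the capacity
# on the external array and binds it to the claim.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "db-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "fast-array",  # hypothetical CSI-backed class
        "resources": {"requests": {"storage": "100Gi"}},
    },
}
```

The pod then mounts the claim by name, never referencing the array directly, which is exactly the decoupling that keeps the application portable while the hardware is not.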
Meanwhile, container-native storage is storage that lives inside the Kubernetes cluster. It is usually deployed as a set of containers itself. It takes specified drives attached to Kubernetes nodes and pools them together into one big virtual resource.
Container-native storage potentially has the advantage of portability – on-premise, in the cloud, and so on, by virtue of the virtualisation inherent – while CSI is more likely to tie a deployment to deployed storage arrays.
Container-native storage is location independent, so you can run the same setup on-premise or in the cloud. But it can consume central processing unit (CPU) and random access memory (RAM) from your Kubernetes nodes to manage the data, which may be a concern.
Do we need containers to be that portable?
CSI offers connection to big-iron, fully featured storage, and container-native storage holds the promise of flexible deployment, portability, and so on. But is portability that important? Eric Phenix, who leads the engineering practice at analyst firm GigaOm, says not.
“Containers offer a compute abstraction layer that allows the application to be infrastructure agnostic, rather than a solution that is designed to make applications more portable,” he says.
Phenix argues that while containers make the code agnostic, deployment is another matter. “Unless a company is specifically a customer-facing instanced PaaS [platform as a service] where they need to run on every cloud, I don’t see the need to run the same workload on multiple clouds. Once things are deployed, they’re always messy to migrate,” he says.
And this “messiness” is almost always a data problem, according to Phenix. While the container image can move in seconds, the multi-terabyte persistent volume attached to it cannot.
James Brown, an analyst at GigaOm, points out that container-native storage is essentially software-defined storage and brings its own lock-ins. “Heavily integrated, container-native supplier platforms risk replacing hardware lock-in with software lock-in. Tying your architecture to proprietary in-cluster storage features creates massive migration hurdles, effectively breaking the core portability promise of Kubernetes,” he says.
So, the choice here comes down to just how portable you need things to be. Enterprises often use a hybrid approach: CSI to connect to massive, high-performance arrays for their heaviest databases; container-native storage for modern, distributed apps that need to be able to move without a “messy” data migration.
In 2026, choosing the correct storage protocol for containers is all about playing in a “mixed economy”, with a Kubernetes cluster able to pull from all three formats simultaneously.
Block for high performance
Block storage presents data as a raw, unformatted volume – like a physical hard drive – that is attached to a single node at a time. In Kubernetes, this is typically handled via persistent volumes using the ReadWriteOnce (RWO) access mode.
Block storage can be in on-premise arrays or in the cloud, such as in Amazon Elastic Block Store (EBS), Google Persistent Disk, or Microsoft Azure Disk.
Block storage offers the lowest latency and highest input/output operations per second (IOPS) because there is no filesystem overhead between the application and the storage. That makes it ideal for databases where small, frequent updates happen at specific locations within files.
When it comes to the cons, most block storage cannot be mounted to multiple pods across different nodes simultaneously, and scaling usually requires resizing the volume and expanding the filesystem. Block storage is generally the most expensive, too.
File for directory access
File storage provides a shared hierarchical namespace (folders and files) accessible over a network. In Kubernetes, it is the primary way to achieve ReadWriteMany, allowing multiple pods on different nodes to read and write to the same data.
It is also available in on-premise storage or cloud services such as Amazon Elastic File System (EFS), Microsoft Azure Files and Google Filestore.
File access is well suited to the horizontal scaling of web servers, where all pods need access to the same assets, and most legacy applications are built to read and write to a standard directory structure.

On the downside, network protocols such as NFS or SMB introduce more latency than block access, and at large scales (millions of files), traversing deep directory trees can become extremely slow. Handling concurrent writes across many pods can also lead to file-locking conflicts if not managed carefully.
Object for sizeable datastores
Object storage manages data as discrete objects in a flat namespace and is accessed via APIs (for example, S3 or Swift) rather than being “mounted” like a disk. It’s the cloud-native storage protocol, though it can run on-site, too. Examples include Amazon Simple Storage Service (S3), MinIO, Google Cloud Storage and Ceph RGW. Object storage can store petabytes of data without worrying about partition limits or disk sizes, and is usually the cheapest option for large-scale unstructured data (logs, images, backups).
Object storage is ideal for modern “cloud-native” apps that talk directly to storage via HTTP/HTTPS, bypassing the OS kernel entirely.
On the negative side, object storage is generally the slowest option for transactional work, offering high throughput but higher latency than block or file. And you can’t “edit” a single line within an object: you must re-upload the entire object to change it.
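That whole-object behaviour is easy to model. A toy in-memory “bucket”, not a real S3 client (the class and method names are invented), showing the flat key namespace and the re-upload-to-edit semantics:

```python
class ToyBucket:
    """Minimal object-store model: flat keys, whole-object put/get only."""

    def __init__(self):
        self._objects = {}  # key -> bytes; no directories, just key strings

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data  # always replaces the entire object

    def get(self, key: str) -> bytes:
        return self._objects[key]

bucket = ToyBucket()
bucket.put("logs/2026/01/app.log", b"line one\n")

# No partial edit exists: appending one line means reading the whole
# object back, modifying it locally, and putting the whole thing again.
body = bucket.get("logs/2026/01/app.log") + b"line two\n"
bucket.put("logs/2026/01/app.log", body)
```

The slash-separated key only looks like a directory path; to the store it is a single opaque string, which is why deep “directory” listings cost nothing but fine-grained edits cost a full round trip.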
Storage protocol decision-making
In summary, block storage is expensive but the best performing, file storage is less costly but with scale restrictions, and object storage is great for huge capacity but also lags in performance terms. So, which one to choose? It’s a case of horses for courses, according to Tony Lock, director of engagement and distinguished analyst at Freeform Dynamics.
“In an ideal world, the choice of underlying storage – block, file or object – will likely depend on what the app is, where the organisation wishes to run it, and what its characteristics are in terms of size, number of containers, latency requirements, security, location, cost, etc,” he says.
Meanwhile, Whit Walters, field chief technology officer at GigaOm, believes S3 is winning the battle, but block has its place. He says: “The real story is protocol bifurcation inside AI pipelines. Object storage dominates the ingestion and data lake tier, offering exabyte-scale horizontal scaling with rich, customisable metadata that enables semantic discovery natively at the storage layer.
“Block storage still owns the inference hot path where vector databases demand 500,000+ IOPS, however.
“The emerging trend to watch is COSI, the Container Object Storage Interface, which aims to make object storage buckets first-class Kubernetes resources with standardised, declarative lifecycle management.”
CSI vs container-native in storage supplier platforms
All the big storage suppliers provide some form of platform or wrapper for container storage. These include Dell’s Container Storage Modules, HPE’s Ezmeral Runtime Enterprise, the Hitachi Kubernetes Service (HKS), NetApp’s Astra and Pure Storage’s Portworx.
What they all have in common is a means of managing container storage – and in some cases, data protection and more. Where they differ under the hood is that most are based around CSI, so they provide a layer from which to manage CSI drivers to their storage.
Some differ in that they provide their management functionality from within Kubernetes. Pure Storage’s Portworx, for example, lives entirely within Kubernetes but uses CSI as a “handshake” with external storage.
Meanwhile, HPE Ezmeral also runs in Kubernetes but accesses data via the CSI driver. NetApp’s Astra Datastore was container-native in a similar way to Portworx, but was discontinued in 2023.
While all the key storage suppliers offer products that can manage storage for containers, be sure to check the extent to which these are container-native or dependent on CSI. CSI connectivity may well be better suited to larger, more static environments, while container-native solutions can be best for more dynamic sets of workloads.
GigaOm’s Walters puts a finer point on it: “The Kubernetes tax is real, but it’s a trade-off. Container-native platforms run replication, dedupe and encryption on worker nodes. Ceph alone carries a 2-10% baseline CPU penalty per node just for cluster quorum, and that spikes hard during replica rebuilds.
“In GPU [graphics processing unit]-dense AI environments, where every cycle counts, offloading that work to dedicated array ASICs [application-specific integrated circuits] via an advanced CSI model keeps compute nodes clean. But in multicloud or edge scenarios without dedicated arrays, that CPU tax buys you topology-aware placement and self-healing automation that’s genuinely hard to replicate otherwise.”
There may also be performance considerations in terms of contention for resources, as well as questions about how they are administered.
Towards autonomous, agentic storage
As we look towards 2027, the focus is shifting from manual provisioning to policy-driven storage.
The ultimate goal is a system where the storage “senses” workload requirements. For example, if an AI training container spins up, the system automatically provisions high-throughput file storage, or if a database scales up, it gets low-latency block storage.
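In code, the first step towards such policy-driven behaviour could be as simple as a lookup from a declared workload profile to a storage class. A minimal sketch (the profile keys and class names are invented, not drawn from any product):

```python
def storage_class_for(workload: dict) -> str:
    """Pick a storage class from a workload's declared profile (illustrative)."""
    profile = workload.get("profile")
    if profile == "ai-training":
        return "high-throughput-file"  # shared datasets, many parallel readers
    if profile == "database":
        return "low-latency-block"     # small, frequent in-place writes
    return "standard-object"           # default: cheap bulk capacity
```

A genuinely autonomous system would go further, observing actual I/O patterns and migrating volumes between tiers, but even a static policy table like this removes the per-application provisioning decision from humans.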