Tech
Jaguar Land Rover extends production shutdown for another week | Computer Weekly
Jaguar Land Rover (JLR) has extended a pause in vehicle production for at least another week following a cyber attack by the Scattered Lapsus$ Hunters hacking collective, which comprises members of the Scattered Spider, ShinyHunters and Lapsus$ gangs.
The incident, which began at the end of August before becoming public on 2 September, forced the suspension of work at JLR’s Merseyside plant and has also affected its retail services.
It has since emerged that data of an undisclosed nature was compromised by the cyber gang – which has been boasting of its exploits on Telegram but has also now claimed to have retired. JLR has notified the relevant regulators, and its forensic investigation continues.
A JLR spokesperson said: “Today we have informed colleagues, suppliers and partners that we have extended the current pause in our production until Wednesday 24 September 2025.
“We have taken this decision as our forensic investigation of the cyber incident continues, and as we consider the different stages of the controlled restart of our global operations, which will take time.
“We are very sorry for the continued disruption this incident is causing and we will continue to update as the investigation progresses,” they said.
James McQuiggan, CISO advisor at KnowBe4, said the continuing disruption at JLR demonstrated how entwined cyber security and wider business resilience have now become.
“When core systems are taken offline, the impact cascades through employees, suppliers and customers, showing that business continuity and cyber defence should be indivisible,” he said. “Beyond immediate disruption, data theft during such incidents increases the long-term risks, from reputational damage to regulatory consequences.”
“To mitigate these risks, organisations should regularly test and update their business continuity and incident response plans, strengthen supply chain risk assessments, and adopt zero-trust principles to limit attacker movement.”
McQuiggan added: “Just as important is addressing human risk, as social engineering remains the leading entry point for attackers. Ongoing security awareness, phishing simulations, and behavior analysis of users in a human risk management program help users recognise and resist malicious tactics. By combining strong technical controls with a culture of cyber resilience, organisations can reduce their exposure and recover with greater confidence.”
Golden parachutes
Meanwhile, the supposed Scattered Lapsus$ Hunters shutdown – announced via BreachForums and Telegram across a number of frequently crude postings – saw ‘farewell’ messages that included a number of apologies to the families of some gang members scooped up in law enforcement actions, to JLR, and to Google and CrowdStrike.
In the messages, reviewed by CyberNews, one of the supposed gang members even addressed the CIA, saying they were “so very sorry” they leaked classified documents and “had no idea what they were doing”.
“Please forgive me and f*** Iran. I will be going to the rehab center for 60 days,” they added.
The gang’s alleged climbdown has drawn a sceptical eye from cyber community members who, based on years of experience, know that cyber criminals rarely if ever pack up shop and go straight.
Cian Heasley, principal consultant at Acumen Cyber, said that the gang’s talk of activating “contingency plans” and a call for fans not to worry about them as they would be enjoying their “golden parachutes with the millions the group accumulated [sic]”, seemed far-fetched.
“This is a transparent move that suggests its members are buying some breathing time, panicking about the threat of prison, and arguing behind the scenes about how much trouble they are actually in and the need to be cautious,” said Heasley.
“Given the volatile and explosive nature of the group, it’s hard to imagine they carried out this level of due diligence.
“The lure of the money and excitement that comes with cyber crime will inevitably draw them back in eventually,” added Heasley.
Indeed, even amid its farewell messages, Scattered Lapsus$ Hunters hinted at future developments and taunted the likes of the FBI and Mandiant, and various victims including luxury goods house Kering and Air France.
It also named British Airlines, an organisation that does not exist but which may be a reference to British Airways (BA).
BA is not known to have been attacked at the time of writing, suggesting that more victims of the recent hacking spree may yet come to light.
An Anarchist’s Conviction Offers a Grim Foreshadowing of Trump’s War on the ‘Left’
By the standards of the San Francisco Bay Area’s hard left, Casey Goonan’s crimes were unremarkable. A police SUV partially burned by an incendiary device on UC Berkeley’s campus. A planter of shrubs lit on fire after Goonan unsuccessfully tried to smash a glass office window and throw a firebomb into the federal building in downtown Oakland.
But thanks to a series of communiques in which Goonan claimed to have carried out the summer 2024 attacks in solidarity with Hamas, and to the East Bay native’s anarchist beliefs, federal prosecutors claimed Goonan “intended to promote” terrorism on top of a felony count for using an incendiary device. Goonan’s original charges notably did not contain terrorism counts. In late September, US District Court Judge Jeffrey White sentenced Goonan, whom the judge called “a domestic terrorist” during the hearing, to 19 and a half years in prison plus 15 years of probation. Prosecutors also asked that Goonan be sent to a Bureau of Prisons facility that contains a Communications Management Unit, a highly restrictive assignment reserved for what the government claims are “extremist” inmates with terrorism-related offenses or affiliations.
Although Goonan’s case began under the Biden Administration, it offers a glimpse of the approach the Department of Justice may take in President Donald Trump’s forthcoming offensive against the “left,” formalized in late September in National Security Presidential Memorandum 7 (NSPM-7), an executive order targeting anti-fascist beliefs, opposition to Immigration and Customs Enforcement raids, and criticism of capitalism and Christianity as potential “indicators of terrorism.”
In addition to Goonan’s purported admiration for Hamas—a designated terrorist organization since 1997—and cofounding of True Leap, a tiny anarchist publisher, the biography of the 35-year-old, who holds a doctorate in African-American Studies, includes another trait being targeted by the Trump administration and its allies: Goonan identifies as transgender. While NSPM-7 cites “extremism migration, race, and gender” as an indicator of “this pattern of violent and terroristic tendencies,” the Heritage Foundation has attempted to link gender-fluid identity to mass shootings and is urging the FBI to create a new, specious domestic terrorism classification of “Transgender Ideology-Inspired Violent Extremism,” or TIVE.
The executive order, meanwhile, directs the American security state’s sprawling post-9/11 counterterrorism apparatus to be reoriented away from neo-Nazis, Proud Boys, white nationalists, Christian nationalists, and other extreme right-wing actors, who have been responsible for the overwhelming majority of political violence in the past few decades, and towards opponents of ICE, anti-fascists, and the administration writ large. Along with potentially violent actors, NSPM-7 instructs federal law enforcement to scrutinize nonprofit groups and philanthropic foundations involved in funding organizations that espouse amorphous ideologies, from “support for the overthrow of the United States Government” to expressing “hostility towards those who hold traditional American views on family, religion, and morality.”
“NSPM-7 is the natural culmination of ‘radicalization theory’ as the basis for the American approach to counterterrorism,” says Mike German, a retired FBI agent who spent years infiltrating violent white supremacist groups and quit the Bureau in response to its post-9/11 shift in terrorism strategy. German explored radicalization theory’s trajectory in his 2019 book, Disrupt, Discredit and Divide: How the New FBI Damages Democracy.
What are the storage requirements for AI training and inference? | Computer Weekly
Despite ongoing speculation around an investment bubble that may be set to burst, artificial intelligence (AI) technology is here to stay. And while an over-inflated market may exist at the level of the suppliers, AI is well-developed and has a firm foothold among organisations of all sizes.
But AI workloads place specific demands on IT infrastructure and on storage in particular. Data volumes can start big and then balloon, in particular during training phases as data is vectorised and checkpoints are created. Meanwhile, data must be curated, gathered and managed throughout its lifecycle.
In this article, we look at the key characteristics of AI workloads, the particular demands of training and inference on storage I/O, throughput and capacity, whether to choose object or file storage, and the storage requirements of agentic AI.
What are the key characteristics of AI workloads?
AI workloads can be broadly categorised into two key stages – training and inference.
During training, processing focuses on what is effectively pattern recognition. Large volumes of data are examined by an algorithm – likely part of a deep learning framework like TensorFlow or PyTorch – that aims to recognise features within the data.
This could be visual elements in an image or particular words or patterns of words within documents. These features, which might fall under the broad categories of “a cat” or “litigation”, for example, are given values and stored in a vector database.
The assigned values provide further detail. So, for example, “a tortoiseshell cat” would comprise discrete values for “cat” and “tortoiseshell” that make up the whole concept and allow comparison and calculation between images.
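The idea of composing a concept from discrete feature values can be sketched in a few lines of Python. This is a toy illustration only: the vectors, labels and the in-memory “database” are hypothetical stand-ins, not output from a real embedding model or vector database.

```python
# Toy illustration: discrete feature values stored as vectors, then
# composed into a whole concept. All values here are made up.

vector_db = {}  # stand-in for a real vector database

def store(label, vector):
    """Store a feature vector under a label."""
    vector_db[label] = vector

# Discrete features, each represented by a (hypothetical) vector
store("cat", [0.9, 0.1, 0.0])
store("tortoiseshell", [0.0, 0.8, 0.7])

def compose(*labels):
    """Combine discrete feature vectors into one concept vector
    by averaging them component-wise."""
    vectors = [vector_db[label] for label in labels]
    return [sum(vals) / len(vals) for vals in zip(*vectors)]

# "A tortoiseshell cat" as the composition of its discrete features
tortoiseshell_cat = compose("cat", "tortoiseshell")
```

Real systems use high-dimensional embeddings and more sophisticated composition, but the principle is the same: each feature contributes values that together describe the whole concept.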
Once the AI system is trained on its data, it can then be used for inference – literally, to infer a result from production data that can be put to use for the organisation.
So, for example, we may have an animal-tracking camera and want it to alert us when a tortoiseshell cat crosses our garden. To do that, it would infer the presence or absence of a cat, and whether it is tortoiseshell, by reference to the dataset built during the training described above.
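Inference against a trained vector set can be sketched as a similarity lookup. Again, this is a hypothetical illustration: the “trained” vectors are invented, and a real system would use a vector database and learned embeddings rather than a hand-written cosine comparison.

```python
import math

# Hypothetical "trained" vector set: labels mapped to feature vectors
trained = {
    "tortoiseshell cat": [0.45, 0.45, 0.35],
    "dog": [0.1, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def infer(vector, threshold=0.95):
    """Return the closest trained label, or None if nothing is similar enough."""
    label, score = max(
        ((lbl, cosine(vector, v)) for lbl, v in trained.items()),
        key=lambda pair: pair[1],
    )
    return label if score >= threshold else None

# A new camera frame, already converted to a feature vector,
# is matched against the trained set
result = infer([0.5, 0.4, 0.3])
```

The I/O pattern this implies is key for storage: inference is many small reads against a large trained dataset, rather than the bulk sequential scans of training.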
But, while AI processing falls into these two broad categories, the split is not so clear-cut in real life. Training will always be done on an initial dataset, but after that, while inference is an ongoing process, training is also likely to become perpetual as new data is ingested and new inference results flow from it.
So, to labour the example, our cat-garden-camera system may record new cats of unknown types and begin to categorise their features and add them to the model.
What are the key impacts on data storage of AI processing?
At the heart of AI hardware are specialised chips called graphics processing units (GPUs). These do the grunt processing work of training and are incredibly powerful, costly and often difficult to procure. For these reasons their utilisation rates are a major operational IT consideration – storage must be able to handle their I/O demands so they are optimally used.
Therefore, data storage that feeds GPUs during training must be fast, so it’s almost certainly going to be built with flash storage arrays.
Another key consideration is capacity. That’s because AI datasets can start big and get much bigger. As datasets undergo training, the conversion of raw information into vector data can see data volumes expand by up to 10 times.
Also, during training, checkpointing is carried out at regular intervals, often after every “epoch” or pass through the training data, or after changes are made to parameters.
Checkpoints are similar to snapshots, and allow training to be rolled back to a point in time if something goes wrong so that existing processing does not go to waste. Checkpointing can add significant data volume to storage requirements.
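The storage cost of checkpointing can be sketched generically. The example below uses plain Python pickling as a stand-in for framework mechanisms such as `torch.save`; the model state, epoch count and file layout are illustrative assumptions, but the pattern is the same: a full copy of the state is written after every epoch, so capacity grows with the number of checkpoints retained.

```python
import os
import pickle
import tempfile

# Generic sketch of epoch checkpointing; real frameworks
# (e.g. PyTorch's torch.save) follow the same save-per-epoch pattern.

def save_checkpoint(state, directory, epoch):
    """Persist the full model state after one epoch (pass through the data)."""
    path = os.path.join(directory, f"checkpoint_epoch_{epoch}.pkl")
    with open(path, "wb") as f:
        pickle.dump(state, f)
    return path

ckpt_dir = tempfile.mkdtemp()
model_state = {"weights": [0.0] * 1000}  # stand-in for real parameters

for epoch in range(3):
    # "Training": nudge every weight, then checkpoint the whole state
    model_state["weights"] = [w + 0.1 for w in model_state["weights"]]
    save_checkpoint(model_state, ckpt_dir, epoch)

# Each checkpoint is a full copy, so storage demand scales with epoch count
n_checkpoints = len(os.listdir(ckpt_dir))
```

Rolling back simply means loading an earlier checkpoint file, which is why checkpoint restore speed, and therefore storage read throughput, matters as much as write speed.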
So, sufficient storage capacity must be available, and will often need to scale rapidly.
What are the key impacts of AI processing on I/O and capacity in data storage?
The I/O demands of AI processing on storage are huge. Model data in use will often simply not fit into a single GPU’s memory, and so is parallelised across many GPUs.
Also, AI workloads and I/O differ significantly between training and inference. As we’ve seen, the massive parallel processing involved in training requires low latency and high throughput.
While low latency is a universal requirement during training, throughput demands may differ depending on the deep learning framework used. PyTorch, for example, stores model data as a large number of small files while TensorFlow uses a smaller number of large model files.
The model used can also impact capacity requirements. TensorFlow checkpointing tends towards larger file sizes, plus dependent data states and metadata, while PyTorch checkpointing can be more lightweight. TensorFlow deployments tend to have a larger storage footprint generally.
If the model is parallelised across numerous GPUs, this affects checkpoint writes and restores, which means storage I/O must be up to the job.
Does AI processing prefer file or object storage?
While AI infrastructure isn’t necessarily tied to one storage access method or the other, object storage has a lot going for it.
Most enterprise data is unstructured data and exists at scale, and it is often what AI has to work with. Object storage is supremely well suited to unstructured data because of its ability to scale. It also comes with rich metadata capabilities that can help data discovery and classification before AI processing begins in earnest.
File storage stores data in a tree-like hierarchy of files and folders. That can become unwieldy to access at scale. Object storage, by contrast, stores data in a “flat” structure, by unique identifier, with rich metadata. It can mimic file and folder-like structures by addition of metadata labels, which many will be familiar with in cloud-based systems such as Google Drive, Microsoft OneDrive and so on.
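The difference between a flat object keyspace and a file hierarchy can be shown with a minimal in-memory sketch. This is a hypothetical illustration, not a real object storage API: every object lives in a single namespace under a unique identifier, and folder-like grouping is nothing more than a metadata label that can be filtered on.

```python
import uuid

# Minimal sketch of a "flat" object store: no directory tree,
# just unique IDs plus metadata. Names and fields are illustrative.

store = {}

def put_object(data, metadata):
    """Store an object under a freshly generated unique identifier."""
    object_id = str(uuid.uuid4())  # flat namespace, no hierarchy
    store[object_id] = {"data": data, "metadata": metadata}
    return object_id

def list_by_prefix(prefix):
    """Mimic a folder listing by filtering on a metadata 'path' label."""
    return [
        oid for oid, obj in store.items()
        if obj["metadata"].get("path", "").startswith(prefix)
    ]

img_id = put_object(b"training image", {"path": "datasets/cats/", "format": "jpeg"})
doc_id = put_object(b"legal document", {"path": "documents/litigation/", "format": "pdf"})

cat_objects = list_by_prefix("datasets/")
```

Because the “folders” are just metadata, the store scales without the deep-tree traversal costs of a filesystem, and the same metadata can carry the classification labels that help AI data discovery.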
Object storage can, however, be slow to access and lacks file-locking capability, though this is likely to be of less concern for AI workloads.
What impact will agentic AI have on storage infrastructure?
Agentic AI uses autonomous AI agents that carry out specific tasks and make decisions without human oversight, within specific, predetermined boundaries.
Examples would include the use of agents in IT security to scan for threats and take action without human involvement, to spot and initiate actions in a supply chain, or in a call centre to analyse customer sentiment, review order history and respond to customer needs.
Agentic AI is largely an inference-phase phenomenon, so compute infrastructure will not need to handle training-type workloads. Having said that, agentic AI agents will potentially access multiple data sources across on-premises systems and the cloud, spanning the full range of storage types and performance levels.
But, to work at its best, agentic AI will need high-performance, enterprise-class storage that can handle a wide variety of data types with low latency and with the ability to scale rapidly. That’s not to say datasets in less performant storage cannot form part of the agentic infrastructure. But if you want your agents to work at their best you’ll need to provide the best storage you can.
Opengear cuts network downtime for 35 global sites | Computer Weekly
Secure and resilient remote management is critical for global organisations managing complex, geographically dispersed network infrastructure in a multicloud world. Against that backdrop, global managed hosting and cloud services provider Hyve Managed Hosting claims to have “dramatically” cut its network downtime, accelerated response times and lowered the costs associated with external technical support through advanced network management solutions.
The operational gains achieved across the estate are said to have been realised through the deployment of Opengear’s Smart Out-of-Band technology, specifically the ACM7000 and IM7200 with 4G LTE failover.
Hyve Managed Hosting has operations spanning 35 international sites in Europe, the US, Asia and Africa. The network project began with Hyve’s engineering team navigating the complexities of accessing parts of the network without on-site technical personnel. Simple configuration errors could lock engineers out of network devices and potentially affect response time or service continuity.
To address these challenges, Hyve needed a secure and reliable remote connectivity solution for every stage of network management, from initial setup and day-zero provisioning and configuration to upgrades and ongoing troubleshooting.
To align the new systems with its own security, resilience and scalability requirements, Hyve implemented Opengear’s remote management solutions with built-in cellular connectivity. All Opengear devices were configured at Hyve’s UK headquarters and then deployed globally.
Since implementation, Hyve’s Opengear secure remote access solutions have provided uninterrupted access to critical network infrastructure, even when primary network connections fail, directly leading to enhanced network uptime for Hyve customers. The centralised configuration and deployment from Hyve’s UK HQ have also minimised the need for local technical support and maintenance visits, optimising service costs globally.
The setup has enabled Hyve’s engineering team to remotely resolve issues, such as configuration errors, that previously required on-site personnel. This is claimed to have “drastically” accelerated customer response times and service continuity worldwide.
Commenting on the deployment so far, Hyve technical team lead Roberto Bello Hurtado said: “Having Opengear’s Out-of-Band solution in place has been invaluable for our team. Knowing we can access our network devices from anywhere gives us peace of mind and allows us to support our global infrastructure effectively.”
Opengear president and general manager Patrick Quirk added: “Hyve runs a global business where downtime is not an option. By deploying always-on, secure remote management, it puts resilience at the centre of its growth strategy. As the industry faces rising outages and greater complexity, Hyve is not reacting. It is leading.”
Looking ahead, as it expands globally, particularly in the US, Hyve plans to enhance its network resilience further with Opengear’s Lighthouse software. Lighthouse’s features are said to be designed to drive further efficiencies and provide a future-ready foundation for growth.