UK, US urge Cisco users to ditch end-of-life security appliances | Computer Weekly

An ongoing campaign of cyber attacks exploiting vulnerabilities in the Cisco Adaptive Security Appliance (ASA) family of unified threat management (UTM) kit has prompted British and American authorities to warn users to unplug and discard outdated, out-of-support equipment.
Cisco ASA is a multipurpose line of security appliances that, on its introduction in the 2000s, consolidated functions Cisco had previously offered as standalone products, including firewalls, intrusion prevention and virtual private networking (VPN). It remains in wide use today, particularly among small and medium-sized enterprises (SMEs).
The alert stems from two distinct flaws in the technology – CVE-2025-20333, enabling remote code execution (RCE), and CVE-2025-20362, enabling elevation of privileges (EoP). A third arbitrary code execution vulnerability, CVE-2025-20363, has also been identified but falls outside the scope of this alert.
Cisco said the issues affect Cisco ASA 5500-X Series models running Cisco ASA Software Release 9.12 or 9.14 with VPN web services enabled. The specific models involved are the 5512-X, 5515-X, 5525-X, 5545-X, 5555-X and 5585-X, some of which reached end-of-life status in 2017. Two of them, the 5512-X and 5515-X, have been out of support since 2022.
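For administrators triaging their estate, the affected combination above amounts to a simple predicate. Below is a minimal, hypothetical sketch of such an inventory check; the function and data names are illustrative, and this is not a Cisco-provided tool.

```python
# Hypothetical inventory check for the affected combination described
# above. Not a Cisco tool; device records would come from your own CMDB.
AFFECTED_MODELS = {"5512-X", "5515-X", "5525-X", "5545-X", "5555-X", "5585-X"}
AFFECTED_RELEASES = ("9.12", "9.14")

def at_risk(model: str, release: str, vpn_web_enabled: bool) -> bool:
    """Flag a device matching the affected model/release/VPN criteria."""
    return (model in AFFECTED_MODELS
            and release.startswith(AFFECTED_RELEASES)
            and vpn_web_enabled)

# Example: a 5515-X on release 9.12(4) with VPN web services enabled.
assert at_risk("5515-X", "9.12(4)", True)
```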
The National Cyber Security Centre (NCSC) strongly recommended that, where practicable, ASA models falling out of support over the next 12 months be replaced, noting the significant risks that obsolete, end-of-life hardware can pose.
“It is critical for organisations to take note of the recommended actions highlighted … particularly on detection and remediation,” said NCSC chief technology officer Ollie Whitehouse.
“We strongly encourage network defenders to follow vendor best practices and engage with the NCSC’s malware analysis report to assist with their investigations.
“End-of-life technology presents a significant risk for organisations. Systems and devices should be promptly migrated to modern versions to address vulnerabilities and strengthen resilience,” he said.
In an emergency directive issued ahead of the weekend of 27-28 September, the US Cybersecurity and Infrastructure Security Agency (CISA) directed all users within the American government to account for and update Cisco ASA devices, as well as Cisco Firepower devices, which are also affected.
CISA echoed the NCSC’s warning, saying that any ASA hardware models with an end-of-support date on or before Tuesday 30 September 2025 should be permanently disconnected immediately.
“These legacy platforms [and/or] releases cannot meet current vendor support and update requirements,” said CISA.
What’s the problem?
According to Cisco, the latest vulnerabilities are being exploited by the threat actor behind the ArcaneDoor campaign, which first came to light in April 2024 and is thought to be the work of a nation state-backed group.
The activity is thought to predate that disclosure by several months: Cisco’s Talos threat intelligence unit identified attacker-controlled infrastructure active in November 2023, and possible test and development work on earlier exploits in July of that year.
Cisco said it had been working with multiple affected customers, including government agencies, on investigating the latest series of attacks for some time. It described the attacks as complex and sophisticated, requiring an extensive response, and added that the threat actor was still actively scanning for targets of interest.
The campaign has been linked to two strains of malware, named Line Dancer and Line Runner, which were the subject of alerts in 2024.
Line Dancer, a shellcode loader, and Line Runner, a Lua webshell, work in tandem to enable the threat actors to achieve their objectives on ASA devices.
Interrupting encoder training in diffusion models enables more efficient generative AI

Researchers at Science Tokyo have developed a new framework for generative diffusion models that significantly improves generative AI. The method reinterprets Schrödinger bridge models as variational autoencoders with infinitely many latent variables, reducing computational costs and preventing overfitting. By interrupting the training of the encoder at the right moment, the approach enables more efficient generative AI, with broad applicability beyond standard diffusion models.
Diffusion models are among the most widely used approaches in generative AI for creating images and audio. These models generate new data by gradually adding noise (noising) to real samples and then learning how to reverse that process (denoising) back into realistic data. A widely used variant, the score-based model, achieves this via a diffusion process that connects the prior distribution to the data distribution over a sufficiently long time interval. The limitation of this method is that when the data differs strongly from the prior, the noising and denoising processes require longer time intervals, which slows down sample generation.
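To make the noising/denoising idea concrete, here is a minimal sketch of a score-based training step in the DDPM style, assuming toy one-dimensional data; the network, schedule and names are illustrative and are not taken from the paper.

```python
import torch

T = 1000                                        # number of noising steps
betas = torch.linspace(1e-4, 0.02, T)           # noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal kept

denoiser = torch.nn.Sequential(                 # toy noise-prediction network
    torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))

def training_step(x0):
    """Noise real samples x0, then train the network to predict that noise."""
    t = torch.randint(0, T, (x0.shape[0],))
    ab = alphas_bar[t].unsqueeze(-1)            # fraction of signal at step t
    eps = torch.randn_like(x0)                  # injected Gaussian noise
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps   # noising
    t_feat = t.float().unsqueeze(-1) / T        # crude time conditioning
    pred = denoiser(torch.cat([xt, t_feat], dim=-1))
    return ((pred - eps) ** 2).mean()           # denoising objective

# Example: one step on a toy batch drawn from N(3, 2^2)
loss = training_step(torch.randn(128, 1) * 2 + 3)
```

Predicting the injected noise is equivalent, up to scaling, to learning the score of the noised data distribution, which is what lets the reverse process turn pure noise back into realistic samples.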
Now, a research team from the Institute of Science Tokyo (Science Tokyo), Japan, has proposed a new framework for diffusion models that is faster and computationally less demanding. They achieved this by reinterpreting Schrödinger bridge (SB) models, a type of diffusion model, as variational autoencoders (VAEs).
The study was led by graduate student Mr. Kentaro Kaba and Professor Masayuki Ohzeki from the Department of Physics at Science Tokyo, in collaboration with Mr. Reo Shimizu (then a graduate student) and Associate Professor Yuki Sugiyama from the Graduate School of Information Sciences at Tohoku University, Japan. Their findings were published in Physical Review Research on September 3, 2025.
SB models offer greater flexibility than standard score-based models because they can connect any two probability distributions over a finite time using a stochastic differential equation (SDE). This supports more complex noising processes and higher-quality sample generation. The trade-off, however, is that SB models are mathematically complex and expensive to train.
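For readers unfamiliar with the notation, the generic form of such an SDE (standard textbook notation, not notation quoted from the paper) is:

```latex
dx_t = f(x_t, t)\,dt + g(t)\,dW_t
```

where $f$ is a drift term learned by a neural network, $g(t)$ scales the injected noise, and $W_t$ is a Wiener process. The SB problem asks for the drift that transports one prescribed distribution at $t = 0$ to another at $t = T$ over a finite interval, which is what lets SB models connect arbitrary pairs of distributions.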
The proposed method addresses this by reformulating SB models as VAEs with multiple latent variables. “The key insight lies in extending the number of latent variables from one to infinity, leveraging the data-processing inequality. This perspective enables us to interpret SB-type models within the framework of VAEs,” says Kaba.
In this setup, the encoder represents the forward process that maps real data onto a noisy latent space, while the decoder reverses the process to reconstruct realistic samples, and both processes are modeled as SDEs learned by neural networks.
The model employs a training objective with two components. The first is the prior loss, which ensures that the encoder correctly maps the data distribution to the prior distribution. The second is drift matching, which trains the decoder to mimic the dynamics of the reverse encoder process. Moreover, once the prior loss stabilizes, encoder training can be stopped early. This allows training to finish faster, reducing the risk of overfitting while preserving the high accuracy of SB models.
“The objective function is composed of the prior loss and drift matching parts, which characterizes the training of neural networks in the encoder and the decoder, respectively. Together, they reduce the computational cost of training SB-type models. It was demonstrated that interrupting the training of the encoder mitigated the challenge of overfitting,” explains Ohzeki.
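A minimal, hypothetical sketch of this two-part objective, assuming Euler-discretised SDEs on toy one-dimensional data; the surrogate losses, threshold, and names below are illustrative, not the authors' implementation.

```python
import torch

# Illustrative only: Euler-discretised encoder (noising) and decoder
# (denoising) SDE drifts on toy 1-D data. Not the authors' code.
encoder_drift = torch.nn.Linear(2, 1)    # forward drift f_enc(x, t)
decoder_drift = torch.nn.Linear(2, 1)    # reverse drift f_dec(x, t)

def prior_loss(x0, n_steps=100, dt=0.01):
    """Run the encoder SDE forward; penalise mismatch with a N(0, 1) prior."""
    x = x0
    for k in range(n_steps):
        t = torch.full_like(x, k * dt)
        drift = encoder_drift(torch.cat([x, t], -1))
        x = x + drift * dt + torch.randn_like(x) * dt ** 0.5
    # crude moment-matching surrogate for the divergence from the prior
    return x.mean() ** 2 + (x.var() - 1.0) ** 2

def drift_matching_loss(x0, n_steps=100, dt=0.01):
    """Teach the decoder to step backwards along frozen encoder trajectories."""
    with torch.no_grad():                # encoder dynamics held fixed here
        xs = [x0]
        for k in range(n_steps):
            t = torch.full_like(xs[-1], k * dt)
            f = encoder_drift(torch.cat([xs[-1], t], -1))
            xs.append(xs[-1] + f * dt + torch.randn_like(xs[-1]) * dt ** 0.5)
    loss = 0.0
    for k in range(n_steps):
        t = torch.full_like(xs[k], k * dt)
        back = decoder_drift(torch.cat([xs[k + 1], t], -1))
        loss = loss + ((xs[k + 1] - back * dt - xs[k]) ** 2).mean()
    return loss / n_steps

opt_enc = torch.optim.Adam(encoder_drift.parameters(), lr=1e-3)
opt_dec = torch.optim.Adam(decoder_drift.parameters(), lr=1e-3)
encoder_frozen = False

for step in range(2000):
    x0 = torch.randn(128, 1) * 2 + 3     # toy 1-D data distribution
    if not encoder_frozen:
        lp = prior_loss(x0)
        opt_enc.zero_grad()
        lp.backward()
        opt_enc.step()
        encoder_frozen = lp.item() < 1e-3   # interrupt encoder training early
    ld = drift_matching_loss(x0)
    opt_dec.zero_grad()
    ld.backward()
    opt_dec.step()
```

Freezing the encoder once the prior loss settles is the "interruption" the researchers describe; from that point only the decoder continues to learn, which is where the savings in training cost come from.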
This approach is flexible and can be applied to other probabilistic rule sets, even non-Markov processes, making it a broadly applicable training scheme.
More information:
Kentaro Kaba et al, Schrödinger bridge-type diffusion models as an extension of variational autoencoders, Physical Review Research (2025). DOI: 10.1103/dxp7-4hby
Citation:
Interrupting encoder training in diffusion models enables more efficient generative AI (2025, September 29)
retrieved 29 September 2025
from https://techxplore.com/news/2025-09-encoder-diffusion-enables-efficient-generative.html
OpenAI Is Preparing to Launch a Social App for AI-Generated Videos

OpenAI is preparing to launch a stand-alone app for its video generation AI model Sora 2, WIRED has learned. The app, which features a vertical video feed with swipe-to-scroll navigation, appears to closely resemble TikTok—except all of the content is AI-generated. There’s a For You–style page powered by a recommendation algorithm. On the right side of the feed, a menu bar gives users the option to like, comment, or remix a video.
Users can create video clips up to 10 seconds long using OpenAI’s next-generation video model, according to documents viewed by WIRED. There is no option to upload photos or videos from a user’s camera roll or other apps.
The Sora 2 app has an identity verification feature that allows users to confirm their likeness. If a user has verified their identity, they can use their likeness in videos. Other users can also tag them and use their likeness in clips. For example, someone could generate a video of themselves riding a roller coaster at a theme park with a friend. Users will get a notification whenever their likeness is used—even if the clip remains in draft form and is never posted, sources say.
OpenAI launched the app internally last week. So far, it’s received overwhelmingly positive feedback from employees, according to documents viewed by WIRED. Employees have been using the tool so frequently that some managers have joked it could become a drain on productivity.
OpenAI declined to comment.
OpenAI appears to be betting that the Sora 2 app will let people interact with AI-generated video in a way that fundamentally changes their experience of the technology—similar to how ChatGPT helped users realize the potential of AI-generated text. Internally, sources say, there’s also a feeling that President Trump’s on-again, off-again deal to sell TikTok’s US operations has given OpenAI a unique opportunity to launch a short-form video app—particularly one without close ties to China.
OpenAI officially launched Sora in December of last year. Initially, people could only access it via a web page, but it was soon incorporated directly into the ChatGPT app. At the time, the model was among the most advanced AI video generators available, though OpenAI noted it had some limitations. For example, it didn’t seem to fully understand physics and struggled to produce realistic action scenes, especially in longer clips.
OpenAI’s Sora 2 app will compete with new AI video offerings from tech giants like Meta and Google. Last week, Meta introduced a new feed in its Meta AI app called Vibes, which is dedicated exclusively to creating and sharing short AI-generated videos. Earlier this month, Google announced that it was integrating a custom version of its latest video generation model, Veo 3, into YouTube.
TikTok, on the other hand, has taken a more cautious approach to AI-generated content. The video app recently redefined its rules around what kind of AI-generated videos it allows on the platform. It now explicitly bans AI-generated content that’s “misleading about matters of public importance or harmful to individuals.”
The Sora 2 app often refuses to generate videos due to copyright safeguards and other filters, sources say. OpenAI is currently fighting a series of lawsuits over alleged copyright infringement, including a high-profile case brought by The New York Times. The Times case centers on allegations that OpenAI trained its models on the paper’s copyrighted material.
OpenAI is also facing mounting criticism over child safety issues. On Monday, the company released new parental controls, including the option for parents and teenagers to link their accounts. The company also said that it is working on an age-prediction tool that could automatically route users believed to be under the age of 18 to a more restricted version of ChatGPT that doesn’t allow for romantic interactions, among other things. It is not known what age restrictions might be incorporated into the Sora 2 app.
This is an edition of the Model Behavior newsletter.
More people are using AI in court, not a lawyer. It could cost you money—and your case

When you don’t have the money for a lawyer to represent you in a court case, even judges can understand the temptation to get free help from anywhere—including tapping into generative artificial intelligence (AI).
As Judge My Anh Tran in the County Court of Victoria said this year: “Generative AI can be beguiling, particularly when the task of representing yourself seems overwhelming. However, a litigant runs the risk that their case will be damaged, rather than helped, if they choose to use AI without taking the time to understand what it produces, and to confirm that it is both legally and factually accurate.”
Our research has so far found 84 reported cases of generative AI use in Australian courts since ChatGPT launched in late 2022. While cases involving lawyers have had the most media attention, we found more than three-quarters of those cases (66 of 84) involved people representing themselves, known as “self-represented litigants.”
Those people—who sometimes have valid legal claims—are increasingly turning to different generative AI tools for help with everything from property and will disputes to employment, bankruptcy, defamation, and migration cases.
Our ongoing research is part of an upcoming report for the Australian Academy of Law, to be launched later this year. But we’re sharing our findings now because this is a growing real-world problem.
Just this month, Queensland’s courts issued updated guidance for self-represented litigants, warning using “inaccurate AI-generated information in court” could cause delays, or worse: “a costs order may be made against you.”
As New South Wales Chief Justice Andrew Bell observed in a decision in August this year, the self-represented respondent was “admirably candid with the court in relation to her use of AI.” But while she was “doing her best to defend her interests,” her AI-generated submissions were often “misconceived, unhelpful and irrelevant.”
If you’re considering using AI in your own case, here’s what you need to know.
The temptation to rely on AI
Self-representation in Australian courts is more common than many people realize.
For example, 79% of litigants in migration matters at the Federal Circuit Court were unrepresented in 2023-2024.
The Queensland District Court has said “a significant number of civil proceedings involve self-represented parties.” The County Court of Victoria last year created easy-to-use forms for self-represented litigants.
But as the availability of free or low-cost generative AI tools increases, so does the temptation to use AI, as our recent research paper highlighted.
The risks if AI gets it wrong
Relying on AI tools that produce fake law can result in court documents being rejected, and valid claims being lost in court.
If you’re a self-represented litigant, the court system gives you the right to provide evidence and argument to support your case. But if that evidence or argument is not real, the court must reject it. That means you could lose your day in court.
In those circumstances, the court may make a costs order against a self-represented litigant—meaning you could end up having to pay your opponent’s legal costs.
Lawyers here and overseas have also been caught relying on inaccurate AI-generated law in court.
But a key difference is that if a lawyer uses fake cases that the court rejects, this is likely to amount to negligence. Their client might be able to sue the lawyer.
When someone representing themselves makes the error, they only have themselves to blame.
How can you reduce your risks?
The safest advice is to avoid AI for legal research.
There are many free, publicly available legal research websites for Australian law. The best known is the Australasian Legal Information Institute (AUSTLII). Another is Jade.
Court libraries and law schools are open to the public and have online resources about how to conduct legal research. Libraries will often have textbooks that set out principles of law.
Australian courts, such as the Supreme Court of Queensland, Supreme Court of NSW and Supreme Court of Victoria, have all issued guidance on when generative AI can and cannot be used.
Check if there’s a guide from the relevant court for your case. Follow their advice.
If you still plan to use generative AI, you must check everything against a reliable source. Search for each case you plan to cite, not just to make sure it exists, but also to confirm it says what the AI summary claims it does.
And as Queensland’s guide for self-litigants warns: “Do not enter any private, confidential, suppressed or legally privileged information into a Generative AI chatbot […] Anything you put into a Generative AI chatbot could become publicly known. This could result in you unintentionally breaching suppression orders, or accidentally disclosing your own or someone else’s private or confidential information.”
Conducting legal research and producing court documents is not easy. That’s what trained lawyers are for, which is why affordable, accessible legal services are necessary for a fair justice system.
AI is being used to address an access to justice problem that it is not well-suited to—at least, not yet.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation:
More people are using AI in court, not a lawyer. It could cost you money—and your case (2025, September 29)
retrieved 29 September 2025
from https://techxplore.com/news/2025-09-people-ai-court-lawyer-money.html