Tech
Are AI agents a blessing or a curse for cyber security? | Computer Weekly
Artificial intelligence (AI) and AI agents are seemingly everywhere. Be it with conference show floors or television adverts featuring celebrities, suppliers are keen to showcase the technology, which they tell us will help make our day-to-day lives much easier. But what exactly is an AI agent?
Fundamentally, AI agents – also known as agentic AI models – are generative AI (GenAI) and large language models (LLMs) used to automate tasks and workflows.
For example, need to book a room for a meeting at a particular office at a specific time for a certain number of people? Simply ask the agent to do so and it will act, plan and execute on your behalf, identifying a suitable room and time, then sending the calendar invite out to your colleagues on your behalf.
Or perhaps you’re booking a holiday. You can detail where you want to go, how you want to get there, add in any special requirements and ask the AI agent for suggestions that it will duly examine, parse and detail in seconds – saving you both time and effort.
“We’re going to be very dependent on AI agents in the very near future – everybody’s going to have an agent for different things,” says Etay Maor, chief security strategist at network security company Cato Networks. “It’s super convenient and we’re going to see this all over the place.
“The flip side of that is the attackers are going to be looking heavily into it, too,” he adds.
Unforeseen consequences
When new technology appears, even if it’s developed with the best of intentions, it’s almost inevitable that criminals will seek to exploit it.
We saw it with the rise of the internet and cyber fraud, we saw it with the shift to cloud-based hybrid working, and we’ve seen it with the rise of AI and LLMs, which cyber criminals quickly jumped on to write more convincing phishing emails. Now, cyber criminals are exploring how to weaponise AI agents and autonomous systems, too.
“They want to generate exploits,” says Yuval Zacharia, who until recently was R&D director at cyber security firm Hunters, and is now a co-founder at a startup in stealth mode. “That’s a complex mission involving code analysis and reverse engineering that you need to do to understand the codebase then exploit it. And that’s exactly the task that agentic AI is good at – you can divide a complex problem into different components, each with specific tools to execute it.”
Cyber security consultancy Reversec has published a wide range of research on how GenAI and AI agents can be exploited by malicious hackers, often by taking advantage of how new the technology is, meaning security measures may not fully be in place – especially if those developing AI tools want to ensure their product is released ahead of the competition.
For example, attackers can exploit prompt injection vulnerabilities to hijack browser agents with the aim of stealing data or other unauthorised actions. Or, alternatively, Reversec has demonstrated how an AI agent can be manipulated through prompt injection attacks to encourage outputs to include phishing links, social engineering and other ways of stealing information.
“Attackers can use jailbreaking or prompt injection attacks,” says Donato Capitella, principal security consultant at Reversec. “Now, you give an LLM agency – all of a sudden this is not just generic attacks, but it can act on your behalf: it can read and send emails, it can do video calls.
“An attacker sends you an email, and if an LLM is reading parts of that mailbox, all of a sudden, the email contains instructions that confuse the LLM, and now the LLM will steal information and send information to the attacker.”
Agentic AI is designed to help users, but as AI agents become more common and more sophisticated, that’s also going to open the door to attackers looking to exploit them to aid with their own goals – especially if legitimate tools aren’t secured correctly.
“If I’m a criminal and I know you’re using an AI agent which helps you with managing files on your network, for me, that’s a way into the network to deploy ransomware,” says Maor. “Maybe you’ll have an AI agent which can leave voice messages for you: Your voice? Now it’s identity fraud. Emails are business email compromise (BEC) attacks.
“The fact is a lot of these agents are going to have a lot of capabilities with the things they can do, and not too many guardrails, so criminals will be focusing on it,” he warns, adding that “there’s a continuous lowering of the bar of what it takes to do bad things”.
Fighting agentic AI with agentic AI
Ultimately, this means agentic AI-based attacks is something else chief information security officers (CISOs) and cyber security teams need to consider on top of every other challenge they currently face. Perhaps one answer to this is for defenders to take advantage of the automation provided by AI agents, too.
Zacharia believes so – she even built an agentic AI-powered threat-hunting tool in her spare time.
“It was about a side-project I did in my spare time at the weekends – I’m really geeky,” she says. “It was about exploring the world of AI agents because I thought it was cool.”
Cyber attacks are constantly evolving, and rapid response to emerging threats can be incredibly difficult, especially in an area where AI agents could be maliciously deployed to uncover new exploits en masse. That means identifying security threats, let alone assessing the impact and applying the mitigations can take a lot of time – especially if cyber security staff are doing it manually.
“What I was trying to do was automate this with AI agents,” says Zacharia. “The architecture built on top of multiple AI agents aim to identify emerging threats and prioritise according to business context, data enrichment and things that you care about, then they create hunting and viability queries that will help you turn those into actionable insights.”
That data enrichment comes from multiple sources. They include social media trends, CVEs, Patch Tuesday notifications, CISA alerts and other malware advisories.
The AI prioritises this information according to severity, with the AI agents acting upon that information to help perform tasks – for example, by downloading critical security updates – while also helping to relieve some of the burden on overworked cyber security staff.
“Cyber security teams have a lot on their hands, a lot of things to do,” says Zacharia. “They’re overwhelmed by the alerts they keep getting from all the security tools that they have. That means threat hunting in general, specifically for emergent threats, is always second priority.”
She points to incidents like Log4j, a critical zero-day vulnerability in widely used software that was almost immediately exploited by sophisticated threat actors upon disclosure.
“Think how much damage this could cause in your organisation if you’re not finding these on time,” says Zacharia. “And that’s exactly the point,” she adds, referring to how agentic AI can help to swiftly identify and remedy cyber security vulnerabilities and issues.
Streamlining the SOC with agentic AI
Zacharia’s far from alone in believing agentic AI could be of great benefit to cyber security teams.
“Think of a SOC [security operations centre] analyst sitting in front of an incident and he or she needs to start investigating it,” says Maor. “They start with looking at the technical data, to see if they’ve seen something like it in the past.”
What he’s describing is the important – but time-consuming – work SOC analysts do everyday. Maor believes adding agentic AI tools to the process can streamline their work, ultimately making them more effective at detecting cyber threats.
“An AI model can examine the incident and then detail similar incidents, immediately suggesting an investigation is needed,” he says. “There’s also the predictive model that tells the analyst what they don’t need to investigate. This cuts down the grunt work that needs to be done – sometimes hours, sometimes days of work – in order to reach something of value, which is nice.”
But while it can provide support, it’s important to note that agentic AI isn’t a silver bullet that is going to eliminate cyber security threats. Yes, it’s designed to make the task of monitoring threat intelligence or applying security updates easier and more efficient, but people remain key to information security, too. People are needed to work in SOCs, and information security staff are still required to help employees across the rest of the organisation remain alert and secure to cyber threats.
Especially as AI continues to evolve and improve, and attackers will continue to look to exploit it – and it’s up to the defenders to counter them.
“It’s a cat and mouse situation,” says Zacharia. “Both sides are adopting AI. But as an attacker, you only need one way to sneak in. As a defender, you have to protect the entire castle. Attackers will always have the advantage, that’s the game we’re playing. But I do think that both sides are getting better and better.”
Tech
The Justice Department Released More Epstein Files—but Not the Ones Survivors Want
Over the weekend, the Justice Department released three new data sets comprising files related to Jeffrey Epstein. The DOJ had previously released nearly 4,000 documents prior to the Friday midnight deadline required by the Epstein Files Transparency Act.
As with Friday’s release, the new tranche appears to contain hundreds of photographs, along with various court records pertaining to Epstein and his associates. The first of the additional datasets, Data Set 5, is photos of hard drives and physical folders, as well as chain-of-custody forms. Data Set 6 appears to mostly be grand jury materials from cases out of the Southern District of New York against Epstein and his coconspirator, Ghislaine Maxwell. Data Set 7 includes more grand jury materials from those cases, as well as materials from a separate 2007 Florida grand jury.
Data Set 7 also includes an out-of-order transcript between R. Alexander Acosta and the DOJ’s Office of Professional Responsibility from 2019. According to the transcript, the OPR was investigating whether attorneys in the Southern District of Florida US Attorney’s Office committed professional misconduct by entering into a non-prosecution agreement with Epstein, who was being investigated by state law enforcement on sexual battery charges. Acosta was the head of the office when the agreement was signed.
Leading up to the deadline to release materials, the DOJ made three separate requests to unseal grand jury materials. Those requests were granted earlier this month.
The initial release of the Epstein files was met with protest, particularly by Epstein victims and Democratic lawmakers. “The public received a fraction of the files, and what we received was riddled with abnormal and extreme redactions with no explanation,” wrote a group of 19 women who had survived abuse from Epstein and Maxwell in a statement posted on social media. Senator Chuck Schumer said Monday that he would force a vote that would allow the Senate to sue the Trump administration for a full release of the Epstein files.
Along with the release of the new batch of files over the weekend, the Justice Department also removed at least 16 files from its initial offering, including a photograph that depicted Donald Trump. The DOJ later restored that photograph, saying in a statement on X that it had initially been flagged “for potential further action to protect victims.” The post went on to say that “after the review, it was determined there is no evidence that any Epstein victims are depicted in the photograph, and it has been reposted without any alteration or redaction.”
The Justice Department acknowledged in a fact sheet on Sunday that it has “hundreds of thousands of pages of material to release,” claiming that it has more than 200 lawyers reviewing files prior to release.
Tech
OpenAI’s Child Exploitation Reports Increased Sharply This Year
OpenAI sent 80 times as many child exploitation incident reports to the National Center for Missing & Exploited Children during the first half of 2025 as it did during a similar time period in 2024, according to a recent update from the company. The NCMEC’s CyberTipline is a Congressionally authorized clearinghouse for reporting child sexual abuse material (CSAM) and other forms of child exploitation.
Companies are required by law to report apparent child exploitation to the CyberTipline. When a company sends a report, NCMEC reviews it and then forwards it to the appropriate law enforcement agency for investigation.
Statistics related to NCMEC reports can be nuanced. Increased reports can sometimes indicate changes in a platform’s automated moderation, or the criteria it uses to decide whether a report is necessary, rather than necessarily indicating an increase in nefarious activity.
Additionally, the same piece of content can be the subject of multiple reports, and a single report can be about multiple pieces of content. Some platforms, including OpenAI, disclose the number of both the reports and the total pieces of content they were about for a more complete picture.
OpenAI spokesperson Gaby Raila said in a statement that the company made investments toward the end of 2024 “to increase [its] capacity to review and action reports in order to keep pace with current and future user growth.” Raila also said that the time frame corresponds to “the introduction of more product surfaces that allowed image uploads and the growing popularity of our products, which contributed to the increase in reports.” In August, Nick Turley, vice president and head of ChatGPT, announced that the app had four times the amount of weekly active users than it did the year before.
During the first half of 2025, the number of CyberTipline reports OpenAI sent was roughly the same as the amount of content OpenAI sent the reports about—75,027 compared to 74,559. In the first half of 2024, it sent 947 CyberTipline reports about 3,252 pieces of content. Both the number of reports and pieces of content the reports saw a marked increase between the two time periods.
Content, in this context, could mean multiple things. OpenAI has said that it reports all instances of CSAM, including uploads and requests, to NCMEC. Besides its ChatGPT app, which allows users to upload files—including images—and can generate text and images in response, OpenAI also offers access to its models via API access. The most recent NCMEC count wouldn’t include any reports related to video-generation app Sora, as its September release was after the time frame covered by the update.
The spike in reports follows a similar pattern to what NCMEC has observed at the CyberTipline more broadly with the rise of generative AI. The center’s analysis of all CyberTipline data found that reports involving generative AI saw a 1,325 percent increase between 2023 and 2024. NCMEC has not yet released 2025 data, and while other large AI labs like Google publish statistics about the NCMEC reports they’ve made, they don’t specify what percentage of those reports are AI-related.
Tech
The Doomsday Glacier Is Getting Closer and Closer to Irreversible Collapse
Known as the “Doomsday Glacier,” the Thwaites Glacier in Antarctica is one of the most rapidly changing glaciers on Earth, and its future evolution is one of the biggest unknowns when it comes to predicting global sea level rise.
The eastern ice shelf of the Thwaites Glacier is supported at its northern end by a ridge of the ocean floor. However, over the past two decades, cracks in the upper reaches of the glacier have increased rapidly, weakening its structural stability. A new study by the International Thwaites Glacier Collaboration (ITGC) presents a detailed record of this gradual collapse process.
Researchers at the Centre for Earth Observation and Science at the University of Manitoba, Canada, analyzed observational data from 2002 to 2022 to track the formation and propagation of cracks in the ice shelf shear zone. They discovered that as the cracks grew, the connection between the ice shelf and the mid-ocean ridge weakened, accelerating the upstream flow of ice.
The Crack in the Ice Shelf Widens in Two Stages
The study reveals that the weakening of the ice shelf occurred in four distinct phases, with crack growth occurring in two stages. In the first phase, long cracks appeared along the ice flow, gradually extending eastward. Some exceeded 8 km in length and spanned the entire shelf. In the second phase, numerous short cross-flow cracks, less than 2 km long, emerged, doubling the total length of the fissures.
Analysis of satellite images showed that the total length of the cracks increased from about 165 km in 2002 to approximately 336 km in 2021. Meanwhile, the average length of each crack decreased from 3.2 km to 1.5 km, with a notable increase in small cracks. These changes reflect a significant shift in the stress state of the ice shelf, that is, in the interaction of forces within its structure.
Between 2002 and 2006, the ice shelf accelerated as it was pulled by nearby fast-moving currents, generating compressive stress on the anchorage point, which initially stabilized the shelf. After 2007, the shear zone between the shelf and the Western ice tongue collapsed. The stress concentrated around the anchorage point, leading to the formation of large cracks.
Since 2017, these cracks have completely penetrated the ice shelf, severing the connection to the anchorage. According to researchers, this has accelerated the upstream flow of ice and turned the anchorage into a destabilizing factor.
Feedback Loop Collapse
One of the most significant findings of the study is the existence of a feedback loop: Cracks accelerate the flow of ice, and in turn, this increased speed generates new cracks. This process was clearly recorded by the GPS devices that the team deployed on the ice shelf between 2020 and 2022.
During the winter of 2020, the upward propagation of structural changes in the shear zone was particularly evident. These changes advanced at a rate of approximately 55 kilometers per year within the ice shelf, demonstrating that structural collapse in the shear zone directly impacts upstream ice flow.
-
Business1 week agoStudying Abroad Is Costly, But Not Impossible: Experts On Smarter Financial Planning
-
Business1 week agoKSE-100 index gains 876 points amid cut in policy rate | The Express Tribune
-
Fashion5 days agoIndonesia’s thrift surge fuels waste and textile industry woes
-
Business5 days agoBP names new boss as current CEO leaves after less than two years
-
Sports1 week agoJets defensive lineman rips NFL officials after ejection vs Jaguars
-
Tech1 week agoFor the First Time, AI Analyzes Language as Well as a Human Expert
-
Tech5 days agoT-Mobile Business Internet and Phone Deals
-
Entertainment1 week agoPrince Harry, Meghan Markle’s 2025 Christmas card: A shift in strategy
