Connect with us

Tech

Are AI agents a blessing or a curse for cyber security? | Computer Weekly

Published

on

Are AI agents a blessing or a curse for cyber security? | Computer Weekly


Artificial intelligence (AI) and AI agents are seemingly everywhere. Be it with conference show floors or television adverts featuring celebrities, suppliers are keen to showcase the technology, which they tell us will help make our day-to-day lives much easier. But what exactly is an AI agent?

Fundamentally, AI agents – also known as agentic AI models – are generative AI (GenAI) and large language models (LLMs) used to automate tasks and workflows.

For example, need to book a room for a meeting at a particular office at a specific time for a certain number of people? Simply ask the agent to do so and it will act, plan and execute on your behalf, identifying a suitable room and time, then sending the calendar invite out to your colleagues on your behalf.

Or perhaps you’re booking a holiday. You can detail where you want to go, how you want to get there, add in any special requirements and ask the AI agent for suggestions that it will duly examine, parse and detail in seconds – saving you both time and effort.

“We’re going to be very dependent on AI agents in the very near future – everybody’s going to have an agent for different things,” says Etay Maor, chief security strategist at network security company Cato Networks. “It’s super convenient and we’re going to see this all over the place.

“The flip side of that is the attackers are going to be looking heavily into it, too,” he adds.

Unforeseen consequences

When new technology appears, even if it’s developed with the best of intentions, it’s almost inevitable that criminals will seek to exploit it.

We saw it with the rise of the internet and cyber fraud, we saw it with the shift to cloud-based hybrid working, and we’ve seen it with the rise of AI and LLMs, which cyber criminals quickly jumped on to write more convincing phishing emails. Now, cyber criminals are exploring how to weaponise AI agents and autonomous systems, too.

“They want to generate exploits,” says Yuval Zacharia, who until recently was R&D director at cyber security firm Hunters, and is now a co-founder at a startup in stealth mode. “That’s a complex mission involving code analysis and reverse engineering that you need to do to understand the codebase then exploit it. And that’s exactly the task that agentic AI is good at – you can divide a complex problem into different components, each with specific tools to execute it.”

Cyber security consultancy Reversec has published a wide range of research on how GenAI and AI agents can be exploited by malicious hackers, often by taking advantage of how new the technology is, meaning security measures may not fully be in place – especially if those developing AI tools want to ensure their product is released ahead of the competition.

For example, attackers can exploit prompt injection vulnerabilities to hijack browser agents with the aim of stealing data or other unauthorised actions. Or, alternatively, Reversec has demonstrated how an AI agent can be manipulated through prompt injection attacks to encourage outputs to include phishing links, social engineering and other ways of stealing information.

“Attackers can use jailbreaking or prompt injection attacks,” says Donato Capitella, principal security consultant at Reversec. “Now, you give an LLM agency – all of a sudden this is not just generic attacks, but it can act on your behalf: it can read and send emails, it can do video calls.

“An attacker sends you an email, and if an LLM is reading parts of that mailbox, all of a sudden, the email contains instructions that confuse the LLM, and now the LLM will steal information and send information to the attacker.”

Agentic AI is designed to help users, but as AI agents become more common and more sophisticated, that’s also going to open the door to attackers looking to exploit them to aid with their own goals – especially if legitimate tools aren’t secured correctly.

“If I’m a criminal and I know you’re using an AI agent which helps you with managing files on your network, for me, that’s a way into the network to deploy ransomware,” says Maor. “Maybe you’ll have an AI agent which can leave voice messages for you: Your voice? Now it’s identity fraud. Emails are business email compromise (BEC) attacks.

“The fact is a lot of these agents are going to have a lot of capabilities with the things they can do, and not too many guardrails, so criminals will be focusing on it,” he warns, adding that “there’s a continuous lowering of the bar of what it takes to do bad things”.

Fighting agentic AI with agentic AI

Ultimately, this means agentic AI-based attacks is something else chief information security officers (CISOs) and cyber security teams need to consider on top of every other challenge they currently face. Perhaps one answer to this is for defenders to take advantage of the automation provided by AI agents, too.

Zacharia believes so – she even built an agentic AI-powered threat-hunting tool in her spare time.

“It was about a side-project I did in my spare time at the weekends – I’m really geeky,” she says. “It was about exploring the world of AI agents because I thought it was cool.”

Cyber attacks are constantly evolving, and rapid response to emerging threats can be incredibly difficult, especially in an area where AI agents could be maliciously deployed to uncover new exploits en masse. That means identifying security threats, let alone assessing the impact and applying the mitigations can take a lot of time – especially if cyber security staff are doing it manually.

“What I was trying to do was automate this with AI agents,” says Zacharia. “The architecture built on top of multiple AI agents aim to identify emerging threats and prioritise according to business context, data enrichment and things that you care about, then they create hunting and viability queries that will help you turn those into actionable insights.”

That data enrichment comes from multiple sources. They include social media trends, CVEs, Patch Tuesday notifications, CISA alerts and other malware advisories.

The AI prioritises this information according to severity, with the AI agents acting upon that information to help perform tasks – for example, by downloading critical security updates – while also helping to relieve some of the burden on overworked cyber security staff.

“Cyber security teams have a lot on their hands, a lot of things to do,” says Zacharia. “They’re overwhelmed by the alerts they keep getting from all the security tools that they have. That means threat hunting in general, specifically for emergent threats, is always second priority.”

She points to incidents like Log4j, a critical zero-day vulnerability in widely used software that was almost immediately exploited by sophisticated threat actors upon disclosure.

“Think how much damage this could cause in your organisation if you’re not finding these on time,” says Zacharia. “And that’s exactly the point,” she adds, referring to how agentic AI can help to swiftly identify and remedy cyber security vulnerabilities and issues.

Streamlining the SOC with agentic AI

Zacharia’s far from alone in believing agentic AI could be of great benefit to cyber security teams.

“Think of a SOC [security operations centre] analyst sitting in front of an incident and he or she needs to start investigating it,” says Maor. “They start with looking at the technical data, to see if they’ve seen something like it in the past.”

What he’s describing is the important – but time-consuming – work SOC analysts do everyday. Maor believes adding agentic AI tools to the process can streamline their work, ultimately making them more effective at detecting cyber threats.

“An AI model can examine the incident and then detail similar incidents, immediately suggesting an investigation is needed,” he says. “There’s also the predictive model that tells the analyst what they don’t need to investigate. This cuts down the grunt work that needs to be done – sometimes hours, sometimes days of work – in order to reach something of value, which is nice.”

But while it can provide support, it’s important to note that agentic AI isn’t a silver bullet that is going to eliminate cyber security threats. Yes, it’s designed to make the task of monitoring threat intelligence or applying security updates easier and more efficient, but people remain key to information security, too. People are needed to work in SOCs, and information security staff are still required to help employees across the rest of the organisation remain alert and secure to cyber threats.

Especially as AI continues to evolve and improve, and attackers will continue to look to exploit it – and it’s up to the defenders to counter them.

“It’s a cat and mouse situation,” says Zacharia. “Both sides are adopting AI. But as an attacker, you only need one way to sneak in. As a defender, you have to protect the entire castle. Attackers will always have the advantage, that’s the game we’re playing. But I do think that both sides are getting better and better.”



Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Tech

Brendan Carr Isn’t Going to Stop Until Someone Makes Him

Published

on

Brendan Carr Isn’t Going to Stop Until Someone Makes Him


To Genevieve Lakier, a professor of law at the University of Chicago whose research focuses on free speech, Carr’s threats against ABC appear to be “a pretty clear cut case of jawboning.” Jawboning refers to a type of informal coercion where government officials try to pressure private entities into suppressing or changing speech without using any actual formal legal action. Since jawboning is typically done in letters and private meetings, it rarely leaves a paper trail, making it notoriously difficult to challenge in court.

This Kimmel suspension is a little different, Lakier says. During the podcast appearance, Carr explicitly named his target, threatened regulatory action, and within a matter of hours the companies complied.

“The Supreme Court has made clear that that’s unconstitutional in all circumstances,” says Lakier. “You’re just not allowed to do that. There’s no balancing. There’s no justification. Absolutely no, no way may the government do that.”

Even if Carr’s threats amount to unconstitutional jawboning, though, stopping him could still prove difficult. If ABC sued, it would need to prove coercion—and however a suit went, filing one could risk additional regulatory retaliation down the line. If Kimmel were to sue, there’s no promise that he would get anything out of the suit even if he won, says Lakier, making it less likely for him to pursue legal action in the first place.

“There’s not much there for him except to establish that his rights were violated. But there is a lot of benefit for everyone else,” says Lakier. “This has received so much attention that it would be good if there could be, from now on, some mechanism for more oversight from the courts over what Carr is doing.”

Organizations like the FPF have sought novel means of limiting Carr’s power. In July, the FPF submitted a formal disciplinary complaint to the DC Bar’s Office of Disciplinary Counsel arguing that Carr violated its ethical rules, misrepresenting the law by suggesting the FCC has the ability to regulate editorial viewpoints. Without formal rulings, companies affected by Carr’s threats would be some of the only organizations with grounding to sue. At the same time, they have proven to be some of the least likely groups to pursue legal action over the last eight months.

In a statement on Thursday, House Democratic leadership wrote that Carr had “disgraced the office he holds by bullying ABC” and called on him to resign. They said they plan to “make sure the American people learn the truth, even if that requires the relentless unleashing of congressional subpoena power,” but did not outline any tangible ways to rein in Carr’s power.

“People need to get creative,” says Stern. “The old playbook is not built for this moment and the law only exists on paper when you’ve got someone like Brendan Carr in charge of enforcing it.”

This vacuum has left Carr free to push as far as he likes and it has spooked experts over how far this precedent will travel. Established in the 1930s, the FCC was designed to operate as a neutral referee, but years of media consolidation have dramatically limited the number of companies controlling programming over broadcast, cable, and now, streaming networks. Spectrum is a limited resource the FCC controls, giving the agency more direct control over the broadcast companies that rely on it than it has over cable or streaming services. This concentration makes them infinitely easier to pressure, benefitting the Trump administration, Carr, but also whoever might come next.

“If political tides turn, I don’t have confidence that the Democrats won’t also use them in an unconstitutional and improper matter,” says Stern. “[The Trump administration is] really setting up this world where every election cycle, assuming we still have elections in this country, the content of broadcast news might drastically shift depending on which political party controls the censorship office.”



Source link

Continue Reading

Tech

OpenAI launches teen-safe ChatGPT with parental controls

Published

on

OpenAI launches teen-safe ChatGPT with parental controls


by I. Edwards


Teenagers chatting with ChatGPT will soon see a very different version of the tool—one built with stricter ways to keep them safe online, OpenAI announced.

The new safeguards come as regulators increase scrutiny of chatbots and their impact on young people’s mental health.

Under the change, anyone identified as under 18 will automatically be directed to a different version of ChatGPT designed with “age-appropriate” content rules, the company said in a statement.

The teen version blocks sexual content and can involve in rare cases where a user is in acute distress.

“The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult,” the company explained.

OpenAI also plans to roll out parental controls by the end of September. Parents will be able to link accounts, view chat history and even set blackout hours to limit use.

The announcement follows the Federal Trade Commission’s (FTC) investigation into the potential risks of AI chatbots for children and teens.

In April, 16-year-old Adam Raine of California died by suicide; his family has sued OpenAI, claiming ChatGPT played a role in his death, CBS News reported.

While OpenAI says it is prioritizing safety, questions still remain about how the system will verify a user’s age. If the platform cannot confirm a user’s age, it will default to the teen version, the company said.

Other tech giants have announced similar steps. YouTube, for example, has introduced new age-estimation technology that factors in account history and viewing habits, CBS News said.

Parents remain concerned.

A Pew Research Center report released earlier this year found 44% of parents who worry about teen mental health believe has the biggest negative impact.

More information:
HealthyChildren.org has more on how AI chatbots can affect kids.

© 2025 HealthDay. All rights reserved.

Citation:
OpenAI launches teen-safe ChatGPT with parental controls (2025, September 18)
retrieved 18 September 2025
from https://techxplore.com/news/2025-09-openai-teen-safe-chatgpt-parental.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.





Source link

Continue Reading

Tech

Google Injects Gemini Into Chrome as AI Browsers Go Mainstream

Published

on

Google Injects Gemini Into Chrome as AI Browsers Go Mainstream


Google is adding multiple new AI features to Chrome, the most popular browser in the world. The most visible change is a new button in Chrome that launches the Gemini chatbot, but there are also new tools for searching, researching, and answering questions with AI. Google has additional cursor-controlling “agentic” tools in the pipeline for Chrome as well.

The Gemini in Chrome mode for the web browser uses generative AI to answer questions about content on a page and synthesize information across multiple open tabs. Gemini in Chrome first rolled out to Google’s paying subscribers in May. The AI-focused features are now available to all desktop users in the US browsing in English; they’ll show up in a browser update.

On mobile devices, Android users can already use aspects of Gemini within the Chrome app, and Google is expected to launch an update for iOS users of Chrome in the near future.

When I wrote about web browsers starting to add more generative AI tools back in 2023, it was primarily something that served as an alternative to the norm. The software was built by misfits and change-makers who were experimenting with new tools, or hunting for a break-out feature to grow their small user bases. All of this activity was dwarfed by the commanding number of users who preferred Chrome.

Two years later, while Google’s browser remains the market leader, the internet overall is completely seeped in AI tools, many of them also made by Google. Still, today marks the moment when the concept of an “AI browser” truly went mainstream with the weaving of Gemini so closely into the Chrome browser.

The Gemini strategy at Google has already been to leverage as many of its in-house integrations as possible, from Gmail to Google Docs. So, the decision to AI-ify the Chrome browser for a wider set of users does not come as a shock.

Even so, the larger roll out will likely be met with ire by some users who are either exhausted by the onslaught of AI-focused features in 2025 or want to abstain from using generative AI, whether for environmental reasons or because they don’t want their activity to be used to train an algorithm. Users who don’t want to see the Gemini option will be able to click on the Gemini sparkle icon and unpin it from the top right corner of the Chrome browser.

The new button at the top of the browser will launch Gemini. Users in the US will see these changes first.

Video: Google



Source link

Continue Reading

Trending