Tech
Are AI agents a blessing or a curse for cyber security? | Computer Weekly
Artificial intelligence (AI) and AI agents are seemingly everywhere. Be it with conference show floors or television adverts featuring celebrities, suppliers are keen to showcase the technology, which they tell us will help make our day-to-day lives much easier. But what exactly is an AI agent?
Fundamentally, AI agents – also known as agentic AI models – are generative AI (GenAI) and large language models (LLMs) used to automate tasks and workflows.
For example, need to book a room for a meeting at a particular office at a specific time for a certain number of people? Simply ask the agent to do so and it will act, plan and execute on your behalf, identifying a suitable room and time, then sending the calendar invite out to your colleagues on your behalf.
Or perhaps you’re booking a holiday. You can detail where you want to go, how you want to get there, add in any special requirements and ask the AI agent for suggestions that it will duly examine, parse and detail in seconds – saving you both time and effort.
“We’re going to be very dependent on AI agents in the very near future – everybody’s going to have an agent for different things,” says Etay Maor, chief security strategist at network security company Cato Networks. “It’s super convenient and we’re going to see this all over the place.
“The flip side of that is the attackers are going to be looking heavily into it, too,” he adds.
Unforeseen consequences
When new technology appears, even if it’s developed with the best of intentions, it’s almost inevitable that criminals will seek to exploit it.
We saw it with the rise of the internet and cyber fraud, we saw it with the shift to cloud-based hybrid working, and we’ve seen it with the rise of AI and LLMs, which cyber criminals quickly jumped on to write more convincing phishing emails. Now, cyber criminals are exploring how to weaponise AI agents and autonomous systems, too.
“They want to generate exploits,” says Yuval Zacharia, who until recently was R&D director at cyber security firm Hunters, and is now a co-founder at a startup in stealth mode. “That’s a complex mission involving code analysis and reverse engineering that you need to do to understand the codebase then exploit it. And that’s exactly the task that agentic AI is good at – you can divide a complex problem into different components, each with specific tools to execute it.”
Cyber security consultancy Reversec has published a wide range of research on how GenAI and AI agents can be exploited by malicious hackers, often by taking advantage of how new the technology is, meaning security measures may not fully be in place – especially if those developing AI tools want to ensure their product is released ahead of the competition.
For example, attackers can exploit prompt injection vulnerabilities to hijack browser agents with the aim of stealing data or other unauthorised actions. Or, alternatively, Reversec has demonstrated how an AI agent can be manipulated through prompt injection attacks to encourage outputs to include phishing links, social engineering and other ways of stealing information.
“Attackers can use jailbreaking or prompt injection attacks,” says Donato Capitella, principal security consultant at Reversec. “Now, you give an LLM agency – all of a sudden this is not just generic attacks, but it can act on your behalf: it can read and send emails, it can do video calls.
“An attacker sends you an email, and if an LLM is reading parts of that mailbox, all of a sudden, the email contains instructions that confuse the LLM, and now the LLM will steal information and send information to the attacker.”
Agentic AI is designed to help users, but as AI agents become more common and more sophisticated, that’s also going to open the door to attackers looking to exploit them to aid with their own goals – especially if legitimate tools aren’t secured correctly.
“If I’m a criminal and I know you’re using an AI agent which helps you with managing files on your network, for me, that’s a way into the network to deploy ransomware,” says Maor. “Maybe you’ll have an AI agent which can leave voice messages for you: Your voice? Now it’s identity fraud. Emails are business email compromise (BEC) attacks.
“The fact is a lot of these agents are going to have a lot of capabilities with the things they can do, and not too many guardrails, so criminals will be focusing on it,” he warns, adding that “there’s a continuous lowering of the bar of what it takes to do bad things”.
Fighting agentic AI with agentic AI
Ultimately, this means agentic AI-based attacks is something else chief information security officers (CISOs) and cyber security teams need to consider on top of every other challenge they currently face. Perhaps one answer to this is for defenders to take advantage of the automation provided by AI agents, too.
Zacharia believes so – she even built an agentic AI-powered threat-hunting tool in her spare time.
“It was about a side-project I did in my spare time at the weekends – I’m really geeky,” she says. “It was about exploring the world of AI agents because I thought it was cool.”
Cyber attacks are constantly evolving, and rapid response to emerging threats can be incredibly difficult, especially in an area where AI agents could be maliciously deployed to uncover new exploits en masse. That means identifying security threats, let alone assessing the impact and applying the mitigations can take a lot of time – especially if cyber security staff are doing it manually.
“What I was trying to do was automate this with AI agents,” says Zacharia. “The architecture built on top of multiple AI agents aim to identify emerging threats and prioritise according to business context, data enrichment and things that you care about, then they create hunting and viability queries that will help you turn those into actionable insights.”
That data enrichment comes from multiple sources. They include social media trends, CVEs, Patch Tuesday notifications, CISA alerts and other malware advisories.
The AI prioritises this information according to severity, with the AI agents acting upon that information to help perform tasks – for example, by downloading critical security updates – while also helping to relieve some of the burden on overworked cyber security staff.
“Cyber security teams have a lot on their hands, a lot of things to do,” says Zacharia. “They’re overwhelmed by the alerts they keep getting from all the security tools that they have. That means threat hunting in general, specifically for emergent threats, is always second priority.”
She points to incidents like Log4j, a critical zero-day vulnerability in widely used software that was almost immediately exploited by sophisticated threat actors upon disclosure.
“Think how much damage this could cause in your organisation if you’re not finding these on time,” says Zacharia. “And that’s exactly the point,” she adds, referring to how agentic AI can help to swiftly identify and remedy cyber security vulnerabilities and issues.
Streamlining the SOC with agentic AI
Zacharia’s far from alone in believing agentic AI could be of great benefit to cyber security teams.
“Think of a SOC [security operations centre] analyst sitting in front of an incident and he or she needs to start investigating it,” says Maor. “They start with looking at the technical data, to see if they’ve seen something like it in the past.”
What he’s describing is the important – but time-consuming – work SOC analysts do everyday. Maor believes adding agentic AI tools to the process can streamline their work, ultimately making them more effective at detecting cyber threats.
“An AI model can examine the incident and then detail similar incidents, immediately suggesting an investigation is needed,” he says. “There’s also the predictive model that tells the analyst what they don’t need to investigate. This cuts down the grunt work that needs to be done – sometimes hours, sometimes days of work – in order to reach something of value, which is nice.”
But while it can provide support, it’s important to note that agentic AI isn’t a silver bullet that is going to eliminate cyber security threats. Yes, it’s designed to make the task of monitoring threat intelligence or applying security updates easier and more efficient, but people remain key to information security, too. People are needed to work in SOCs, and information security staff are still required to help employees across the rest of the organisation remain alert and secure to cyber threats.
Especially as AI continues to evolve and improve, and attackers will continue to look to exploit it – and it’s up to the defenders to counter them.
“It’s a cat and mouse situation,” says Zacharia. “Both sides are adopting AI. But as an attacker, you only need one way to sneak in. As a defender, you have to protect the entire castle. Attackers will always have the advantage, that’s the game we’re playing. But I do think that both sides are getting better and better.”
Tech
Two Titanic Structures Hidden Deep Within the Earth Have Altered the Magnetic Field for Millions of Years
A team of geologists has found for the first time evidence that two ancient, continent-sized, ultrahot structures hidden beneath the Earth have shaped the planet’s magnetic field for the past 265 million years.
These two masses, known as large low-shear-velocity provinces (LLSVPs), are part of the catalog of the planet’s most enormous and enigmatic objects. Current estimates calculate that each one is comparable in size to the African continent, although they remain buried at a depth of 2,900 kilometers.
Low-lying surface vertical velocity (LLVV) regions form irregular areas of the Earth’s mantle, not defined blocks of rock or metal as one might imagine. Within them, the mantle material is hotter, denser, and chemically different from the surrounding material. They are also notable because a “ring” of cooler material surrounds them, where seismic waves travel faster.
Geologists had suspected these anomalies existed since the late 1970s and were able to confirm them two decades later. After another 10 years of research, they now point to them directly as structures capable of modifying Earth’s magnetic field.
LLSVPs Alter the Behavior of the Nucleus
According to a study published this week in Nature Geoscience and led by researchers at the University of Liverpool, temperature differences between LLSVPs and the surrounding mantle material alter the way liquid iron flows in the core. This movement of iron is responsible for generating Earth’s magnetic field.
Taken together, the cold and ultrahot zones of the mantle accelerate or slow the flow of liquid iron depending on the region, creating an asymmetry. This inequality contributes to the magnetic field taking on the irregular shape we observe today.
The team analyzed the available mantle evidence and ran simulations on supercomputers. They compared how the magnetic field should look if the mantle were uniform versus how it behaves when it includes these heterogeneous regions with structures. They then contrasted both scenarios with real magnetic field data. Only the model that incorporated the LLSVPs reproduced the same irregularities, tilts, and patterns that are currently observed.
The geodynamo simulations also revealed that some parts of the magnetic field have remained relatively stable for hundreds of millions of years, while others have changed remarkably.
“These findings also have important implications for questions surrounding ancient continental configurations—such as the formation and breakup of Pangaea—and may help resolve long-standing uncertainties in ancient climate, paleobiology, and the formation of natural resources,” said Andy Biggin, first author of the study and professor of Geomagnetism at the University of Liverpool, in a press release.
“These areas have assumed that Earth’s magnetic field, when averaged over long periods, behaved as a perfect bar magnet aligned with the planet’s rotational axis. Our findings are that this may not quite be true,” he added.
This story originally appeared in WIRED en Español and has been translated from Spanish.
Tech
Loyalty Is Dead in Silicon Valley
Since the middle of last year, there have been at least three major AI “acqui-hires” in Silicon Valley. Meta invested more than $14 billion in Scale AI and brought on its CEO, Alexandr Wang; Google spent a cool $2.4 billion to license Windsurf’s technology and fold its cofounders and research teams into DeepMind; and Nvidia wagered $20 billion on Groq’s inference technology and hired its CEO and other staffers.
The frontier AI labs, meanwhile, have been playing a high stakes and seemingly never-ending game of talent musical chairs. The latest reshuffle began three weeks ago, when OpenAI announced it was rehiring several researchers who had departed less than two years earlier to join Mira Murati’s startup, Thinking Machines. At the same time, Anthropic, which was itself founded by former OpenAI staffers, has been poaching talent from the ChatGPT maker. OpenAI, in turn, just hired a former Anthropic safety researcher to be its “head of preparedness.”
The hiring churn happening in Silicon Valley represents the “great unbundling” of the tech startup, as Dave Munichiello, an investor at GV, put it. In earlier eras, tech founders and their first employees often stayed onboard until either the lights went out or there was a major liquidity event. But in today’s market, where generative AI startups are growing rapidly, equipped with plenty of capital, and prized especially for the strength of their research talent, “you invest in a startup knowing it could be broken up,” Munichiello told me.
Early founders and researchers at the buzziest AI startups are bouncing around to different companies for a range of reasons. A big incentive for many, of course, is money. Last year Meta was reportedly offering top AI researchers compensation packages in the tens or hundreds of millions of dollars, offering them not just access to cutting-edge computing resources but also … generational wealth.
But it’s not all about getting rich. Broader cultural shifts that rocked the tech industry in recent years have made some workers worried about committing to one company or institution for too long, says Sayash Kapoor, a computer science researcher at Princeton University and a senior fellow at Mozilla. Employers used to safely assume that workers would stay at least until the four-year mark when their stock options were typically scheduled to vest. In the high-minded era of the 2000s and 2010s, plenty of early cofounders and employees also sincerely believed in the stated missions of their companies and wanted to be there to help achieve them.
Now, Kapoor says, “people understand the limitations of the institutions they’re working in, and founders are more pragmatic.” The founders of Windsurf, for example, may have calculated their impact could be larger at a place like Google that has lots of resources, Kapoor says. He adds that a similar shift is happening within academia. Over the past five years, Kapoor says, he’s seen more PhD researchers leave their computer-science doctoral programs to take jobs in industry. There are higher opportunity costs associated with staying in one place at a time when AI innovation is rapidly accelerating, he says.
Investors, wary of becoming collateral damage in the AI talent wars, are taking steps to protect themselves. Max Gazor, the founder of Striker Venture Partners, says his team is vetting founding teams “for chemistry and cohesion more than ever.” Gazor says it’s also increasingly common for deals to include “protective provisions that require board consent for material IP licensing or similar scenarios.”
Gazor notes that some of the biggest acqui-hire deals that have happened recently involved startups founded long before the current generative AI boom. Scale AI, for example, was founded in 2016, a time when the kind of deal Wang negotiated with Meta would have been unfathomable to many. Now, however, these potential outcomes might be considered in early term sheets and “constructively managed,” Gazor explains.
Tech
ICE and CBP’s Face-Recognition App Can’t Actually Verify Who People Are
The face-recognition app Mobile Fortify, now used by United States immigration agents in towns and cities across the US, is not designed to reliably identify people in the streets and was deployed without the scrutiny that has historically governed the rollout of technologies that impact people’s privacy, according to records reviewed by WIRED.
The Department of Homeland Security launched Mobile Fortify in the spring of 2025 to “determine or verify” the identities of individuals stopped or detained by DHS officers during federal operations, records show. DHS explicitly linked the rollout to an executive order, signed by President Donald Trump on his first day in office, which called for a “total and efficient” crackdown on undocumented immigrants through the use of expedited removals, expanded detention, and funding pressure on states, among other tactics.
Despite DHS repeatedly framing Mobile Fortify as a tool for identifying people through facial recognition, however, the app does not actually “verify” the identities of people stopped by federal immigration agents—a well-known limitation of the technology and a function of how Mobile Fortify is designed and used.
“Every manufacturer of this technology, every police department with a policy makes very clear that face recognition technology is not capable of providing a positive identification, that it makes mistakes, and that it’s only for generating leads,” says Nathan Wessler, deputy director of the American Civil Liberties Union’s Speech, Privacy, and Technology Project.
Records reviewed by WIRED also show that DHS’s hasty approval of Fortify last May was enabled by dismantling centralized privacy reviews and quietly removing department-wide limits on facial recognition—changes overseen by a former Heritage Foundation lawyer and Project 2025 contributor, who now serves in a senior DHS privacy role.
DHS—which has declined to detail the methods and tools that agents are using, despite repeated calls from oversight officials and nonprofit privacy watchdogs—has used Mobile Fortify to scan the faces not only of “targeted individuals,” but also people later confirmed to be US citizens and others who were observing or protesting enforcement activity.
Reporting has documented federal agents telling citizens they were being recorded with facial recognition and that their faces would be added to a database without consent. Other accounts describe agents treating accent, perceived ethnicity, or skin color as a basis to escalate encounters—then using face scanning as the next step once a stop is underway. Together, the cases illustrate a broader shift in DHS enforcement toward low-level street encounters followed by biometric capture like face scans, with limited transparency around the tool’s operation and use.
Fortify’s technology mobilizes facial capture hundreds of miles from the US border, allowing DHS to generate nonconsensual face prints of people who, “it is conceivable,” DHS’s Privacy Office says, are “US citizens or lawful permanent residents.” As with the circumstances surrounding its deployment to agents with Customs and Border Protection and Immigration and Customs Enforcement, Fortify’s functionality is visible mainly today through court filings and sworn agent testimony.
In a federal lawsuit this month, attorneys for the State of Illinois and the City of Chicago said the app had been used “in the field over 100,000 times” since launch.
In Oregon testimony last year, an agent said two photos of a woman in custody taken with his face-recognition app produced different identities. The woman was handcuffed and looking downward, the agent said, prompting him to physically reposition her to obtain the first image. The movement, he testified, caused her to yelp in pain. The app returned a name and photo of a woman named Maria; a match that the agent rated “a maybe.”
Agents called out the name, “Maria, Maria,” to gauge her reaction. When she failed to respond, they took another photo. The agent testified the second result was “possible,” but added, “I don’t know.” Asked what supported probable cause, the agent cited the woman speaking Spanish, her presence with others who appeared to be noncitizens, and a “possible match” via facial recognition. The agent testified that the app did not indicate how confident the system was in a match. “It’s just an image, your honor. You have to look at the eyes and the nose and the mouth and the lips.”
-
Business1 week agoPSX witnesses 6,000-point on Middle East tensions | The Express Tribune
-
Tech1 week agoThe Surface Laptop Is $400 Off
-
Tech1 week agoHere’s the Company That Sold DHS ICE’s Notorious Face Recognition App
-
Tech4 days agoHow to Watch the 2026 Winter Olympics
-
Tech6 days agoRight-Wing Gun Enthusiasts and Extremists Are Working Overtime to Justify Alex Pretti’s Killing
-
Business1 week agoBudget 2026: Defence, critical minerals and infra may get major boost
-
Entertainment1 week agoPeyton List talks new season of "School Spirits" and performing in off-Broadway hit musical
-
Fashion7 days agoItaly’s Brunello Cucinelli debuts Callimacus AI e-commerce experience
