From trust to turbulence: Cyber’s road ahead in 2026

In 2025, trust became the most exploited surface in modern computing. For decades, cyber security centred on vulnerabilities: software bugs, misconfigured systems and weak network protections. The year’s incidents marked a clear turning point, as attackers no longer needed to rely solely on those traditional techniques.

This shift wasn’t subtle. It emerged across nearly every major incident: supply chain breaches leveraging trusted platforms, credential abuse across federated identity systems, misuse of legitimate remote access tools and cloud services, and AI-generated content slipping past traditional detection mechanisms. In other words, even well-configured systems could be abused if defenders assumed that trusted meant safe.

Drawing out the lessons of 2025 is essential if cyber security professionals are to understand the evolving threat landscape and adapt their strategies accordingly.

The perimeter is irrelevant – trust is the threat vector

Organisations discovered that attackers exploit assumptions just as effectively as vulnerabilities, simply by borrowing trust signals that security teams overlooked. Attackers blended into environments using standard developer tools, cloud-based services and signed binaries that were never designed with strong telemetry or behavioural controls.

The rapid growth of AI in enterprise workflows was also a contributing factor. From code generation and operations automation to business analytics and customer support, AI systems began making decisions previously made by people. This introduced a new category of risk: automation that inherits trust without validation. The result? A new class of incidents in which attacks weren’t loud or obviously malicious, but piggybacked on legitimate activity, forcing defenders to rethink what signals matter, what telemetry is missing and which behaviours should be considered sensitive even if they originate from trusted pathways.

Identity and autonomy took centre stage

Beyond software vulnerabilities, identity now defines the modern attack surface. As more services, applications, AI agents and devices operate autonomously, attackers increasingly target identity systems and the trust relationships between components. Once an attacker holds a trusted identity, they can move with minimal friction, expanding the meaning of privilege escalation: it is no longer just about obtaining higher system permissions, but also about leveraging an identity that others naturally trust. Faced with these identity-centred attacks, defenders realised that distrust by default must now apply not only to network traffic, but also to workflows, automation and the decisions made by autonomous systems.

AI as both a power tool and a pressure point

AI acted as both a defensive accelerator and a new frontier of risk. AI-powered code generation sped up development but also introduced logic flaws when models filled gaps based on incomplete instructions. AI-assisted attacks became more customised and scalable, making phishing and fraud campaigns harder to detect. Yet the lesson wasn’t that AI is inherently unsafe; it was that AI amplifies whatever controls (or lack of controls) surround it. Without validation, AI-generated content can mislead. Without guardrails, AI agents can make risky decisions. Without observability, AI-driven automation can drift into unintended behaviour. AI security, in other words, is about the entire ecosystem: LLMs, GenAI apps and services, AI agents and the underlying infrastructure.
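
To make the guardrail point concrete, here is a minimal sketch of a validation gate placed between an AI generator and execution. It is illustrative only: the function name, deny-list patterns and dry-run behaviour are assumptions made for this example rather than a reference to any particular product, and a production guardrail would rely on allow-lists, sandboxing and policy engines rather than pattern matching.

    import re
    import subprocess

    # Illustrative deny-list: patterns an AI-generated shell command must not contain.
    DENY_PATTERNS = [
        r"rm\s+-rf\s+/",        # destructive filesystem operations
        r"curl\s+.*\|\s*sh",    # piping remote content straight into a shell
        r"\bchmod\s+777\b",     # overly permissive file modes
    ]

    def run_generated_command(command: str, dry_run: bool = True) -> str:
        """Validate an AI-generated shell command before (optionally) executing it."""
        for pattern in DENY_PATTERNS:
            if re.search(pattern, command):
                raise PermissionError(f"Blocked by guardrail: matched {pattern!r}")
        if dry_run:
            # Observability first: surface what the automation intends to do.
            return f"[dry-run] would execute: {command}"
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        return result.stdout

    # Example: the generated command is inspected before anything runs.
    print(run_generated_command("ls -la /var/log"))

The design choice is that the safe path is the default: execution is opt-in, and everything the automation intends to do is visible before it happens.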

A shift towards governing autonomy

As organisations increase their reliance on AI agents, automation frameworks and cloud-native identity systems, security will transition from patching flaws to controlling decision-making pathways. We will see the following defensive strategies in action:

  • AI control-plane security: Security teams will establish governance layers around AI agent workflows, ensuring every automated action is authenticated, authorised, observed and reversible (a simple sketch of such a gate follows this list). The focus will expand from guarding data to guarding behaviour.
  • Data drift protection: AI agents and automated systems will increasingly move, transform and replicate sensitive data, creating a risk of silent data sprawl, shadow datasets and unintended access paths. Without strong data lineage tracking and strict access controls, sensitive information can drift beyond approved boundaries, leading to new privacy, compliance and exposure risks.
  • Trust verification across all layers: Expect widespread adoption of “trust-minimised architectures,” where identities, AI outputs and automated decisions are continuously validated rather than implicitly accepted.
  • Zero trust as a compliance mandate: Zero-trust architecture (ZTA) will become a regulatory requirement for critical sectors, with executives facing increased personal accountability for significant breaches tied to poor security posture.
  • Behavioural baselines for AI and automation: Just like user behaviour analytics matured for human accounts, analytics will evolve to establish expected patterns for bots, services and autonomous agents.
  • Secure-by-design identity: Identity platforms will prioritise strong lifecycle management for non-human identities, limiting the damage when automation goes wrong or is hijacked.
  • Intent-based detection: Since many attacks will continue to exploit legitimate tools, detection systems will increasingly analyse why an action occurred rather than just what happened.
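
As a rough illustration of the AI control-plane idea above, the following sketch wraps each agent action in a single gate that authenticates the non-human identity, authorises the verb against policy, logs the decision and refuses to run anything that cannot be rolled back. The agent identities, policy table and function names are hypothetical; a real control plane would build on existing workload identity, policy-as-code and audit tooling.

    import logging
    from dataclasses import dataclass
    from typing import Callable

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("ai-control-plane")

    @dataclass
    class AgentAction:
        agent_id: str                   # non-human identity performing the action
        verb: str                       # e.g. "restart_service"
        execute: Callable[[], None]
        rollback: Callable[[], None]    # reversibility: no rollback, no execution

    # Hypothetical policy: which agent identities may perform which verbs.
    POLICY = {"deploy-bot": {"restart_service"}, "data-bot": {"export_report"}}

    def authenticated(agent_id: str) -> bool:
        # Placeholder for real identity verification (workload identity, mTLS, etc.).
        return agent_id in POLICY

    def gate(action: AgentAction) -> None:
        """Authenticate, authorise, observe and keep the action reversible."""
        if not authenticated(action.agent_id):
            raise PermissionError(f"Unknown agent identity: {action.agent_id}")
        if action.verb not in POLICY[action.agent_id]:
            raise PermissionError(f"{action.agent_id} is not authorised for {action.verb}")
        log.info("agent=%s verb=%s approved", action.agent_id, action.verb)  # observed
        try:
            action.execute()
        except Exception:
            log.exception("action failed, rolling back")
            action.rollback()           # reversible by construction
            raise

    # Example: an authenticated, authorised, auditable and reversible agent action.
    gate(AgentAction(
        agent_id="deploy-bot",
        verb="restart_service",
        execute=lambda: log.info("restarting service"),
        rollback=lambda: log.info("restoring previous state"),
    ))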

If 2025 taught us that trust can be weaponised, then 2026 will teach us how to rebuild trust in a safer, more deliberate way. The future of cyber security isn’t just about securing systems, but also about securing the logic, identity and autonomy that drive them.

Aditya K Sood is vice president of security engineering and AI strategy at Aryaka.


