Legacy approaches no longer enough to ward off attacks, experts warn.
2025 was littered with cyberattacks and outages, involving government organisations, airlines, retail giants and some of the biggest names protecting the web.
AI and newer technologies such as quantum computing are making the landscape even more complex: they give security teams the tools to tackle increasingly sophisticated attacks, while also being among the very causes of these challenges.
Continuing from previous years, experts predict a rise in AI malware and a “constant and hybrid” cyber conflict domain in 2026. “The threat from cybercriminals is getting more challenging all the time,” comments Roy Shelton, the founder of UK-based IT service provider Connectus.
“Attacks are more sophisticated, more targeted and harder to detect, and organisations can no longer rely on legacy approaches to keep them safe.”
Fragmented data, but a more balanced spread
A majority of our data is in the cloud and connected through third-party integrations, making SaaS supply chains a particularly vulnerable target. Mike Britton, the CIO at Abnormal AI, says that attacking these chains creates a “massive return on investment at a relatively low risk”.
As customers demand tighter control over sensitive data, experts predict that enterprises could migrate select workloads from public clouds back into their own data centres. This is driven by fears of cloud outages and cyberattacks, as well as apprehension that large language models will consume their data.
“The next phase of cloud adoption will look more balanced,” explains John Kindervag, the chief evangelist at Illumio. “Customers want tighter control over sensitive data and less exposure to cloud outages or the risk that public large language models will ingest proprietary information.”
Data fragmentation causes harm in other ways as well. Experts warn that the rapid adoption of agentic AI will result in a “hyperconnectivity” that could overwhelm security teams and create blind spots across the digital infrastructure.
The rush to implement agentic AI will lead to insufficient supervision over how these agents interact with other systems, says Michael Adjei, the director of system engineering at Illumio. “Organisations will struggle to understand what access agents have to their systems and whether they are interacting with customer and sensitive data in the right way.”
‘Safe rooms’ for data
2026 might see the emergence of a digital “safe room” to protect data against malicious AI, says Andre Reitenbach, CEO and co-founder of Gcore.
“These secure environments will allow organisations to operate with confidence that what they see, process, and create is real.
“Like an advanced locking system that blocks wrong passcodes, digital safe rooms will be supported by AI-enforced tooling that can recognise malicious AI through identifying patterns and behaviours.”
It’s a legacy solution adapted to future needs, he explains. By using isolation and protection tactics, enterprises can be confident they are accessing data that has not been tampered with.
Identity theft is not a joke
Studies throughout the past year have suggested that bad actors will weaponise agentic AI to commit identity-based attacks.
“Depending on how people use agents, they are, in a way, relinquishing part of their identity to autonomous AI,” Adjei says. “Agents will be assuming people’s identity, accessing usernames, passwords and tokens to log in to systems for automated convenience.”
For enterprises, this means an increase in CEO deepfakes. Adam Boynton, the senior security strategy manager for Europe, the Middle East, India and Africa at Jamf, says to expect more cybercriminals using deepfakes of high-profile CEOs and executives of major brands.
“These criminals won’t just focus on transferring money but also on stealing data. As a result, information may be exposed in ways never seen before,” he adds.
New ‘ransomware moment’
Aside from the dozens of headline-making cyberattacks last year, several major outages also proved to be a serious challenge. Some experts are calling AI outages the new “ransomware moment”.
“In 2026, the biggest wake-up call for enterprises will be unexpected AI outages,” says Don Boxley, the CEO and co-founder of DH2i.
“As more organisations rely on AI systems for customer service, fraud detection, claims processing, supply chain routing, and decision automation, even a few minutes of downtime will create real-world business disruption.”
How big of a threat is quantum?
AI is not the only threat facing enterprises. Last year, Nvidia CEO Jensen Huang declared that quantum computing was on its way to an “inflection point”.
When fully realised and used in conjunction with AI, this technology could create massive new problems for enterprise security, particularly if it is used to break current encryption.
Though that moment is still a while away, Paul Delahunty, the chief information security officer at Irish cybersecurity provider Stryve, says that quantum-resistant cryptography, or post-quantum cryptography (PQC), must be in place before it arrives.
Delahunty believes 2026 will be the year “PQC adoption goes mainstream”.
Are we prepared?
It’s not all negative, however. Experts are seeing a transformation in security operations centres (SOCs) to tackle these powerful new kinds of cyberattacks.
“AI copilots will be embedded throughout detection and response workflows to spot anomalies, fill data gaps, and recommend next actions,” says Illumio’s VP of industry strategy Raghu Nandakumara.
“This creates a more complete view of the environment, surfacing relevant threats faster, and reducing mean time to detect and respond.”
Meanwhile, Tom Findling, the co-founder and CEO of Conifers, comments that the first “real” steps towards security artificial general intelligence (AGI) will begin to take place.
“Security AGI describes systems that understand the entire environment of an organisation, including assets, controls, behavioural patterns, and previous incidents,” he says.
He also believes that SOCs will see further integration with AI.
“AI systems will handle the multiple stages of detection and response, while human analysts will focus on model training, oversight, and performance measurement.”
Insurers tighten the reins
Cyber insurers are tightening the reins, increasingly demanding demonstrable evidence of active monitoring, incident response capability and continuous oversight, says Brian Sibley, virtual chief technology officer at Espria.
“Insurance providers want certainty. They want proof that an organisation can detect and contain a breach quickly.”
Organisations are entering a cybersecurity environment defined by AI-driven attacks, opaque supply chains, expanding digital ecosystems and rising insurance scrutiny. Yet many businesses still rely on fragmented tools, manual processes or outdated defences that cannot withstand the speed and sophistication of emerging threats, he explains.
“Threat actors are innovating faster than ever. AI has changed the economics of attack – the supply chain has become a target in its own right, and insurers are placing unprecedented pressure on businesses to demonstrate resilience.”