It’s not just that current AI tools are making cybercrime easier; it’s the speed at which new tools are being developed that concerns cybersecurity expert and BlackFog CEO Dr Darren Williams.
As new warnings emerge about the threat that artificial intelligence-driven cyberattacks pose to organisations, it has become evident that we’re at an inflection point in the fight against ransomware.
Armed with the ability to create ever more convincing emails and deepfake videos to deceive and defraud, criminal groups have harnessed AI to power up their attacks, increase their profits or further their ideological causes.
Ransomware gangs are increasingly deploying AI across every stage of their operations, from initial research to payload deployment and negotiations. Smaller outfits can punch well above their weight in terms of scale and sophistication, while more established groups are transforming into fully automated extortion machines.
As new gangs emerge, evolve and adapt to boost their chances of success, here we explore the AI-driven tactics that are reshaping ransomware as we know it.
How AI is making ransomware faster and more scalable
Over the past year, the volume of ransomware attacks has steadily increased, and we have already tracked a record-breaking number of incidents in the first three months of 2025. The use of AI tools is elevating attacks to a new level, enabling threat groups to strike more often and in greater numbers.
Mirroring the way large language models (LLMs) such as ChatGPT have become mainstays in the business world, cybercriminals are steadily stripping away the more time-consuming manual elements of their attacks. Combined with the ransomware-as-a-service (RaaS) model that provides greater access to tools, tactics and target lists, this means it’s now far easier for the average group to launch an effective strike.
One recent example is FunkSec, a small ransomware group that rapidly expanded its reach using AI-powered tools. All signs point to the gang being unremarkable – a small number of members with rudimentary coding skills and basic English. Yet despite lacking technical sophistication and resources, the gang amassed more than 80 victims in a single month. Analysis indicates this was achieved with heavy use of AI throughout their operations, improving the quality of their malware.
By removing human limitations, AI is allowing ransomware operations to scale like never before. Attackers can now execute high-volume, high-efficiency campaigns with precision, leaving security teams struggling to keep pace.
AI-driven phishing is making initial access easier
Alongside launching more attacks, AI tools are also helping ransomware gangs strike more effectively. Phishing emails are one of the most common attack vectors for ransomware, and generative AI (GenAI) tools make it easier for cybercriminals to craft more personalised and convincing messages.
For example, alongside improving its malware, FunkSec is also likely using AI to write phishing emails and ransom demands in perfect English. The group even deployed its own custom LLM-powered chatbot to handle negotiations, compensating for its small size.
LLMs can learn the style and tone of specific individuals with data harvested through compromised accounts or found openly online. This saves cybercriminals a great deal of time and effectively eliminates the language errors and inconsistencies which would otherwise indicate the email was a phishing message.
Alongside generating text, we also see more cases of criminal groups using AI-generated video and audio to deceive their victims. In a recent high-profile example, a phishing campaign used a deepfake video of YouTube CEO Neal Mohan announcing a new monetisation policy to deliver an executable file that would take over the user’s system.
With AI handling the creation and execution of phishing attacks, cybercriminals can launch high-volume social engineering campaigns with minimal effort. The business of deception was already well on its way to being a fully automated, scalable operation, and AI is supercharging this growth.
AI-enhanced malware is evading detection
Cybercriminal groups will typically pursue the path of least resistance to making a profit. As such, most malicious uses of AI so far have been lower-hanging fruit, focused on automating existing processes. That said, there is also a significant risk of more tech-savvy groups using AI to enhance the effectiveness of the malware itself.
Perhaps the most dangerous example is polymorphic ransomware, which uses AI to mutate its code in real time. Each time the malware infects a new system, it rewrites itself, making detection far more difficult as it evades antivirus and endpoint security tools that look for specific signatures.
Self-learning capabilities and independent adaptability are drastically increasing the chances of ransomware reaching critical systems and propagating before it can be detected and shut down.
Fighting against the new frontier of AI ransomware
Ransomware is only going to become more dangerous as criminal groups see better results and reap greater rewards. Within the next few years, malware could self-propagate, infiltrate networks and issue ransom demands with very little human oversight.
AI could even handle the extortion by itself, analysing the victim’s financial data, insurance policies and transaction history to deliver coldly calculated demands, precision-tuned for maximum payout.
However, AI can be a weapon for defenders, too. Advanced AI-driven detection and response solutions can analyse behavioural patterns in real time, identifying anomalies that signature-based tools might miss. Continuous network monitoring helps detect suspicious activity before ransomware can activate and spread.
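To make the idea concrete, the sketch below shows, in broad strokes, what behaviour-based detection looks like: instead of matching known signatures, it watches a rolling baseline of activity and flags sharp deviations, such as the burst of file writes produced by mass encryption. This is a deliberately simplified, hypothetical example; the event names, window size and threshold are assumptions for illustration, not any vendor’s actual implementation.

```python
# Illustrative sketch only: a toy behavioural anomaly detector.
# All names and thresholds here are hypothetical assumptions.
from collections import deque
from dataclasses import dataclass
import statistics


@dataclass
class FileWriteEvent:
    """One observation: how many file writes a process made in the last minute."""
    process: str
    writes_per_minute: int


class BehaviouralMonitor:
    """Flags processes whose write rate deviates sharply from a rolling
    baseline, the kind of signal signature-based scanners miss entirely."""

    def __init__(self, window: int = 30, threshold_sigmas: float = 4.0):
        self.history: deque = deque(maxlen=window)  # recent write rates
        self.threshold = threshold_sigmas

    def observe(self, event: FileWriteEvent) -> bool:
        alert = False
        if len(self.history) >= 10:  # need a baseline before judging
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0  # avoid divide-by-zero
            # Mass encryption shows up as a huge z-score against normal behaviour
            if (event.writes_per_minute - mean) / stdev > self.threshold:
                alert = True
        self.history.append(event.writes_per_minute)
        return alert


monitor = BehaviouralMonitor()
for rate in [12, 9, 15, 11, 10, 13, 8, 14, 12, 10, 11, 900]:  # 900 = sudden burst
    if monitor.observe(FileWriteEvent("wordproc.exe", rate)):
        print(f"ALERT: {rate} writes/min is far outside baseline, possible encryption")
```

Real products track far more signals, from process lineage to network beaconing, but the principle is the same: the anomaly, not the signature, is what gives the ransomware away.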
AI solutions are also important for preventing data exfiltration, which is used in 95pc of attacks. By blocking unauthorised data transfers with anti-data exfiltration (ADX) technology, organisations can shut down extortion attempts so that attackers have no choice but to move on.
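The principle behind ADX can also be illustrated with a small, hypothetical sketch: treat outbound transfers as deny-by-default, and block anything heading to an unknown destination or exceeding a sensible egress budget. The allow-list, budget and function names below are invented for illustration and are not BlackFog’s product code.

```python
# Illustrative sketch of the idea behind anti-data exfiltration (ADX).
# The policy values and names are hypothetical assumptions.
from dataclasses import dataclass

ALLOWED_DESTINATIONS = {"backup.internal.example.com"}  # assumed allow-list
MAX_BYTES_PER_MINUTE = 5 * 1024 * 1024                  # assumed 5MB/min budget


@dataclass
class OutboundTransfer:
    process: str
    destination: str
    bytes_requested: int


def should_block(transfer: OutboundTransfer, bytes_sent_this_minute: int) -> bool:
    """Deny-by-default egress check: unknown destinations and budget
    overruns are both blocked before any data leaves the network."""
    if transfer.destination not in ALLOWED_DESTINATIONS:
        return True
    return bytes_sent_this_minute + transfer.bytes_requested > MAX_BYTES_PER_MINUTE


# A process suddenly pushing 40MB to an unknown host gets stopped cold
t = OutboundTransfer("svc_update.exe", "203.0.113.50", 40_000_000)
print("BLOCK" if should_block(t, bytes_sent_this_minute=0) else "ALLOW")
```

Because double extortion depends on data actually leaving the network, blocking the transfer itself removes the attacker’s leverage even when the initial intrusion succeeds.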
The greatest concerns are not only how AI tools are misused today, but also the speed at which new tactics and tools are being developed.
AI has become the next focal point in the continuous game of cat and mouse between attacker and defender, so those security teams that can effectively adopt AI in their defences have the best chance of keeping the attackers at bay.
Dr Darren Williams is CEO and founder of BlackFog, a global cybersecurity start-up. He is responsible for strategic direction, leads the company’s global expansion and has pioneered data exfiltration prevention technology to stop cyberattacks across the globe.