How will the sudden emergence of artificial intelligence (AI) platforms such as ChatGPT influence future ransomware attacks?
Right now, there are so many pessimistic answers to this question that it can be hard to judge the real-world risk AI actually poses.
On the one hand, there’s no doubt that AI can easily be used to improve individual components of today’s attacks, for example polishing the language and design of phishing emails so they read more convincingly (as anyone who has experimentally coaxed ChatGPT to rewrite an awkwardly phrased phishing email will attest).
At the same time, it’s also likely that AI will create entirely new attack capabilities, including ones that might soon render today’s defenses obsolete.
Beyond 2025
Commentary on how this might play out has so far been interesting but subjective. In January, however, we finally got some official analysis from Britain’s National Cyber Security Centre (NCSC).
In “The near-term impact of AI on the cyber threat,” the NCSC assesses the threat AI poses across a wide range of possible cyberattacks, with ransomware near the top of the list.
Over the next two years, the NCSC believes, most of the threat lies in the way AI will enhance today’s attacks, especially those carried out opportunistically by less experienced groups. AI will increase the speed at which groups can spot vulnerabilities, while social engineering will undergo its biggest evolutionary jump yet.
That said, other capabilities will probably remain much as they are now, for example the ease with which attackers can move laterally once inside a network. This is not surprising: lateral movement remains a manual task that demands context-sensitive skill and won’t be easy to automate with AI.
After 2025, however, the influence of AI will grow rapidly and the possibilities will expand. As the NCSC summarizes:
“AI’s ability to summarize data at pace will also highly likely enable threat actors to identify high-value assets for examination and exfiltration, enhancing the value and impact of cyberattacks over the next two years.”
It sounds like a gloomy picture of the future, but there are two important unknowns. The first is how quickly defenders adapt to the threat by improving their defenses, including by using AI themselves to detect and respond to attacks.
The second is whether cybercriminals can get hold of good-quality data with which to train their models. One source is the mountain of old data on the dark web, accumulated from overlapping breaches stretching back two decades.
However, criminals will need fresh data to keep their AI fueled. Assuming breaches continue to happen, that makes stolen data even more valuable than it is today.
Therefore, it’s possible that, in a competitive market, cybercriminals will hang on to the data they’ve stolen for longer than they do today rather than release (or sell) it in a form that helps rival groups train their AI models.
There’s no sign of that happening yet, but if it does come to pass, we might deduce that AI is becoming a real influence. It has become a commonplace that every business today depends on data. What nobody suspected until recently is that ransomware cybercrime might one day run on the same idea.