In 2023, artificial intelligence (AI) finally went mainstream. Generative AI tools captured the public imagination in exciting new ways, and organisations began seriously exploring how these new solutions might enhance productivity and put new capabilities at their fingertips. Unfortunately, generative AI empowers ill-intentioned individuals just as readily as well-intentioned ones. Cybercriminals have used the technology to hunt for vulnerabilities more effectively, improve their automated incursion tools, and quickly spin up code for new malicious software. As a result, the security community became infatuated with AI, both for its value in the right hands and its danger in the wrong ones. AI dominated the headlines in 2023 as experts grappled with what it might mean for the future of security.
But as 2024 unfolds, the most significant threats facing most organisations aren’t going to come from generative AI. The truth is that while AI-based solutions offer attackers significant value, there are still limits to what they can do. AI cannot invent new methods for attackers to accomplish their goals; attackers remain constrained by physics, by the capabilities of software and hardware, and by the attack vectors available to them. That doesn’t mean attackers are standing still. Far from it: they have continued to refine their approaches, evolving their tactics to become more efficient and effective. And if businesses want to protect their data in 2024, it’s important to understand not just the flashy new tactics and tools attackers are using, but how they iterate on existing ones.
AI indeed promises the power to enable both defenders and attackers alike, but when analysing which factors contribute the most to a successful attack it’s far from the top of the list. Cyberattacks consistently progress the furthest not when attackers use the most advanced tools, but when visibility or effective security controls are lacking on the defender’s side. AI-based solutions can help attackers identify where those gaps and vulnerabilities exist, but attackers are already exceptionally good at understanding where exposures may lie and taking advantage of them. Cybercriminals don’t need AI to exploit vulnerabilities or insecure configurations: they’ve been doing it for decades, and they are exceedingly efficient at it.
The overwhelming focus on AI has distracted many organisations from shoring up their security fundamentals—a trend that is likely to continue into 2024. AI might help attackers craft more convincing phishing emails, but phishing is already a highly successful attack vector for bad actors. The technology may make it easier for attackers to identify vulnerabilities and misconfigurations, but vulnerabilities and misconfigurations were already significant risks to be mitigated.
On a fundamental level, it doesn’t matter whether attackers are leveraging AI to breach your systems or whether they’re doing it the old-fashioned way—what matters is whether you can detect and stop them before they can cause damage. Organisations that lack visibility across their digital environments will find it impossible to stop intruders whether attackers are armed with AI or not.
It isn’t acknowledged enough, but many of the groups that pose the biggest threats to organisations are the same threat actors year after year (some for the last decade). These actors are effective businesses at this point: they’ve built infrastructure that allows them to conduct advanced attacks, and they have the right people with the right skills to pull attacks off consistently. As flashy and impressive as generative AI is, it can’t replicate the abilities of skilled and experienced adversaries. In the coming year (and, really, for the near future), it will be these groups and individuals that pose the greatest threat to businesses, not AI.
Defending against them more effectively will require organisations to look past shiny objects like AI and instead double down on fundamentals, with the detection and response capabilities to ensure that when attackers get in, they can’t keep their presence a secret for long. Whether those capabilities are developed in-house or outsourced to third-party providers, businesses of all sizes will need to be equipped to address not just the symptoms, but the disease itself.
While the same threat actors are still around, we can anticipate more individual actors and groups making use of existing threat vectors. For example, 2023 saw a substantial rise in phishing and in the broader use of advanced phishing methods, thanks to enterprising criminals who have empowered others to conduct attacks. Specifically, they have created and released phishing platforms that are far more accessible than anything that existed previously, putting well-known tactics in the hands of more actors. More actors and more attacks mean more opportunities for security controls to fail, which is why ensuring defences are in place and robust is essential.
When looking to the future, it’s important to strike a balance between keeping an eye on new and emerging threats and ensuring your security solutions can address those that already exist. Yes, attackers will leverage AI-based tools to improve their attack capabilities, but those tools primarily enhance tactics that should already be familiar to security experts.
Organisations need to avoid being distracted by the allure of generative AI and instead focus on improving their ability to detect and defend against the tactics that attackers are already using.