How will AI affect cyber security and will it be positive?

Right now, the answer depends on who you ask, which is confusing for professionals whose job it is to have an informed opinion. If you’re a tech or security vendor, it’s likely you see AI as something positive that cyber security has been building up to for the last 30 years.
As former Google China head Kai-Fu Lee stated in his 2021 book, AI 2041:
“Artificial intelligence could be the most transformative technology in the history of mankind.”
That’s quite a claim. Lee wasn’t talking about cyber security specifically, but it’s hard to imagine a technology that could transform mankind without also transforming cyber security.
Sceptics aren’t hard to find either. These people see AI as a double-edged sword that will empower defenders with advanced capabilities while putting them at risk as criminals learn how to deploy their own offensive AI. This draws our attention to a defining characteristic of AI – its techniques can be exploited by anyone with access to open source models and can’t necessarily be locked up inside proprietary systems.
Beyond that, a fundamental anxiety is that AI systems might themselves come under attack and become untrustworthy, perhaps without anyone knowing this has happened.
AI’s use in cyber-defence
Check the spec sheet of almost any recent security product and the word AI will be on there somewhere. Its uses include automation (performing repetitive tasks and making it possible to respond to incidents at speed) and spotting and reacting to unknown threats that traditional security technologies struggle to make sense of.
AI’s basic mode uses supervised or unsupervised machine learning (ML) to spot patterns and make predictions from large data sets. One step up from this is deep learning (DL), a more advanced approach based on multi-layered neural networks, in which the output of one learning stage becomes the input to the next. It is the latter – designed to work at scale while requiring less human input – that cyber security architects build their AI on.
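As a toy illustration of the ML end of this spectrum, the sketch below uses scikit-learn’s IsolationForest, an unsupervised method, to flag a network flow whose behaviour deviates from a learned baseline. The feature set and all numbers are invented for the example; real products train on far richer telemetry.

```python
# A minimal sketch, assuming scikit-learn is available. The feature set
# (bytes sent, bytes received, duration) and all values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate a baseline of "normal" flows: [bytes sent, bytes received, seconds]
normal_flows = rng.normal(loc=[5_000, 20_000, 30],
                          scale=[1_000, 4_000, 10],
                          size=(500, 3))

# Unsupervised: the model learns what normal looks like, no attack labels needed
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_flows)

# A flow sending far more data than the baseline, e.g. exfiltration
suspect = np.array([[250_000, 1_500, 600]])
print(model.predict(suspect))  # [-1] means flagged as anomalous
```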
On paper, the concept looks compelling. AI systems don’t get tired or bored, don’t make unforced errors, and don’t pack up on a Friday for the weekend. On the other hand, despite numerous claims, little of this has been validated against today’s cyber security threats, which still depend overwhelmingly on conventional, non-AI techniques.
AI-powered cyber attacks
An obvious problem with AI cyber-defence is that attackers can also up their game by using the same technology to outsmart defences. There is very little evidence that this is yet happening at any scale, which suggests it is so far being used only experimentally.
On the other hand, if AI were being used, how would anyone know? The attacks that are common right now, such as credential theft and vulnerability exploits, don’t need AI to succeed. Arguably, this situation creates a sizable knowledge gap.
This is probably already changing, with attackers able to use AI to accelerate and automate today’s attacks so that they work more effectively, for example to:
- Tailor phishing attacks to be more convincing, for example using deepfake voice and video.
- Automate ransomware attacks and target reconnaissance.
- Remain undetected inside networks by replicating legitimate traffic patterns.
- Defeat anti-bot CAPTCHAs by accurately mimicking human behaviour (see the timing sketch after this list).
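On that last point, one small and deliberately benign ingredient of mimicking human behaviour is timing. The toy sketch below contrasts naive fixed-interval automation with delays drawn from a roughly human-like log-normal distribution; the parameters are invented purely for illustration.

```python
# A toy comparison, not real attack code: naive bots act at machine-regular
# intervals, while AI-driven automation can sample human-like delays. The
# log-normal parameters are invented for the illustration.
import random

def bot_delays(n):
    """Naive automation: identical gaps between actions."""
    return [0.5 for _ in range(n)]

def humanlike_delays(n):
    """Human inter-action times are roughly log-normal:
    mostly short pauses, with the occasional long one."""
    return [random.lognormvariate(mu=-0.5, sigma=0.8) for _ in range(n)]

print(bot_delays(5))                               # [0.5, 0.5, 0.5, 0.5, 0.5]
print([round(d, 2) for d in humanlike_delays(5)])  # e.g. [0.3, 0.75, 1.9, ...]
```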
Mostly, what AI adds to such attacks is speed. Everything can be done faster, which means that a target’s defences can be assessed more swiftly and a range of compromise techniques deployed faster than the defenders can respond.
For anyone who wants more insight into how AI cyber attacks might work, the Finnish Transport and Communications Agency (Traficom) recently produced a report that outlines some of the possibilities.
Adversarial AI
Another threat is that AI could be used to undermine itself. This idea is called adversarial AI and, as far as anyone knows, has only been attempted by researchers trying to test its limits.
In this mode, adversarial AI (not to be confused with generative adversarial networks, or GANs, which are completely different) would be used to confuse and fool AI systems under real-world conditions. A variety of techniques have been suggested, including:
- Tricking a machine learning model into misclassifying images or spam by feeding it subtly deceptive data (an evasion attack; see the sketch after this list).
- A data poisoning attack, in which the data used to train the AI model is manipulated (this requires access to the training data).
- Fooling online classification systems by sending them fake data.
- A denial of service attack, in which an AI system is slowed down by being bombarded with computationally expensive problems.
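To make the first of these concrete, here is a minimal sketch of an evasion attack against a deliberately simple linear spam classifier, using synthetic data. For a linear model, the smallest input change that flips the prediction lies along the model’s weight vector; attacks on deep models (FGSM, for example) follow the same gradient-guided logic.

```python
# A minimal sketch, assuming scikit-learn; all data is synthetic and the
# classifier is deliberately simple. 1 = spam, 0 = legitimate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)),   # class 0: legitimate
               rng.normal(2, 1, (100, 5))])  # class 1: spam
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression().fit(X, y)

spam = rng.normal(2, 1, (1, 5))  # a fresh spam-like sample
w = clf.coef_[0]
margin = clf.decision_function(spam)[0]

# Move just far enough along the weight vector to cross the decision
# boundary: a small, targeted nudge rather than a wholesale rewrite
adversarial = spam - ((margin + 0.1) / np.dot(w, w)) * w

print(clf.predict(spam))         # [1]: caught by the filter
print(clf.predict(adversarial))  # [0]: the nudged version slips through
```

The perturbation can be computed exactly here because the model is linear; against deep networks, attackers approximate the same idea using gradients, which is why these are often described as gradient-based evasion attacks.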
Another factor increasing the theoretical vulnerability of AI to adversarial attacks is that many deployments won’t be centralised. One of the mega-trends in network design is edge computing, the idea that servers and applications should sit as close as possible to the traffic they serve in order to increase performance and resilience. The same applies to AI models deployed at edge locations, which could make them harder to monitor for tampering and manipulation.
AI skills
Ironically, for an idea that is all about the power of machines, what seems to be limiting AI today is a shortage of human beings who understand it. This is AI’s version of the skills crisis affecting other parts of cyber security. On that point, a mid-2022 survey of 500 technology leaders found that 82% agreed that hiring people with AI skills was difficult.
And yet Gartner recently predicted that “by 2028, AI-driven machines will account for 20% of the global workforce and 40% of all economic productivity.”
This implies that, far from suffering from a skills crisis, AI will eventually replace some jobs. That, presumably, will include some jobs in IT and cyber security that might otherwise have required humans to do them.
Conclusion: A better world for SMBs?
If you’re an SMB worried about affording cyber security investment and skills, the best thing about AI is that it will likely commoditise advanced security technologies that today are out of your price range, bought as services from larger platforms that integrate AI automation and service bots. The downside is that we could see less choice as large tech platforms monopolise the field.
AI won’t remove today’s security worries, but it will transform them into new and unfamiliar forms.