
Is the future of cyber security machines versus machines?

As hackers increasingly use automation and machine learning to launch cyber attacks at scale, cyber security defenders, too, are turning to artificial intelligence to detect hacks and, in some cases, shut them down automatically.

But the use of AI for cyber defence is still nascent, according to many experts, and must be deployed with care. Some argue there is a tendency for the cyber security industry to exaggerate AI’s potential and successes, and use it as a buzzword.

Machine-on-machine cyber security is “far away”, according to Sohrob Kazerounian, AI research lead at Vectra. “Having a fully automated system in the cyber security domain would mean essentially trusting the computer with decisions.

“There are critical things that would be hugely costly if done incorrectly,” he says. “It’s a question of ‘how accurate is this thing relative to the human?’ And, in the cyber security domain, it’s just simply ‘not very’.”

So how far along are we?

Already, cyber security companies are using AI to help detect potential attacks by flagging suspicious behaviour.

Justin Fier, vice-president of tactical risk and response at Darktrace, says the UK-based company uses “various forms of machine learning to go into your digital estate and, quite simply, establish a sense of self, establish what is specific to an organisation”.

He adds: “The minute something deviates — big or small — we can actually alert you to that.”
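To illustrate the kind of baselining Fier describes, here is a minimal sketch of unsupervised anomaly detection on per-device traffic statistics: learn what is normal for an organisation, then flag deviations. The features, numbers and choice of scikit-learn's IsolationForest are illustrative assumptions, not Darktrace's actual system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical baseline: normal per-device traffic statistics
# (bytes sent in KB, connections per hour, distinct destination ports).
normal = rng.normal(loc=[500, 20, 5], scale=[50, 3, 1], size=(1000, 3))

# Learn the organisation's "sense of self" from the baseline data.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Two new observations: one typical, one deviating sharply
# (the sort of sudden change that might signal data exfiltration).
new = np.array([[510.0, 21.0, 5.0],
                [5000.0, 200.0, 60.0]])
for obs, flag in zip(new, model.predict(new)):
    status = "ALERT: deviates from baseline" if flag == -1 else "normal"
    print(obs, status)
```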

Darktrace also has automated responses to known threats such as ransomware strains. “Now, the median time to detect and remediate ransomware is 45 minutes. In the next year or so, we’re going to be talking in terms of seconds and nanoseconds,” says Fier.

Meanwhile, researchers at Cardiff University recently devised a new method of wielding AI which they argue could automatically detect and kill malware in real time, in just 0.3 seconds on average.

The findings, published in the journal Security and Communication Networks in 2021, are based on building a profile of a piece of malware's behaviour to predict how it will act, and then blocking that activity, rather than analysing the structure of the malware itself, as is common in antivirus software.
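As a rough illustration of behaviour-based detection, the sketch below snapshots a live process's resource use and kills it when a classifier flags the behaviour as malicious. The training data, feature set, `check_process` helper and use of the psutil library are assumptions for illustration; the published pipeline is more sophisticated than this.

```python
import psutil
from sklearn.ensemble import RandomForestClassifier

# Hypothetical labelled behavioural traces: [cpu_percent, memory_mb,
# packets_per_sec]; label 1 = malicious behaviour, 0 = benign.
X_train = [[95, 900, 400], [90, 850, 350], [85, 700, 300],
           [5, 120, 2], [10, 200, 5], [3, 80, 1]]
y_train = [1, 1, 1, 0, 0, 0]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def check_process(pid: int) -> None:
    """Snapshot a running process's behaviour; kill it if flagged malicious."""
    proc = psutil.Process(pid)
    snapshot = [[proc.cpu_percent(interval=0.1),   # CPU use over 100ms
                 proc.memory_info().rss / 1e6,     # resident memory, MB
                 0.0]]                             # packet rate: needs a network monitor
    if clf.predict(snapshot)[0] == 1:
        proc.kill()  # block the behaviour itself, not a file signature
```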

While the research is cutting edge — the technique could stop damage in its tracks in ways that are not possible now — there are limits.

The researchers found that their method prevented fast-acting ransomware from corrupting 92 per cent of files, but with a false-positive rate of 14 per cent, which is too high for real-world deployment. It therefore remains a work in progress.
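A back-of-envelope calculation shows why a 14 per cent false-positive rate is prohibitive: benign activity vastly outnumbers malware on any real system, so false alarms swamp true detections. The process counts below are assumed purely for illustration, not taken from the study.

```python
# Assumed workload: the launch counts are illustrative, not from the study.
benign_launches = 10_000      # benign process launches per day across a fleet
malware_launches = 5          # genuine malware launches per day

false_positive_rate = 0.14    # reported by the Cardiff study
detection_rate = 0.92         # loosely, the share of damage prevented

false_alarms = benign_launches * false_positive_rate   # 1,400 benign kills/day
caught = malware_launches * detection_rate             # fewer than 5 real catches

print(f"{false_alarms:.0f} false alarms for roughly {caught:.1f} true detections a day")
```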

“Every day now, we’re making inroads into . . . trying to refine the algorithms and the statistical filtering itself to reduce the amount of false positives, while still being accurate in killing the right processes and reducing file encryption,” says Pete Burnap, a professor of data science and cyber security at Cardiff University, who worked on the research.

Cas Bilstra, who leads risk intelligence at Dutch security group Eye Security, says “the strength of AI is that it can explore many more possibilities than humans can; it can see patterns that are very deep”.

But Bilstra notes: “It will only do what it has been trained to recognise. If you are a very smart criminal and you come up with some kind of malware that is totally different from all the malware that has been before, a system such as this won’t recognise it, because it’s trained on known malware samples.”

Kazerounian says Vectra’s engineers build AI and machine learning systems that still “keep in mind that the human is going to have to be in the loop”.

Part of improving the systems in future will involve trying to utilise more data to train the machine learning models.

Accessing large data sets can be difficult, however, particularly in the realm of cyber security, where companies are concerned about falling foul of privacy laws, for example.

“Companies will have their own proprietary data sets — it’s intellectual property and protected. It’s tough for people to agree which information to share and how it should be processed,” points out Kazerounian.

In the meantime, researchers will continue to try to simulate attacks and responses over the long term, to develop the automated cyber defences that today seem distant.

“Our five to 10 year challenge is to try and think about how we can script automated attacks on virtualised [simulated] networks . . . [and] to start bringing in this armoury of how do we defend against it,” Burnap says.

Copyright The Financial Times Limited 2024. All rights reserved.