Advanced protection: generative AI bots are already being used to help human analysts detect and respond to hacks © Laurence Dutton/Getty Images

Artificial intelligence technology has been a buzzword in cyber security for a decade now — cited as a way to flag vulnerabilities and recognise threats by carrying out pattern recognition on large amounts of data. Anti-virus products, for example, have long used AI to scan for malicious code, or malware, and send alerts in real time.
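To make that pattern-recognition idea concrete, a minimal sketch might train an anomaly detector on a baseline of benign process telemetry and alert when new activity deviates from it. The features, figures and model choice below are illustrative assumptions, not any vendor's actual detection logic.

```python
# A minimal sketch of pattern-based threat detection: learn a baseline of
# "normal" process behaviour, then flag outliers. All features and numbers
# here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-process features: [bytes written, files touched, network calls]
baseline = rng.normal(loc=[200, 10, 5], scale=[50, 3, 2], size=(1000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A process that suddenly writes heavily and touches hundreds of files
suspect = np.array([[5000, 400, 80]])
if detector.predict(suspect)[0] == -1:  # -1 marks an anomaly
    print("ALERT: process behaviour deviates from the learned baseline")
```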

But the advent of generative AI, which enables computers to generate complex content — such as text, audio and video — from simple human inputs, offers further opportunities to cyber defenders. Its advocates promise it will boost efficiency in cyber security, help defenders launch a real-time response to threats, and even help them outpace their adversaries altogether.

“Security teams have been using AI to detect vulnerabilities and generate threat alerts for years, but generative AI takes this to another level,” says Sam King, chief executive of security group Veracode.

“Now, we can use the technology not only to detect problems, but also to solve and, ultimately, prevent them in the first place.”

Generative AI technology was first thrust into the spotlight by the launch of OpenAI’s ChatGPT, a consumer chatbot that responds to users’ questions and prompts. Unlike the technology that came before it, generative AI “has adaptive learning speed, contextual understanding and multimodal data processing, and sheds the more rigid, rule-based coat of traditional AI, supercharging its security capabilities,” explains Andy Thompson, offensive research evangelist at CyberArk Labs.

So, after a year of hype around generative AI, are these promises being delivered upon?

Already, generative AI is being used to create specific models, chatbots, or AI assistants that can help human analysts detect and respond to hacks — similar to ChatGPT, but for cyber security. Microsoft has launched one such effort, which it calls Security Copilot, while Google has a model called SEC Pub.

“By training the model on all of our threat data, all of our security best practices, all our knowledge of how to build secure software and secure configurations, we already have customers using it to increase their ability to analyse attacks and malware to create automated defences,” says Phil Venables, chief information security officer of Google Cloud.

And there are many more specific use cases, experts say. For example, the technology can be used for attack simulation, or to ensure that a company’s code is kept secure. Veracode’s King says: “You can now take a GenAI model and train it to automatically recommend fixes for insecure code, generate training materials for your security teams, and identify mitigation measures in the event of an identified threat, moving beyond just finding vulnerabilities.”
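As a sketch of the code-fixing use case King describes, one could pass an insecure snippet to a general-purpose model and ask for a remediation. The prompt, model name and vulnerable example below are assumptions for illustration, using OpenAI's standard chat completions API; this is not Veracode's product.

```python
# Hedged sketch: ask a general-purpose LLM to suggest a fix for insecure code.
# The model name and prompt are illustrative choices, not a vendor workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

insecure_snippet = '''
def get_user(db, username):
    # Vulnerable: user input is interpolated straight into SQL
    return db.execute(f"SELECT * FROM users WHERE name = '{username}'")
'''

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a secure-code reviewer. Return a corrected "
                    "version of the code and a one-line explanation."},
        {"role": "user", "content": insecure_snippet},
    ],
)
print(response.choices[0].message.content)
```

In practice, a security vendor would wrap this in scanning, validation and human review rather than applying model output directly.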

Generative AI can also be used for “generating [and] synthesising data” with which to train machine learning models, says Gang Wang, associate professor of computer science at the University of Illinois Grainger College of Engineering. “This is particularly helpful for security tasks where data is sparse or lacks diversity,” he notes.
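A minimal sketch of that synthesis idea, using invented log fields: generate extra examples of a rare attack class so that a downstream classifier has enough positive cases to learn from.

```python
# Hedged sketch: synthesise authentication events to pad a sparse attack class.
# Field names and distributions are invented for illustration.
import random

random.seed(0)

def synth_login_event(malicious: bool) -> dict:
    """Generate one synthetic login record, benign or malicious."""
    return {
        "hour": random.randint(1, 4) if malicious else random.randint(8, 18),
        "failed_attempts": random.randint(5, 30) if malicious else random.randint(0, 2),
        "new_device": malicious or random.random() < 0.1,
        "label": int(malicious),
    }

# 1,000 benign events plus 200 synthetic attacks to balance the training set
training_set = ([synth_login_event(False) for _ in range(1000)]
                + [synth_login_event(True) for _ in range(200)])
print(training_set[0], training_set[-1])
```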

The potential for developing AI cyber security systems is now driving dealmaking in the cyber sector — such as the $28bn acquisition of US security software maker Splunk by Cisco in September. “This acquisition reflects a wider trend and illustrates the industry’s growing adoption of AI for enhanced cyber defences,” says King.

She points out that such tie-ups allow acquirers to expand their AI capabilities swiftly, while also giving them access to more data with which to train their AI models effectively.

Nevertheless, Wang cautions that AI-driven cyber security cannot “fully replace existing traditional methods”. To be successful, “different approaches complement each other to provide a more complete view of cyber threats and offer protections from different perspectives”, he says.

For example, AI tools may have high false positive rates — meaning they are not accurate enough to be relied upon alone. While they may be able to identify and halt known attacks swiftly, they can struggle with novel threats, such as so-called “zero day” attacks that exploit previously unknown vulnerabilities.
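The base-rate arithmetic behind that caution is simple to work through. With illustrative numbers (none from the article), even a detector that wrongly flags only 1 per cent of benign events produces far more false alarms than real alerts when genuine attacks are rare:

```python
# Base-rate arithmetic for alert quality. All figures are illustrative.
events_per_day = 1_000_000
attack_rate = 1 / 100_000         # one genuine attack per 100,000 events
true_positive_rate = 0.99         # detector catches 99% of real attacks
false_positive_rate = 0.01        # and wrongly flags 1% of benign events

attacks = events_per_day * attack_rate                           # 10
true_alerts = attacks * true_positive_rate                       # ~10
false_alerts = (events_per_day - attacks) * false_positive_rate  # ~10,000

precision = true_alerts / (true_alerts + false_alerts)
print(f"{true_alerts:.0f} real alerts vs {false_alerts:.0f} false alarms")
print(f"Share of alerts that are genuine: {precision:.2%}")  # about 0.1%
```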

As AI hype continues to sweep the tech sector, cyber professionals must deploy it with care, experts warn, maintaining standards around privacy and data protection, for example. According to Netskope Threat Labs data, sensitive data is shared in a generative AI query every hour of the working day in large organisations, which could provide hackers with fodder to target attacks.

Steve Stone, head of Rubrik Zero Labs at data security group Rubrik, also notes the emergence of hacker-friendly generative AI chatbots such as “FraudGPT” and “WormGPT”, which are designed to enable “even those with minimal technical” skills to launch sophisticated cyber attacks.

Some hackers are wielding AI tools to write and deploy social engineering scams at scale, and in a more targeted manner — for example, by replicating a person’s writing style. According to Max Heinemeyer, chief product officer at Darktrace, a cyber security AI company, there was a 135 per cent rise in “novel social engineering attacks” from January to February 2023, in the wake of the introduction of ChatGPT.

“2024 will show how more advanced actors like APTs [advanced persistent threats], nation-state attackers, and advanced ransomware gangs have started to adopt AI,” he says. “The effect will be even faster, more scalable, more personalised and contextualised attacks, with a reduced dwell time.”

Despite this, many cyber experts remain optimistic that the technology will be a boon for cyber professionals overall. “Ultimately, it is the defenders who have the upper hand, given that we own the technology and thus can direct its development with specific use cases in mind,” says Venables. “In essence, we have the home-field advantage and intend to fully utilise it.”
