At a glance
- Generative artificial intelligence (AI) is expected to strengthen cybersecurity, particularly threat identification, although it is unlikely to deliver full automation in the near future.
- Malicious actors are also exploring the potential of generative AI to facilitate cyberattacks through innovations such as self-evolving malware.
- Through a range of measures, buyers and providers of cybersecurity services can take advantage of the new technology while staying protected.
This article is part of Bain’s 2023 Technology Report.
Just months after its public breakthrough, generative AI is showing its potential to transform cybersecurity products and operations. Since the launch of ChatGPT and other products based on large language models (LLMs), the cybersecurity industry has come to regard generative AI as a key tool. That is despite a fundamental challenge generative AI faces in cybersecurity: security data is sensitive and siloed, which makes it difficult to obtain the comprehensive, high-quality data sets needed to train and update an LLM.
So far, threat identification is the hot spot. When we analyzed cybersecurity companies using generative AI, we found that all of them were applying it in the identification stage of the SANS Institute's well-known incident response framework, the highest adoption among the six SANS stages (preparation, identification, containment, eradication, recovery, and lessons learned). This aligns with our assessment that threat identification offers the greatest potential for generative AI to improve cybersecurity (see Figure 1). Generative AI is already helping analysts detect an attack sooner and then better assess its scale and potential impact. For example, it can help analysts filter incident alerts more effectively, weeding out false positives. Over time, generative AI's ability to detect and track threats will become even more dynamic and automated.
Threat identification offers the greatest potential for generative AI to improve cybersecurity, and this is where industry adoption has been strongest so far.
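To make the alert-filtering use case concrete, the sketch below shows LLM-assisted triage of a single security alert. It is a minimal illustration, not any vendor's implementation: it assumes the OpenAI Python client, and the model choice, prompt, and alert schema are all hypothetical.

```python
# A minimal sketch of LLM-assisted alert triage. Assumes the OpenAI
# Python client; model, prompt, and alert fields are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRIAGE_PROMPT = (
    "You are a SOC analyst assistant. Given a security alert as JSON, "
    "classify it as 'likely_true_positive' or 'likely_false_positive' "
    "and explain your reasoning in one sentence. Respond as JSON with "
    "keys 'verdict' and 'rationale'."
)

def triage_alert(alert: dict) -> dict:
    """Ask the model for a triage verdict on a single alert."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any capable model works
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": json.dumps(alert)},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Example alert: an analyst reviews the verdict rather than acting on it blindly.
alert = {
    "rule": "Multiple failed logins followed by success",
    "user": "svc-backup",
    "source_ip": "10.0.4.17",
    "count": 42,
    "window_minutes": 5,
}
print(triage_alert(alert))
```

In practice, a verdict like this would be used to rank or deprioritize alerts in an analyst's queue, keeping a human in the loop for every consequential decision.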
For the containment, eradication, and recovery stages of the SANS framework, adoption rates among the cybersecurity companies we analyzed range from about half to two-thirds, with containment the furthest along. Here, generative AI is already closing knowledge gaps by giving analysts remediation and recovery instructions based on tactics that proved effective in past incidents. Although automating containment, eradication, and recovery plans would achieve greater gains, full automation is unlikely in the next 5 to 10 years, if ever. The long-term impact of generative AI in these areas will likely be moderate, and some human oversight will still be required.
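The containment-stage capability described above, instructions grounded in past incidents, can be pictured as a simple retrieval step followed by a generation step. The sketch below assumes the OpenAI client for both embeddings and chat; the incident corpus, model names, and prompts are illustrative assumptions.

```python
# A sketch of grounding containment/recovery suggestions in past incidents:
# embed prior write-ups, retrieve the closest matches, and ask the model
# for a remediation checklist. Corpus and models are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

past_incidents = [
    "2022-11: Ransomware via phishing; contained by isolating the VLAN, "
    "revoking tokens, and restoring from offline backups.",
    "2023-02: Credential stuffing on VPN; contained by forcing MFA "
    "re-enrollment and geo-blocking the source network.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

corpus_vecs = embed(past_incidents)

def suggest_remediation(new_incident: str, top_k: int = 1) -> str:
    query_vec = embed([new_incident])[0]
    # Rank past incidents by cosine similarity to the new one.
    sims = corpus_vecs @ query_vec / (
        np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    context = "\n".join(past_incidents[i] for i in np.argsort(sims)[-top_k:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Draft a containment and recovery checklist for the incident, "
                "grounded in the similar past incidents provided. "
                "An analyst will review every step before execution.")},
            {"role": "user", "content": (
                f"Incident: {new_incident}\nSimilar past incidents:\n{context}")},
        ],
    )
    return resp.choices[0].message.content

print(suggest_remediation("Suspicious lateral movement from a backup service account"))
```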
Generative AI is also being used at the lessons-learned stage, where it can automate the creation of incident response reports, improving internal communication. Importantly, those reports can be fed back into the model, thereby improving defenses. For example, Google's Security AI Workbench, powered by the Sec-PaLM 2 LLM, converts raw data from recent attacks into machine-readable and human-readable threat intelligence that can accelerate responses (under human supervision). But while the quality of generative AI-powered incident response reporting should continue to improve, human involvement will likely remain necessary.
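A minimal sketch of the lessons-learned use case follows: turning a structured incident timeline into a draft post-incident report for human review. The field names and model are assumptions for illustration, not a description of any specific product.

```python
# Sketch: generate a draft post-incident report from structured incident
# data. A human reviews and edits the draft before distribution.
import json
from openai import OpenAI

client = OpenAI()

incident = {
    "id": "IR-2023-0142",
    "detected": "2023-06-01T03:12Z",
    "contained": "2023-06-01T05:40Z",
    "vector": "phishing email with credential-harvesting link",
    "scope": ["2 user accounts", "1 file server"],
    "actions": ["reset credentials", "blocked sender domain",
                "restored server from snapshot"],
}

draft = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": (
            "Write a concise post-incident report with sections: Summary, "
            "Timeline, Impact, Remediation, Lessons Learned. Flag any "
            "missing information for the analyst.")},
        {"role": "user", "content": json.dumps(incident)},
    ],
).choices[0].message.content

print(draft)  # reviewed and edited by a human before distribution
```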
A double-edged sword
Of course, generative AI can also serve as a tool for cyberattackers, giving them capabilities similar to those of defenders. For example, less experienced attackers can use it to craft more convincing emails, or more realistic deepfake videos, recordings, and images, to send to phishing targets. Generative AI also lets malicious actors easily rewrite known attack code so that it is just different enough to evade detection.
Generative AI has certainly become a trending topic among malicious actors. Mentions of it on the dark web have proliferated in 2023 (see Figure 2). It's common to see hackers boasting about using ChatGPT. One hacker claimed to have used generative AI to recreate malware strains described in research publications, such as a Python-based stealer that searches for and exfiltrates common file types (.docx, .pdf, images) on a system.
The use of generative AI for nefarious purposes has become an increasingly popular topic on the dark web following the launch of ChatGPT.
The threat from bad actors will only grow as they use generative AI to refine and scale their tactics, techniques, and procedures. Generative AI-assisted dangers include malware strains that evolve on their own, creating variants tailored to a specific target, each with a unique technique, payload, and polymorphic code that existing security measures cannot detect. Only the most agile cybersecurity operations will stay ahead.
Generative AI will advance rapidly, so it is essential that all stakeholders, from cybersecurity providers to the businesses that buy their services, continually update their expertise and strategies to take advantage of it while staying protected.