Kaspersky briefing: ChatGPT and the language of cybersecurity

Focusing InfoSec on market verticals improves C-level uptake, but the language of cybersecurity remains problematic. Could ChatGPT help?
7 February 2023

Jargon-busting tech: advanced AI chatbots such as ChatGPT and its competitors could make InfoSec terms easier to navigate. Image credit: Shutterstock Generate.


When you get the opportunity to sit down with senior security researchers, you know that you’re going to learn a thing or two. And today’s briefing with Kaspersky was a great example. Speaking to TechHQ from a London, UK, location familiar to fans of the action film ‘Mission Impossible’, Kaspersky’s cybersecurity experts would have been excused for putting a sensational slant on things. But that’s not how they roll. The team was quick to deflate the hype surrounding the security risks of advanced AI chatbots such as ChatGPT and upcoming competitors. In fact, ChatGPT could be an InfoSec asset if it helps demystify the language of cybersecurity, which still presents a barrier to C-level communications – according to Kaspersky’s latest findings.

Analyzing responses to a survey of 1,800 organizations worldwide, Kaspersky found that cybersecurity attacks topped the list of the biggest perceived risks to business continuity. 57% of respondents placed cybersecurity attacks in the top spot, ahead of economic factors (31%), competitors (31%), and the threat of industrial action (30%). However, when it comes to translating those risks into business actions, security teams can face barriers in championing solutions.

Kaspersky found that while 61% of respondents always made cybersecurity an agenda item, the numbers fell sharply with increasing organizational size. Worryingly, 3% of firms surveyed (over 50 companies) admitted that the issue was only discussed during a crisis. So, what’s holding back the application of InfoSec solutions?

Risk factors

Predictably, given an economic climate where borrowing costs are soaring, budgetary restrictions are impacting security spending. Money allocated to training and cybersecurity awareness programs could be constrained too, if companies keep tightening their belts. But budget cuts can’t take all of the blame for keeping cybersecurity off the agenda. And the Kaspersky survey sheds more light on the issue – “42% [of respondents] believe that jargon and confusing industry terms present one of the biggest barriers to broader management team’s understanding of risk.”

The following list of cybersecurity terms highlights the scope of the problem:

  • TTPs
  • YARA
  • IoC
  • Phishing attacks
  • Zero-day exploit
  • Supply chain attack
  • Nation State attack
  • Suricata rules
  • Ransomware attacks
  • MITRE ATT&CK framework
  • MD5 hash
  • Malware

Based on responses to the Kaspersky research, these are all terms that risk confusing C-suite stakeholders. Effective decision-making doesn’t happen when senior figures inside companies can’t picture the risks involved. And this brings us to ChatGPT, which has been quite the talking point lately.

The truth about the cybersecurity risks of ChatGPT

One of the big benefits of sitting down with subject matter experts is being able to dig into the details, get their perspective on the threats and opportunities of new technologies, and quickly cut through the hype. When it comes to the topic of ChatGPT, Ivan Kwiatkowski – a senior member of the global research and analysis team at Kaspersky Lab – is clear that the hype train has reached its destination: confusion-ville.

To help separate fact from fiction, Kwiatkowski addresses three of the most popular headlines concerning the cybersecurity risks of ChatGPT. On the topic of using ChatGPT to write malware, he points out that bad actors have never struggled to find malware, and adding a chatbot to the list of available sources is unlikely to change much. Phishing emails are another concern highlighted by the media, but arguably Microsoft Word’s capacity to fix typos, correct grammar, and provide other writing suggestions has posed more of a long-term threat.

Lastly, Kwiatkowski tackles the subject of using ChatGPT to identify vulnerabilities in computer code. He points out that the most capable of the GPT-3 language models – which form the basis of OpenAI’s chatbot – has a maximum request size of 4,000 tokens (around 3,000 words). This puts a limit on its scope to find problems in very large programs with tens of thousands of lines of code. That’s not to say that natural language processing models, such as the family of GPT-3 auto-completing algorithms, can’t be applied to software analytics.
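To make that constraint concrete, here’s a minimal sketch (not Kaspersky’s tooling) of how a 4,000-token budget forces a large codebase to be split into chunks before a GPT-3-class model can examine any of it. Token counting uses OpenAI’s tiktoken library; the budget figures are illustrative assumptions.

```python
# A minimal sketch (not Kaspersky's tooling) of why a 4,000-token request
# limit constrains code review: a large source file has to be split into
# line-aligned chunks, so the model never sees the whole program at once.
# Budget figures are illustrative assumptions.
import tiktoken

MAX_TOKENS = 4000          # total request budget for a text-davinci-003-class model
RESERVED_FOR_ANSWER = 500  # leave headroom for the model's reply

def split_into_chunks(source: str,
                      budget: int = MAX_TOKENS - RESERVED_FOR_ANSWER) -> list[str]:
    """Split source code into chunks that each fit within the token budget."""
    enc = tiktoken.encoding_for_model("text-davinci-003")
    chunks, current, used = [], [], 0
    for line in source.splitlines(keepends=True):
        n = len(enc.encode(line))
        if current and used + n > budget:
            chunks.append("".join(current))
            current, used = [], 0
        current.append(line)
        used += n
    if current:
        chunks.append("".join(current))
    return chunks

# A program with tens of thousands of lines ends up as many independent
# chunks, each analyzed in isolation - the scoping limit described above.
```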

Reverse engineering insight

Kwiatkowski has developed a plugin for the IDA disassembler (written in Python and dubbed ‘Gepetto’) that queries OpenAI’s text-davinci-003 language model to speed up reverse-engineering tasks. Natural language processing can help analysts understand the role of different variables and functions, and piece together the various building blocks of a software tool or piece of firmware, to give just a couple of examples.
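For a flavor of how such a plugin works, the sketch below (an illustration only, not Gepetto’s actual source) sends decompiled pseudocode to text-davinci-003 through the completion endpoint that OpenAI’s Python library exposed at the time (openai<1.0). The prompt wording and parameters are assumptions.

```python
# An illustrative sketch of the Gepetto idea (not the plugin's actual source):
# ask a GPT-3 model to explain decompiled pseudocode. Uses the pre-1.0
# openai Python library's completion endpoint; the prompt text and
# parameters are assumptions.
import openai

openai.api_key = "sk-..."  # placeholder

def explain_function(pseudocode: str) -> str:
    """Ask the model what a decompiled function does and how to rename its variables."""
    prompt = (
        "Explain what the following decompiled C function does, then suggest "
        "descriptive names for its variables:\n\n" + pseudocode
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=512,
        temperature=0.0,  # deterministic output suits analysis work
    )
    return response["choices"][0]["text"].strip()
```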

Making headlines out of the cybersecurity risks of ChatGPT could distract from the useful features of large language models. Fed with the right prompts, AI chatbots can be very effective in helping time-pressed executives quickly navigate the language of cybersecurity. Google’s scramble to launch its own chatbot points to the impressive custom search capabilities of ChatGPT. You can ask OpenAI’s chatbot to provide a sentence, a paragraph, or a 1,000-word response, depending on the detail required and how much time you have to digest the information. And this is just the beginning.
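To illustrate that length control, the hypothetical snippet below asks the same era-appropriate completion endpoint to explain a single term at three levels of detail. The term and prompt phrasing are examples, not taken from the briefing.

```python
# A hedged illustration of steering response length via the prompt.
# Term, prompt wording, and parameters are illustrative assumptions.
import openai

openai.api_key = "sk-..."  # placeholder

TERM = "zero-day exploit"
for length in ("one sentence", "one paragraph", "roughly 1,000 words"):
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Explain the cybersecurity term '{TERM}' to a company board in {length}.",
        max_tokens=1500,  # headroom for the longest answer
    )
    print(f"--- {length} ---")
    print(response["choices"][0]["text"].strip())
```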

As Kwiatkowski notes, ChatGPT is a jack of all trades. In its wake will come much more specialized chatbots that are likely to bring profound changes to the way we all work – C-level executives included. And that might be the headline to focus on, rather than the cybersecurity risks of ChatGPT.