Generative AI makes phishing emails harder to detect

The threat that AI poses to cybersecurity will only grow as generative AI becomes more sophisticated and accessible.
5 April 2023

The rise of generative AI has heightened concerns about its potential use in cybercrime and underscored the importance of cybersecurity.

Capita, the UK’s largest outsourcing company, confirmed that a cyber incident on Friday led to an IT outage, with staff locked out of their accounts. The company said that the incident affected internal Microsoft 365 applications, but didn’t disclose any more information about the nature of the issue.

Justifying the worries of many, novel social engineering attacks have risen by 135% alongside the uptake of generative AI. That is according to researchers at Darktrace, whose findings from the first two months of 2023 also showed that 82% of employees are worried that hackers will create scam emails using generative AI.

Although the Capita incident has not been confirmed as an AI-based attack, the implications of an IT issue are clear: the company’s share price has dropped 3% since Friday. Cybercrime causes financial damage even when a bad actor isn’t directly profiting.

Darktrace said that email attacks had targeted thousands of its customers, an increase that matches the adoption rate of ChatGPT. The attacks make use of “sophisticated linguistic techniques”, varying text volume, sentence length and punctuation, meaning the usual grammatical errors no longer flag an email as a phishing attempt.
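Darktrace has not published the details of its detection methods, but the general idea of profiling an email’s text volume, sentence length and punctuation can be illustrated with a minimal Python sketch. Everything below, from the feature names to the baseline values, is hypothetical and for illustration only:

```python
import re
import string

def linguistic_features(email_body: str) -> dict:
    """Compute the simple stylistic signals the article mentions:
    text volume, average sentence length and punctuation density."""
    sentences = [s for s in re.split(r"[.!?]+\s*", email_body) if s.strip()]
    words = email_body.split()
    punctuation = sum(ch in string.punctuation for ch in email_body)
    return {
        "char_count": len(email_body),                       # text volume
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "punct_density": punctuation / max(len(email_body), 1),
    }

# Hypothetical usage: compare a new message against a sender's
# historical baseline and flag large stylistic drift for review.
baseline = {"avg_sentence_len": 14.0, "punct_density": 0.03}
message = "Please review the attached invoice immediately; payment is overdue."
features = linguistic_features(message)
drift = abs(features["avg_sentence_len"] - baseline["avg_sentence_len"])
print(features, f"sentence-length drift vs baseline: {drift:.1f}")
```

Real products presumably train models on far richer signals per sender and per organization; the point is simply that writing style, rather than spelling mistakes, becomes the tell.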

Max Heinemeyer, chief product officer at Darktrace, said that “defenders are up against sophisticated generative AI attacks and entirely novel scams that use techniques and reference topics that we have never seen before.”

Asked what the usual characteristics of a phishing email are, 68% of respondents pointed to an invitation to click a link or open an attachment. A further 61% identified an unknown sender or unexpected content as warning signs, alongside poor spelling and grammar.

AI cybersecurity to target AI threats

In the last six months, 70% of employees reported an increase in the frequency of scam emails. However, 79% also said that their organization’s spam filters prevent legitimate emails from entering their inboxes.

According to Heinemeyer, the onus shouldn’t be on humans to identify scam emails anymore, as generative AI evolves to create more complex ruses. It might seem contradictory, but perhaps artificial intelligence will be the answer to the problem it has created.

Beyond generative AI, the threat of AI-driven malware has been conceptualized for years. The idea that malware could install itself, analyze its host environment and adapt its payload to exploit that host most effectively has long been floated. In reality, though, incidents of this type of attack have been few and far between. It could be that a similar trend emerges for generative AI.

Another worry that has developed alongside AI’s advances is the use of deepfake videos in phishing. Theoretically, a CEO’s likeness could be used to create a video instructing the finance department to move funds into an account that bad actors have access to.

Something of this scale has yet to be reported; however, headlines this week have exposed the use of AI voice generators in “imposter scams”. Ars Technica reported that Microsoft’s VALL-E text-to-speech AI needs only three seconds of audio to simulate a voice accurately. Stories are beginning to emerge of people receiving calls that extort money, seemingly for a loved one’s benefit.

With the best of intentions, we wonder how many people lead lives that would make a family member’s kidnapping for ransom believable. However, given the potential repercussions of email-based scams, it’s important that businesses and IT teams stay on top of the latest developments in AI that could be exploited by bad actors.