How hackers are weaponizing artificial intelligence

From OCR to self-learning malware, hackers are now leaning on AI to bypass security systems.
9 September 2020

In the wrong hands, AI is proving dangerous.

  • Cybercrime is a lucrative activity and one that’s getting easier to enter
  • Threats are becoming more widespread and sophisticated, and attackers are increasingly leaning on AI to bypass security systems

In an age where everything is becoming connected and data is regarded as a business’s most valuable commodity, cybersecurity continues to diversify in a hyper-competitive marketplace. 

Set to be worth US$248 billion by 2023, the sector owes its prosperity to the constant growth and mutation of cyberthreats, which every year demand higher-caliber weaponry with either better precision or a wider spread. 

Cybercrime, today, is where the money is. The tools to enact it are widely available even to non-technical individuals. Anyone can get their hands on exploit kits of varying levels of sophistication, starting from a couple of hundred bucks, right up to tens of thousands. 

A report by Business Insider revealed that a hacker seeding ransomware this way could make around US$84,000 a month on average.

This is both a massively lucrative and ‘accessible’ activity, so it’s certainly not going to subside. It’s predicted that, in the future, all our connected devices will come under constant attack, with cyberattacks becoming harder to detect and ever more sophisticated. 

The risks for businesses, of course, include serious damage through information loss, revenue loss, and a potential end to business operations, if not a crippling fine, injury, or even loss of life.

The cybersecurity market will continue to grow as a result, with vendors offering an expansive and sophisticated arsenal. At the same time, these companies and their customers will be locked in a constant race, with their defenses only as good as the next iteration of malware. 

On both sides of this war, emerging technologies will continue to play a key role, and artificial intelligence (AI) is no exception. 

Cybercriminals can take AI designed for legitimate use cases and adapt it to illegal schemes. Readers will be familiar with CAPTCHA, a tool that has defended against credential stuffing for decades by challenging non-human bots to read distorted text. As far back as a couple of years ago, however, a Google study found that machine learning-based optical character recognition (OCR) technology could solve 99.8% of these challenges. 
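To see how little effort this takes with generic tooling, here is a minimal sketch; Tesseract and the Pillow/pytesseract packages are assumed to be installed, and ‘captcha.png’ and the binarisation threshold are placeholders rather than details from the Google study:

    # Minimal sketch: point off-the-shelf OCR at a distorted-text CAPTCHA.
    # 'captcha.png' and the threshold of 140 are illustrative assumptions.
    from PIL import Image, ImageFilter  # pip install pillow
    import pytesseract                  # pip install pytesseract

    image = Image.open("captcha.png").convert("L")      # greyscale
    image = image.filter(ImageFilter.MedianFilter(3))   # soften line/dot noise
    image = image.point(lambda px: 255 if px > 140 else 0)  # binarise glyphs

    guess = pytesseract.image_to_string(
        image,
        config="--psm 8 -c tessedit_char_whitelist=abcdefghijklmnopqrstuvwxyz0123456789",
    ).strip()
    print("OCR guess:", guess)

Purpose-trained models of the kind evaluated in the Google study go well beyond what generic OCR manages, which is what pushes solve rates toward the 99.8% figure.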

Criminals are also using AI to crack passwords faster. Brute-force attacks can be sped up using deep learning: researchers have fed purpose-built neural networks tens of millions of leaked passwords and asked them to generate hundreds of millions of new candidates, which in one trial turned out a 26% success rate.
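The cited research used deep generative networks trained on breach corpora; as a far simpler stand-in for the same idea, the sketch below learns character patterns from a hypothetical leaked wordlist (‘leaked.txt’) with a small Markov model and samples fresh candidate guesses:

    # Toy stand-in for the research above: not a neural network, but the same
    # principle of learning structure from leaked passwords and sampling new
    # candidates. 'leaked.txt' is a hypothetical one-password-per-line file.
    import random
    from collections import defaultdict

    ORDER = 3  # characters of context; real models learn far longer-range patterns

    def train(passwords):
        model = defaultdict(list)
        for pw in passwords:
            padded = "^" * ORDER + pw + "$"   # ^ = start padding, $ = end marker
            for i in range(len(padded) - ORDER):
                model[padded[i:i + ORDER]].append(padded[i + ORDER])
        return model

    def sample(model, max_len=20):
        context, out = "^" * ORDER, []
        while len(out) < max_len:
            nxt = random.choice(model[context])
            if nxt == "$":
                break
            out.append(nxt)
            context = context[1:] + nxt
        return "".join(out)

    with open("leaked.txt", encoding="utf-8", errors="ignore") as fh:
        corpus = [line.strip() for line in fh if line.strip()]

    model = train(corpus)
    candidates = {sample(model) for _ in range(100_000)}  # de-duplicated guesses
    print(list(candidates)[:10])

A real attack would feed such candidates into a cracking tool; the point is only that learned structure beats blind brute force.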

Across the black market of cybercriminal tools and services, AI can be used to make operations more efficient and profitable. As well as identifying targets for attacks, cybercriminals can launch and halt attacks involving millions of transactions in just minutes, owing to fully automated infrastructure.

According to Malwarebytes’ paper When Artificial Intelligence Goes Awry, AI technology could soon bring us into the unwelcome age of ‘malware 2.0’. While there are currently no examples of AI-powered malware ‘in the wild’, if the technology opened new avenues for profit, “threat actors will be standing in line to buy kits on the dark market or use GitHub open source […]”

The biggest concern regarding AI’s use in malware is that new strains could learn from detection events. If a strain of malware were able to determine what caused its detection, it could avoid that behavior or characteristic the next time around. If a worm’s code was the reason for its compromise, for example, automated malware authors could rescript it. If attributes of its behavior caused its detection, randomness could be added to foil pattern-matching rules.
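None of this requires exotic machinery. As a benign, conceptual toy (synthetic data, a mock detector, no real attack tooling), the sketch below shows the feedback loop in miniature: a flagged sample sheds whichever behavior contributes most to its detection score until the detector stops firing:

    # Benign conceptual toy: a mock detector is trained on synthetic behaviour
    # features (1 = behaviour present), then a flagged sample is mutated one
    # feature at a time until the detector no longer fires.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 8))      # 200 samples, 8 toy behaviours
    y = (X[:, 0] & X[:, 3]).astype(int)        # toy rule: 0 AND 3 => flagged
    detector = LogisticRegression().fit(X, y)

    sample = np.ones(8, dtype=int)             # starts with every behaviour on
    for _ in range(sample.size):
        if detector.predict([sample])[0] == 0:
            break                              # detection no longer triggers
        # Drop whichever remaining behaviour contributes most to the score
        contributions = detector.coef_[0] * sample
        sample[np.argmax(contributions)] = 0
    print("evading feature vector:", sample)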

The use of AI could also enhance a technique already employed by certain Trojan variants: creating new versions of their own files to fool detection routines.
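The reason this defeats signature matching is simple to demonstrate: changing even a single byte of a file gives it a completely different cryptographic hash, so a blocklist of known-bad digests no longer matches. The filenames below are placeholders:

    # Benign illustration of why self-rewriting defeats hash-based signatures:
    # appending one random byte to a copy of a file yields a new SHA-256 digest.
    import hashlib
    import secrets

    def sha256(path):
        with open(path, "rb") as fh:
            return hashlib.sha256(fh.read()).hexdigest()

    original = "sample.bin"                  # placeholder standing in for a payload
    with open(original, "wb") as fh:
        fh.write(b"payload bytes")

    variant = "sample_variant.bin"
    with open(original, "rb") as fh, open(variant, "wb") as out:
        out.write(fh.read() + secrets.token_bytes(1))  # one-byte mutation

    print(sha256(original))   # known-"bad" signature
    print(sha256(variant))    # entirely different digest, misses the blocklist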

Faced with this fast-moving and evolving threat, cybersecurity will increasingly leverage the power of AI itself. 

Advanced antivirus tools can leverage machine learning to identify programs exhibiting unusual behavior, scan emails for signs of phishing attempts, and automate the analysis of system and network data for continuous monitoring. 
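As an illustration of the anomaly-detection side of this, the sketch below fits scikit-learn’s IsolationForest to baseline process telemetry and flags departures from it; the feature set (CPU usage, outbound connections, files written) is an assumption for the example, not any vendor’s actual schema:

    # Minimal sketch of ML-assisted monitoring: learn a baseline of normal
    # process telemetry, then flag observations that fall outside it.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    # Baseline telemetry: [cpu_percent, outbound_connections, files_written]
    normal = np.column_stack([
        rng.normal(10, 3, 500),   # modest CPU usage
        rng.poisson(2, 500),      # a couple of network connections
        rng.poisson(5, 500),      # routine file writes
    ])
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # New observations: one routine process, one behaving like ransomware
    observed = np.array([[11, 2, 4], [85, 40, 900]])
    flags = model.predict(observed)   # +1 = normal, -1 = anomalous
    for row, flag in zip(observed, flags):
        print(row, "ANOMALY" if flag == -1 else "ok")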

Given that the cybersecurity industry is facing a widening skills gap, we can reasonably expect investment in ‘intelligent’ cybersecurity systems to be its next course of action.