AI for cybersecurity: Friend or foe?

Many security firms are now using machine learning to strengthen their defense against hackers. But what are the risks involved?
20 August 2018


Artificial intelligence (AI) has emerged as one of the key technologies in today’s digital age, revolutionizing many industries, from customer service to autonomous vehicles.

The technology is also becoming a key weapon in the fight against cybercriminals.

The rise of data breaches has become a critical and concerning issue. In recent years we have seen a tsunami of cyber attacks around the globe, inflicting great costs on businesses and customers alike.

According to a report titled “Economic Impact of Cybercrime – No Slowing Down,” by McAfee and the Center for Strategic and International Studies (CSIS), cybercrime cost the world between US$445 billion and US$608 billion in 2017.

This concern is only heightened by the fast-growing number of IoT devices and the current shortage of skilled cybersecurity professionals.

Could artificial intelligence and machine learning lend a helping hand?

Many security companies are now harnessing the power of machine learning and artificial intelligence in their processes. The technology is being used to help automate the detection of malware on a network, guide incident response, and detect intrusions before they even start.

Smarter autonomous security systems are being developed that use AI algorithms to identify abnormalities in behavior.

Using “supervised learning”, firms select and label the data sets on which the algorithms are then trained. For example, tagging one set of code samples as dangerous malware and another as clean.

Malware detection works well in this way because millions of labeled samples, both malware and benign applications, are available. This provides a vast amount of training data to teach the algorithms right from wrong.
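As a minimal sketch of how such supervised training works (the feature vectors, labels, and nearest-centroid model below are invented for illustration and are not any vendor's actual pipeline):

```python
# Toy supervised malware classifier. Each sample is a hypothetical
# feature vector, e.g. [suspicious API calls, entropy, packed sections];
# labels are 1 = malware, 0 = clean. All values are made up.

def train_centroids(samples, labels):
    """Compute the mean feature vector (centroid) for each label."""
    centroids = {}
    for cls in set(labels):
        rows = [s for s, l in zip(samples, labels) if l == cls]
        centroids[cls] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def classify(centroids, sample):
    """Assign the sample to the label whose centroid is nearest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist(centroids[c], sample))

# Labeled training set: [suspicious_calls, entropy, packed_sections]
X = [[12, 7.8, 3], [15, 7.5, 4], [1, 4.2, 0], [0, 3.9, 0]]
y = [1, 1, 0, 0]  # 1 = malware, 0 = clean

model = train_centroids(X, y)
print(classify(model, [14, 7.6, 2]))  # → 1 (looks like the malware samples)
print(classify(model, [0, 4.0, 0]))   # → 0 (looks benign)
```

Real products use far richer features and models, but the principle is the same: labeled examples define what "dangerous" and "clean" look like, and new samples are judged against that learned boundary.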

Despite the growing use of AI and machine learning in cybersecurity, there is a concern that not enough attention is paid to the risks associated with relying too heavily on these emerging technologies.

“What’s happening is a little concerning, and in some cases even dangerous,” warned Raffael Marty, vice president of corporate strategy at security firm Forcepoint, at the Black Hat cybersecurity conference in Las Vegas.

Are security companies merely riding the AI-hype?

Is AI in cybersecurity being over-hyped by companies with super-sized marketing budgets? Source: Shutterstock

A primary concern is that many security firms are rolling out machine-learning-based products simply to satisfy the customer-hype surrounding the technology. Thus, companies may overlook ways in which the machine-learning algorithms have the potential to create a false sense of security.

In the industry, there is the risk that companies, in their rush to bring a product to market, are using training data that hasn't been thoroughly vetted. This could lead to the algorithm missing an attack.

The risk of data poisoning

Another concern is that if a cybercriminal gains access to a security firm's system, they could have the power to corrupt data by switching labels, tagging malware examples as clean code and vice versa.

If a hacker can determine how an algorithm is set up, or where it grabs its training data from, they can find a way to feed misleading data into the system, building a false picture of which content is legitimate and which is malicious.
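A minimal sketch of this label-flipping attack (the sample hashes, labels, and lookup "model" below are hypothetical stand-ins, chosen to keep the example self-contained):

```python
# Toy illustration of training-data poisoning by label flipping.
# The "model" here simply memorizes the label for each sample hash,
# a stand-in for a real training step. Hashes and labels are invented.

def train(dataset):
    """Learn (memorize) the label attached to each sample hash."""
    return {sample_hash: label for sample_hash, label in dataset}

training_set = [
    ("a1f3", "malware"),
    ("b2e7", "malware"),
    ("c9d0", "clean"),
]

# An attacker with write access to the training set flips one label,
# tagging a known malware sample as clean.
poisoned_set = [
    (h, "clean" if h == "a1f3" else lbl) for h, lbl in training_set
]

honest_model = train(training_set)
poisoned_model = train(poisoned_set)

print(honest_model["a1f3"])    # → malware
print(poisoned_model["a1f3"])  # → clean: the poisoned model now waves it through
```

The same corruption in a statistical model is subtler: flipped labels shift the learned decision boundary rather than a single lookup entry, which is what makes poisoning hard to spot.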

Machine-learning: a weapon also used by hackers

While many security firms have been using machine-learning to better anticipate and respond to attacks, it is also incredibly likely that hackers are using this very same tech to launch bigger, more complex attacks.

Machine-learning technology has proven remarkably skillful at crafting convincing fake messages. This capability greatly increases the number of phishing attacks that a single hacker can carry out.

It seems hackers are also unlocking the power of AI in accelerating their efforts. Source: Pixabay

Researchers from security firm ZeroFOX have also demonstrated a bot that could tailor clickbait for phishing attacks.

Using a machine learning system that scans an individual's past tweets and hashtags, the bot generates personalized, targeted messages containing infected links. The team managed to fool over two-thirds of the recipients into clicking the link.
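The profiling step such a bot relies on can be sketched in a few lines: mining a user's past posts for their most-used hashtags, which an attacker could then fold into a tailored lure. The tweets below are invented, and this deliberately shows only the benign topic-extraction step, not message generation:

```python
# Sketch of topic profiling from a user's past tweets: count hashtag
# frequency to find the subjects a personalized lure should mention.
from collections import Counter
import re

tweets = [
    "Great run this morning #fitness #running",
    "New PB on the 10k! #running #goals",
    "Race day tomorrow #running",
]

# Extract every hashtag and tally how often each appears.
hashtags = Counter(
    tag.lower() for t in tweets for tag in re.findall(r"#(\w+)", t)
)
print(hashtags.most_common(1))  # → [('running', 3)]
```

The ZeroFOX system paired this kind of profile with generated text and a shortened malicious link, which is why its click-through rate was so high: the message genuinely matched the target's interests.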

It is clear that there are many blind spots in the use of machine learning in cybersecurity. It is fair to say, though, that once these blind spots are reduced, machine learning shows real promise in helping cybersecurity professionals manage the growing number of attacks.

Perhaps the greatest challenge is keeping expectations in check amongst the overwhelming hype…