Will AI and 5G advances mean a new world of cybersecurity horror?

The technologies will pave the way for amazing new applications. But will they also prise open new vulnerabilities?
28 October 2019

5G cellular repeaters on a pole. Source: Shutterstock

Artificial intelligence (AI) and 5G will be the perfect data marriage. 

On the one hand, AI and machine learning thrive and accelerate on massive amounts of data; on the other, the next-gen, high-speed cellular technology will generate exactly that.

The powerful combination will enable new applications in autonomous driving, augmented and virtual reality, and IoT-powered smart cities, and could even pave the way for the tactile internet.

But while future-gazers are quick to get excited about the possibilities, far as they are from our grasp right now, cybersecurity experts are equally concerned about the growing risks this data-heavy duo will bring with it.

UK cybersecurity company Information Risk Management (IRM), part of Altran Group, surveyed cybersecurity and risk management decision-makers at 50 global companies. It found that an “overwhelming majority” believed the age of 5G-enabled AI would bring new and heightened cybersecurity risks to their organizations.

The respondents represented major industry sectors, including automotive, communications, energy, finance and the public sector, software/internet, transport, and pharmaceuticals; 83 percent of these cybersecurity leaders said they were troubled by risks associated with the technological advance.

The findings formed part of IRM’s Risky Business report. In it, Altran Group’s Shamik Mishra said 5G will produce a larger attack surface as more distributed network data centers get deployed.  

“The vulnerabilities in 5G appear to go beyond wireless, introducing risks around virtualized and cloud-native infrastructure.” 

The report noted that in order to drive 5G deployment, a secure infrastructure strategy is vital, but ‘white box’ hardware will be critical to lowering the total cost of ownership. 

“It’s not known whether such hardware has the right security solutions, so implementing device security practices will be critical to making this model work,” read the report.

At the same time, the third-party application providers required for the edge computing and 5G network clouds that will execute 5G use cases will "automatically" mean more vulnerabilities are brought into the fray, whether via rogue applications or vulnerable, unsecured software.

Meanwhile, cybersecurity professionals lauded AI's potential to enhance defenses, particularly in network intrusion detection and prevention (where AI learns normal behavior from the data given to it and flags abnormalities), fraud detection, and secure user authentication.

However, AI used in this way can raise false alarms or miss subtle abnormalities, and without suitable context or explanation, its alerts may be difficult for an analyst to act on.
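The detection approach described above can be illustrated with a minimal sketch. The report names no specific tooling, so the choice of scikit-learn's IsolationForest, along with the flow features and values below, is purely an assumption: the model is trained only on traffic assumed to be normal, then asked to flag flows that deviate from that baseline.

```python
# Minimal anomaly-detection sketch: learn "normal" network-flow behaviour,
# then flag deviations. Feature names and numbers are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical flow features: [bytes_sent, packets, connection_duration_s]
normal_traffic = rng.normal(loc=[500, 40, 2.0], scale=[50, 5, 0.3], size=(1000, 3))

# Train only on traffic assumed to be benign ("normal behaviour").
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# New observations: one typical flow and one exfiltration-like outlier.
new_flows = np.array([[510, 42, 2.1],       # looks normal
                      [50000, 400, 30.0]])  # abnormally large and long-lived
labels = detector.predict(new_flows)        # +1 = normal, -1 = anomaly
for flow, label in zip(new_flows, labels):
    print(flow, "anomaly" if label == -1 else "normal")
```

Note that this is exactly where the false-alarm problem arises: a flow flagged as an anomaly carries no explanation of why it was flagged, which is the context an analyst would still have to supply.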

The use of AI in business also introduces new privacy considerations. Information can be extracted from deep learning systems through specially crafted queries, so systems with a public interface or API access are potentially vulnerable in a way that a rule-based system is not.

In addition, deep learning models have a tendency to inadvertently memorize facts from their training data, and this in itself can become a privacy or IP risk that needs to be addressed through careful design and monitoring.
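As a hedged illustration of why query access plus memorization can leak information, the sketch below (an assumption, not something described in the report) deliberately over-fits a small model so that it effectively memorizes its training records, then compares the confidence of its answers on those records against unseen ones; that gap is the kind of signal a query-based, membership-inference-style attacker could exploit.

```python
# Hedged sketch of the leakage mechanism: an over-fitted model tends to answer
# queries about memorized training records with higher confidence than about
# unseen records. Data and model choice are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Small, noisy dataset: labels are essentially random, so high training
# confidence can only come from memorization, not from a learnable pattern.
X_members = rng.normal(size=(60, 10))
y_members = rng.integers(0, 2, size=60)
X_outsiders = rng.normal(size=(60, 10))

# Deliberately over-fit a high-capacity model on the small training set.
model = MLPClassifier(hidden_layer_sizes=(128,), max_iter=3000, random_state=0)
model.fit(X_members, y_members)

member_conf = model.predict_proba(X_members).max(axis=1).mean()
outsider_conf = model.predict_proba(X_outsiders).max(axis=1).mean()

# The gap between these averages is the signal an attacker with only
# query (API) access could use to guess training-set membership.
print(f"mean confidence on training records: {member_conf:.2f}")
print(f"mean confidence on unseen records:   {outsider_conf:.2f}")
```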

The report said AI in cybersecurity was a “double-edged sword.”

“It can provide many companies with the tools to detect fraudulent activity on bank accounts, for example, but it is inevitably a tool being used by cybercriminals to carry out even more sophisticated attacks.”

Earlier this year, for example, what is thought to be one of the first AI cybercrime heists took place, with scammers leveraging AI-based voice mimicry to impersonate a business executive and successfully request the transfer of hundreds of thousands of dollars of company money to a fraudulent account.