AI and cybercrime: Are your fish tank and smart TV really secure?

Artificial intelligence will make hacking a company easier, faster and stealthier, and the technology to do it is already widely available.
3 May 2018

Hackers even target connected aquariums these days. Source: Shutterstock

Dave Palmer, Director of Technology at UK-based Darktrace, told the audience at CODEX’s Tech Insights Event in London last week that AI will absolutely be used by criminals, not because it is cool and entertaining but because it will allow them to automate attacks and reach more victims than ever before.

Darktrace is one of the world’s leading cybersecurity firms, founded by former British spies.

The company, which was valued at US$825m last year, claims its AI-powered Enterprise Immune System, inspired by the self-learning intelligence of the human immune system, is the ‘world’s most advanced machine learning technology for cyber defense’.

The firm says it recently foiled a cyber-attack via a futuristic fish tank at an unnamed casino.

Sophisticated as that might sound, Palmer started his talk by saying: “The best way to hack an organisation is still to trick someone who works there.”

He described how he once received an email impersonating a colleague that included details of a private conversation the two had previously had on the street.

A cyber-attacker had earwigged their discussion and used the details to make a spam email convincing enough to fool Palmer into engaging with it. The intruder had also tried to replicate his colleague’s style of writing and signature, but fortunately, small anomalies gave the forgery away.

This is an elaborate example of a criminal trying to fool someone into believing a fake email is a real one. However, with today’s technology there are much easier ways to make spam emails convincing, said Palmer.

“In the future machines could learn to be contextually relevant so someone is more likely to open the attachment,” he said.

The software capability to do this already exists: AI assistants can recognize and replicate natural language.

“Natural language generation and social media bots mean we are not a long way from that automated spam sending,” said Palmer. “If you’re thinking the cybercriminal part of it might be hard, I guarantee you that technology is easily available. That’s where we are in the cybercrime world now; it is a lot more advanced than just competing with a teenager in their bedroom.”
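To see why defenders take this seriously, consider how little code it takes to merge harvested personal context into a message template at scale. The sketch below is purely illustrative and not from Palmer's talk; the names, topics and template are all invented, and a real campaign would pair something like this with scraped social media data and language generation.

```python
# Toy illustration of "contextually relevant" automated spam:
# harvested details are merged into a lure template per target.
# All data here is invented for demonstration purposes.
from string import Template

LURE = Template(
    "Hi $name, following up on our chat about $topic "
    "outside the office, here is the document I mentioned."
)

def personalise(targets):
    """Fill the lure template with scraped context for each target."""
    return [LURE.substitute(t) for t in targets]

# Hypothetical context a scraper or eavesdropper might have gathered.
harvested = [
    {"name": "Dave", "topic": "the vendor contract"},
    {"name": "Alice", "topic": "the Q3 budget"},
]

for message in personalise(harvested):
    print(message)
```

The point is not the template itself but the automation: once context gathering is scripted, every email in the batch carries the kind of private detail that fooled almost nobody when it had to be written by hand.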

Furthermore, Palmer said microphones, cameras and AI assistants like Alexa and Siri provide ample opportunities for cybercriminals.

He described working with a law firm in London whose video conferencing units in the boardroom and major meeting rooms were deliberately attacked.

“Why go after the databases that are well looked after when you can just hack the very easy-to-get-into pieces of office IoT, such as smart TVs or video conferencing units, and listen to what is going on?” said Palmer.

This too can be automated for added ease. By streaming the hacked audio from video conferencing units to cloud services from Google, Amazon or Microsoft, the criminals can get back a paragraph of text summarizing what was discussed in the meeting [a combination of speech recognition and natural language processing].
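The final step of that pipeline can be crude and still useful to an attacker. The sketch below assumes the meeting audio has already been transcribed by a cloud speech-to-text service (that call is omitted); it then produces a short summary with a naive extractive method that simply scores sentences by word frequency. The transcript is invented for illustration.

```python
# Minimal sketch of the summarisation step: given a transcript
# (assumed to come from a speech-to-text service), keep the
# sentences whose words occur most often across the whole text.
from collections import Counter
import re

def summarise(transcript: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    freq = Counter(re.findall(r"[a-z']+", transcript.lower()))
    # Score each sentence by the total corpus frequency of its words.
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    top = set(scored[:n_sentences])
    # Emit the top-scoring sentences in their original order.
    return " ".join(s for s in sentences if s in top)

transcript = (
    "The board discussed the merger terms. Lunch was ordered late. "
    "The merger terms will be finalised before the merger announcement."
)
print(summarise(transcript))
```

Real services produce far better abstractive summaries than this, but even a frequency heuristic surfaces the recurring topic of a meeting, which is exactly the signal an eavesdropper wants.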

“A cybercriminal could do this all over the world because the software will be able to understand more languages than a professional linguist,” added Palmer.

The criminals can also use face recognition software to make sure only the relevant people are in the room before streaming, reducing the risk of getting caught.

“It is very difficult to find out that this attack is taking place. I can guarantee you aren’t running antivirus on your smart TVs,” said Palmer.

AI can also make hacking far stealthier, Palmer said. A visible crisis prompts a response, but to cause greater harm for longer a criminal could quietly interfere with highly valuable data, such as oil and gas operators’ geophysical survey data, “so the drilling and the mining rights are bought in the wrong places”.

He added this could be done more covertly by going after the information supply chain and IoT devices used to collect the data.

“If you can start manipulating the beliefs of an organization by being on the edge of their supply chain, rather than break your way in and affect their databases, you could be very successful,” he added.

This is scary stuff for any business, but according to Palmer, it is no longer only theoretical.

“Is AI going to have an impact? Yes, it is. It is not going to increase the volume of attacks; being on the internet now is like having your doors and windows constantly rattled. But I think when things get inside of your business they will start to be far more damaging than they have in the past,” he concluded.