Is your voice making you a vulnerable target?

Researchers have discovered ways in which hackers can target your voice.
4 September 2018

With new technologies come new methods of cyber-attacks. Source: Shutterstock

With advancements in technology come weaknesses and vulnerabilities that give hackers an advantage.

And while this may sound like a scene from a science fiction movie, a new form of attack is emerging in the dark world of cybercrime relating to hackers taking aim at the human voice.

Voice hacking is a subset of identity theft — perhaps one of the most damaging forms of cybercrime. It involves stealing audio samples that can be used to gain unauthorized access, verify identity, and use hidden audio commands to target a speech-controlled system.

This capability has been demonstrated by cybersecurity researchers in proof-of-concept attacks.

Earlier this year, a group of academic researchers from Indiana University, the University of Virginia, and the Chinese Academy of Sciences uncovered “voice-squatting”.

This is a new and novel way that hackers are using to snoop in on Google Home and Amazon Echo devices. The team demonstrated that it’s possible to closely mimic legitimate voice commands in order to carry out actions on these smart speakers.

How does voice-squatting work?

Both Alexa and Google Home smart speakers have third-party developer ecosystems that enable coders to build “skills” or applications for the device.

To voice-squat, a hacker has the potential to develop a new, malicious skill that is specifically created to open when the smart-speaker user says certain phrases.

However, these phrases are designed to sound incredibly similar to those used to open legitimate applications. The result of this is that the device would hear the similar phrase and open the rogue app instead, therefore hijacking the device.

“Alexa: is my voice safe from hackers?” Source: Shutterstock

Once access has been gained, the hacker can eavesdrop or record the user’s sessions.

It can “pretend to yield control to another skill (switch) or service (stop), yet continue to operate stealthily to impersonate these targets and get sensitive information from the user,” researchers said in a paper on the discovery.

When contacted by the researchers regarding this alarming discovery, both Amazon and Google said they were working to address the problem.

However they noted that “protecting such [voice user interfaces] is fundamentally challenging, due to the lack of effective means to authenticate the parties involved in the open and nosy voice channel,”.

Worryingly, this isn’t the only way in which hackers are taking advantage of voice biometrics and speech-controlled devices.

“Deepfakes” is a new AI-based technique that is being used by bad actors. It is the development of machine learning software that enables the creation of fake videos.

The audio equivalent of this technique is now making it possible to clone voices for malicious activities.

While the average Joe is unlikely to become a victim of this sort of audio attack, prominent figures such as CEOs, politicians, and celebrities can be targeted more easily.

For instance, business rivals have the potential to create fake recordings of their competitor’s CEO making inflammatory remarks. They could then post this on social media or leak it to the press, causing a significant amount of damage.

Along with every new technology, cybercriminals are finding new and sophisticated ways to perform attacks. And with the increasing rise of speech-controlled devices, individuals and businesses need to actively find ways to protect themselves.