Cybercriminals use AI to mimic voice of chief executive
Business leaders and staff alike are now becoming aware of the increasingly sophisticated cybercrime threats facing their businesses, and the common forms they take.
A frequent method, particularly among larger businesses, is known as spear phishing: an attacker targets recipients at a specific company with an email purporting to come from a known superior within the same organization.
Comprising 12 percent of corporate cybercrime, spear phishing generally sees the attacker request access to confidential information under this guise, allowing them to compromise an organization’s network with malware or to request a direct transfer of funds to a bank account.
Because these are targeted attacks, attackers do their research, using automation tools and phishing kits to scrape publicly available resources such as LinkedIn, which often makes their emails highly convincing. And given the supposed authority of the request, the attack exploits employees’ pressure to comply without question.
Organizations are responding to the threat by training employees to recognize indications of fraud and imposing hardline policies and protocols for communicating sensitive requests. But in the age of AI-driven fakery, cyberattackers are constantly discovering new ways to catch us off guard.
AI in cybercrime
As reported by Dark Reading, scammers have leveraged AI voice technology to impersonate a business executive, successfully requesting the transfer of hundreds of thousands of dollars of company money to a fraudulent account.
In March this year, an attacker called the CEO of a UK-based energy business, pretending to be the head of its German parent company.
AI software is thought to have been used to imitate the superior’s voice, including a “slight German accent and other qualities the UK CEO recognized,” which led the CEO to believe the request was genuine.
The ‘boss’ requested the urgent transfer of US$243,000 to an alleged supplier in Hungary within an hour. Following a successful transfer, the attackers made further unsuccessful attempts to scam the company.
The attack is thought to be the first detected instance of cybercrime in which a scammer has used AI to impersonate an individual’s voice. The technology could pose a new type of cyberthreat that organizations are, as yet, unprepared to deal with.
“Experts have been predicting that cybercriminals will be leveraging AI to assist in scams,” Javvad Malik, Security Awareness Advocate at KnowBe4, told TechHQ.
“The use of technology to impersonate a chief executive has some scary implications, especially given the fact that it is not inconceivable that coupled with video, the same attack could be played out as a video-call.”
Earlier this year, the potential power of AI fakery came to light in a deepfake video of Mark Zuckerberg, where he discusses the power of holding “billions of people’s stolen data”, and that “whoever controls the data, controls the future.”
At least at present, that technology in the wrong hands would likely be targeted at high-profile individuals, given the sophistication of the technology and volume of data sources required to create a convincing impersonation.
As the software matures and inevitably starts to be sold ‘as a service’, uptake among malicious actors will surely spike in the coming years.
While this emerging threat scales in sophistication, and while cybersecurity organizations develop the technology to help detect AI-driven cybercrime, businesses must implement thorough internal policies and processes to limit the success rate of fraud attempts.
“One employee should not have the ability to independently create a new payee and transfer large sums of money,” said Malik.
“It is in these scenarios where having robust processes and segregation of duties can help immensely, whereby more than one employee is needed to create a new payment, and its legitimacy is validated through pre-approved channels.”
15 January 2021