Europol: Deepfake technology a mounting threat

In Europe, the use of deepfake technology is gaining ground in the criminal underworld, according to a report by Europol.
29 April 2022

When deepfake technology first gained popularity a few years ago, many were amazed by its capabilities. Despite concerns about the abuse of deepfakes, several mobile apps were soon offering deepfake animation services to users who wished to use the technology for fun.

There is no denying that deepfake technology can be used for the wrong reasons; indeed, almost any technology can be abused to cause problems for consumers and organizations alike. In the case of deepfakes, several fabricated videos have been edited to impersonate prominent figures, usually politicians.

The technology leverages advances in deep learning to manipulate audio and audio-visual content. Modern deepfake systems apply neural networks which, paired with the availability of large databases of material on which to train generative models, have enabled rapid improvements in the quality of synthetic media.
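For readers curious about the underlying mechanics, the generative models referred to here are typically trained adversarially. The toy sketch below (Python with PyTorch; the dimensions and placeholder data are illustrative assumptions, not drawn from any real deepfake system) shows the basic generator-versus-discriminator loop that deepfake tools scale up with face- and voice-specific architectures and far larger training databases.

```python
# Minimal, illustrative GAN sketch: a generator learns to mimic a data
# distribution while a discriminator learns to tell real samples from fakes.
# All sizes and the random "real" data below are placeholders for illustration.
import torch
import torch.nn as nn

LATENT_DIM = 16   # size of the random noise vector fed to the generator
DATA_DIM = 64     # stand-in for a flattened image or audio feature vector

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),  # raw score: real vs. fake
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(32, DATA_DIM)      # placeholder for real training data
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # Discriminator update: push real scores toward 1, fake scores toward 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator into scoring fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The larger the database of real material available for training, the better the generator gets at producing convincing output, which is why the report ties the rise in deepfake quality to the public availability of large image and video collections.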

Last month, a deepfaked video of Ukrainian President Volodymyr Zelensky telling his soldiers to surrender to the Russian invasion made its way to social media before being removed. Another recent example is a video of former US President Donald Trump seemingly signing up for Russia’s version of YouTube.

In the US, the Pentagon, through the Defense Advanced Research Projects Agency (DARPA), is working with several of the country’s biggest research institutions to get ahead of deepfakes.

Over in Europe, the use of such manufactured media is gaining ground in the criminal underworld, according to a report by Europol. The policing agency warned that deepfake technology should be treated as a priority threat, especially since it can make people appear to say or do things they never did, and the resulting content can be distributed online where its veracity is hard to pin down.

Enterprises not spared from deepfake technology

The Facing Reality? Law enforcement and the challenge of deepfakes report, published by the Europol Innovation Lab, includes several contemporary examples showing the potential use of deepfakes in serious crimes. These include CEO fraud, evidence tampering, and the production of non-consensual pornography.

Advances in artificial intelligence and the public availability of large image and video databases mean that the volume and quality of deepfake content are increasing, facilitating the proliferation of crimes that harness the technology. The report highlights that law enforcement agencies need to be aware of the impact on future police work.

For businesses, deepfake technology can be used for malicious purposes, especially disinformation and document fraud. With disinformation, deepfakes can be used to spread false information to the public; for example, a threat actor could create a deepfake that makes it appear that a company’s executive engaged in a controversial or illegal act.

Deepfake as a service

Just as ransomware-as-a-service continues to see global demand, the report states that deepfakes-as-a-service are increasingly sought after, with some buyers willing to pay US$16,000 for the service.

The concern is that the technology and its capabilities are becoming more accessible to the masses through deepfake-producing apps and websites. Europol’s report even notes that there are specialized marketplaces where users or potential buyers can post requests for deepfake videos (for example, requests for non-consensual pornography).

“Those who know how to leverage sophisticated AI can perform the service for others, enabling threat actors to manipulate a person’s face and/or voice without understanding the intricacies behind how it works. Then they can conduct advanced social engineering attacks on unsuspecting victims, with the aim to make a sizable profit. Platforms offering these kinds of services have already started to emerge,” stated the report.

With use of the technology only increasing, the report concluded that law enforcement could struggle to deal with the fallout. While tech companies are already doing their part to suppress deepfakes through policies that remove or ban such content, the reality is that it remains widely available and will continue to be a problem for the foreseeable future.