TikTok acted to quell misinformation on Ukraine

The Chinese-owned platform looks responsible in its efforts to deliver accuracy.
10 February 2023

TikTok – more responsible than it’s given credit for?


The US government has a real issue with TikTok. On the surface, that could be attributed to the increasingly Sinophobic stance of US economic policy (with its ever more hardline anti-Chinese attempts to “rebalance” the semiconductor supply chain), but representatives from both major parties see the social media platform as a threat to US national security, and in December 2022 it was banned from all government-issue smartphones.

On the surface, then, the news that in the summer of 2022, 1,704 TikTok accounts were used as part of a pro-Russian network to spread misinformation, disinformation and anti-Ukraine sentiment – aiming to influence how people viewed the illegal Russian invasion of that country – supports the idea that the US government is right to regard TikTok as a potential threat.

A more complex geopolitics.

Except the accounts were targeted at Germans, Italians and Britons, influencing European (and British) sentiment rather than impacting the US in particular. There’s arguably some sense there, in that European NATO powers would be the most likely to resist an invasion on their relative doorstep, and potentially the fastest to supply Ukraine with actual boots-on-the-ground military support. The accounts were aimed at softening resistance among the general public, rather than among leaders or politicians, so the revelations are less supportive of the US government’s position than they might at first appear.

In a mark of technical sophistication, the accounts used software to spread the pro-Russia, anti-Ukraine propaganda in the local languages of the countries in which they were operating, so as to appeal more easily and directly to that local audience. Depressingly perhaps, the accounts managed to gather more than 133,000 followers before TikTok discovered what was going on.

While it’s important to acknowledge that adversarial influence groups were able to set up over a thousand TikTok accounts and attract at least 133,000 followers for their content, swaying opinion on matters of crucial geopolitical import, it’s also worth noting that in the run-up to events like the UK’s Brexit referendum and the 2016 US election, large numbers of voters were swayed by propaganda on other social media platforms – before TikTok became the force it is today.

Any available channel.

As such, what we learn is that Putin’s Russia will use whatever media exist to spread its propaganda, and that, on the basis of these accounts, TikTok is no especial channel of threat to either national security or social stability.

And then, there’s what happened once TikTok had discovered the misinformation and disinformation accounts.

ByteDance, the company that owns TikTok, set about massive corrective action, specifically to make the information available on the platform more accurate and less prone to the “fake news” of the propaganda accounts. It removed nearly 865,000 fake accounts, which between them had over 18 million followers. Enforcing its policy against impersonation, the company culled nearly 500 accounts based in Poland alone.

That sort of response to growing evidence of the platform being used as a mouthpiece for pro-Russia propaganda, while Russia was in the process of invading a neighbor, hardly fits the profile of a threat to national security.

A trend of responsible governance.

What’s more, the response to the discovery of the propaganda accounts was by no means an isolated event. Recognizing a marked increase in attempts to post political content in support of the Russian invasion in its immediate aftermath, the platform began to block Russian (and, in fairness, Ukrainian) advertisers from targeting political ads at users in other European countries.

Seeing a gap in its response, it also hired native speakers of both Russian and Ukrainian to help moderate the platform’s content for factual accuracy.

And then it began working with Ukrainian reporters to ensure its fact-checking process was as accurate as it could be, and set up a digital literacy program to keep information about the war factual. It also restricted access for media outlets with known links to the Russian government, such as Russia Today and Sputnik.

Between mid-June and mid-December 2022, TikTok reported that it took down more than 36,500 videos, with 183.4 million views across Europe, on the grounds that they infringed the platform’s “harmful misinformation” policy.

The data on TikTok’s response to the use of its platform by pro-Russian propaganda accounts was released in a report so that TikTok could comply with the European Union’s Voluntary Code of Practice on Disinformation – a code that many of the leading social networks have signed up to.

Future plans on evolving technologies.

Nor is TikTok content to rest on whatever laurels it gained from its response. It said that in the coming months it would be updating its policies banning “deceptive synthetic content” – deepfakes, as the rest of the world knows them – particularly in response to the likely wave of generative AIs coming in the wake of ChatGPT. That’s evidence of TikTok trying to get ahead of the next generation of threats and propaganda, as well as dealing with the current one.

None of that, it’s true, will matter to those in the US legislature who see TikTok as a national security threat specifically because ByteDance is based in China, where a legal provision would allow the government to access any data held by the company – should it suddenly decide it wanted to, and could command that access without the world, and particularly the US, protesting loudly about international data security.

But the report of TikTok’s response to the discovery of authoritarian propaganda on its platform at least looks like the sort of response that should be expected of a social media platform trying to act responsibly on its commitment to accurate information and data stewardship in the 2020s.