How does TikTok fend off questionable content?

"Technology is critical in helping us build transparency and build trust."
8 December 2020

TikTok has had to fend off numerous complaints over content ranging from the violent to the “immoral”. Source: Shutterstock

As social media platforms evolve in tandem with the shifting mores of socially acceptable behavior, a brighter spotlight is shone on what is permissible to view online and what is considered in good taste. The most recent evolution has seen short-form videos become the most digestible content format, with the preeminent platform delivering that content being TikTok.

From its 2016 launch through to its 2018 merger with Musical.ly, which gave the fledgling app entry to the US market, TikTok has taken its short-video-only content platform to ever-greater heights. More entrenched social media players recognized the shift in user preference toward easily digestible video, with Facebook attempting to launch its TikTok rival Lasso back in 2018.

By February 2019, TikTok had surpassed one billion installs across the App Store and Google Play Store. That same month, it was hit with a US$5.7 million fine, its first, for violating US children’s privacy laws, and on the same day released an update providing a more restricted in-app experience for users under 13.

In March 2020, TikTok’s enormous global popularity contrasted sharply with escalating government scrutiny of its data privacy and content policies, leading the platform to introduce a “transparency center” in the US where experts could examine its moderation practices.

As it topped two billion downloads worldwide, TikTok began introducing dedicated content moderation teams in various markets to monitor for inappropriate videos, such as those depicting violence or nudity, as well as videos that violate local laws.

“Everyone has a role to play, and trust and safety is a shared responsibility,” says Arjun Narayan, TikTok’s Director of Trust and Safety in Asia Pacific (APAC). “We have robust Community Guidelines that clearly outline what users can and cannot do, comprehensive safety features that empower users to report inappropriate or offensive comments, videos or accounts, and content moderation technology.”

Organizations like the World Economic Forum (WEF) have called on social media giants to take a harder stance on combating extremism, political misinformation, and hate speech against different groups. Social platforms have also been criticized by free speech groups for censoring specific content, as when Facebook, at Thailand’s behest, removed content from its platform that was perceived as insulting the Thai monarchy.

With the rise of ‘fake news’ and the oft-conflicting viewpoints on what qualifies as harmful content, digital platforms rely even more on tech-driven solutions to parse, identify, and catalogue content that has been deemed sensitive.
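
TikTok has not published the internals of its moderation stack, but the general pattern such platforms describe, automated triage that scores uploads and routes likely violations to human reviewers, can be illustrated with a minimal, purely hypothetical sketch. The category names, keywords, and thresholds below are illustrative assumptions, not TikTok’s actual rules or systems.

```python
from dataclasses import dataclass

# Hypothetical policy categories and trigger keywords -- illustrative only,
# not TikTok's actual Community Guidelines taxonomy.
KEYWORD_RULES = {
    "violence": {"attack", "gore"},
    "spam": {"free followers", "click my link"},
}

@dataclass
class Flag:
    video_id: str
    category: str
    score: float              # 0.0 (benign) .. 1.0 (almost certainly violating)
    needs_human_review: bool

def classifier_score(caption: str, category: str) -> float:
    """Stand-in for a trained model; here, a crude keyword-density heuristic."""
    hits = sum(1 for kw in KEYWORD_RULES[category] if kw in caption.lower())
    return min(1.0, hits / 2)

def triage(video_id: str, caption: str, review_threshold: float = 0.5) -> list[Flag]:
    """Flag a video for each category whose score is non-zero; escalate above the threshold."""
    flags = []
    for category in KEYWORD_RULES:
        score = classifier_score(caption, category)
        if score > 0:
            flags.append(Flag(video_id, category, score, score >= review_threshold))
    return flags

if __name__ == "__main__":
    for flag in triage("v123", "FREE FOLLOWERS!! click my link now"):
        print(flag)
```

In practice the scoring function would be a machine-learned classifier operating on video, audio, and text signals rather than keywords, but the triage structure of score, threshold, and escalation to human review is the same.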

“No single solution can win in isolation, but there’s no question that technology is critical in helping us build transparency and build trust. As technology advances and new, more sophisticated issues such as deep fakes emerge, content moderation teams and systems need to also get smarter to stay one step ahead,” Narayan elaborated to Tech Wire Asia.

“This means constantly evolving, enhancing and creating technologies that protect the safety and well-being of our TikTok community,” he emphasized. “Over the past year, we have introduced fact-checking programs to help us verify misleading content as well as adding in-app educational PSAs on hashtags related to important topics in the public discourse, such as COVID-19 and harmful conspiracies.”
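
The hashtag PSAs Narayan mentions work, at a high level, by attaching an informational banner to searches and hashtag pages on sensitive topics. A rough illustration of that pattern, with made-up hashtags and banner text (TikTok’s actual topic list and wording are not public), might look like this:

```python
from typing import Optional

# Hypothetical mapping of sensitive hashtags to PSA banner text -- illustrative only.
PSA_BANNERS = {
    "covid19": "Learn the facts about COVID-19 from your local health authority.",
    "coronavirus": "Learn the facts about COVID-19 from your local health authority.",
}

def psa_for_hashtag(hashtag: str) -> Optional[str]:
    """Return the PSA banner to show on a hashtag page, if the topic is flagged as sensitive."""
    return PSA_BANNERS.get(hashtag.lstrip("#").lower())

print(psa_for_hashtag("#COVID19"))         # -> COVID-19 PSA banner text
print(psa_for_hashtag("#dancechallenge"))  # -> None (no PSA for this topic)
```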

Despite repeated data privacy and transparency efforts, TikTok’s fate remains uncertain in some of its larger markets, such as the US and India. But Narayan maintains that the popular app’s content moderation teams can only rely on TikTok’s Community Guidelines while adhering to the local laws of the countries in which they operate.

“This has been our guiding principle regardless of global developments. Our guidelines clearly warn against the uploading or sharing of fake content, defamatory content, spam, intellectual property infringement, among other forms of malicious activity,” he reiterated. “If we find any behaviour that violates these guidelines, we will investigate the case and take proper action.”