Metaverse misinformation – a content moderation nightmare?

False and misleading information is already hitting the metaverse - what will content moderation in a virtual world be like?
10 January 2022

Sean Mills, Head of Content at Snap, Inc., takes the stage at the virtual Snap Partner Summit 2020. (Photo by GETTY IMAGES / GETTY IMAGES NORTH AMERICA / Getty Images via AFP)

Misinformation in the metaverse is a rising concern, especially when it comes to content moderation. Industry watchers believe moderating a virtual world will be extremely difficult, given that policing existing social media platforms for misinformation is already a Herculean task.

Rarely has it been clearer than during the COVID-19 pandemic that the lucid digital transmission of information can have life-saving consequences: the speed at which trusted, verifiable information can be disseminated has already saved countless lives.

However, as much as freedom of speech is recognized as a right everyone should have, and as people exercise their right to information, the problem of misinformation creeps in. Fact-checking can even lead to an echo chamber, in which threadbare bits of data are "supported" by similar articles, even though the original information was faulty to begin with.

Social media and content moderation

Social media platforms have spotty histories when it comes to content moderation, though they have recently stepped up their efforts. Twitter, for example, permanently suspended Representative Marjorie Taylor Greene's personal account following her tweet about "extremely high amounts of COVID-19 vaccine deaths" in the US.


Facebook CEO Mark Zuckerberg testifying remotely via videoconference as US Senator Thom Tillis listens during a 2020 Senate Judiciary Committee hearing titled, “Breaking the News: Censorship, Suppression, and the 2020 Election” on Facebook and Twitter’s content moderation practices. (Photo by HANNAH MCKAY / POOL / AFP)

The suspension falls in line with Twitter’s COVID-19 misleading information policy, under which an account that receives five strikes is permanently suspended. In May 2021, Meta’s Facebook also implemented a system that buries a user’s activity further down its news feed if their posts have been investigated by one of the firm’s fact-checkers.

“Whether it’s false or misleading content about Covid-19 and vaccines, climate change, elections, or other topics, we’re making sure fewer people see misinformation on our apps,” the company said. Meta also shared that, as of July 2021, it had removed more than 18 million pieces of coronavirus misinformation since the start of the pandemic.

Are the current countermeasures enough for metaverse misinformation?

Capable as these countermeasures are, critics believe they remain insufficient to curb the spread of misinformation in the metaverse.

A recent case in point is the lawsuit filed by former TikTok content moderator Candie Frazier against the social media platform and its China-based parent company, ByteDance, in which she alleges she “reviewed videos that featured extreme and graphic violence for up to 12 hours a day”.

While the lawsuit focuses primarily on the plaintiff’s mental health, it also highlights just how easily social media platforms can be used to spread not only misinformation, but harmful or extremist content, to a wide audience, essentially on demand.

The Facebook Papers further highlight this vulnerability of social media platforms to the spread of false or damaging information. This does not bode well for the fight against metaverse misinformation, especially given big tech companies’ mostly dismal track record in policing offensive content.

“The Facebook Papers showed that the platform can function almost like a turn-key system for extremist recruiters and the metaverse would make it even easier to perpetrate that violence,” said Karen Kornbluh, director of the German Marshall Fund’s Digital Innovation and Democracy Initiative and former US ambassador to the Organization for Economic Cooperation and Development.

It certainly does not help that Meta’s incoming Chief Technology Officer, Andrew Bosworth, has stated in an interview that stopping misinformation is not Facebook’s problem at all, instead pinning the blame on the individuals who post or share erroneous content.

That alone speaks volumes about the stance the company is likely to take on metaverse misinformation, and about the content moderation nightmare it may well become as the metaverse moves from the realm of fiction and theory closer to reality.