Moderation and misinformation – Facebook fires back against abuse claims

22 September 2021

Senator Mike Lee photographs a display at a Senate Judiciary Committee hearing on Facebook and Twitter’s content moderation practices, titled “Breaking the News: Censorship, Suppression, and the 2020 Election,” on Capitol Hill last year. (Photo by HANNAH MCKAY / POOL / AFP)

Facebook on Tuesday fired back at a series of withering Wall Street Journal reports alleging that the company’s moderation efforts have failed to keep users safe, with the social media giant pointing to increased staffing and spending on battling abuses.

The company has been under relentless pressure to guard against becoming a platform where misinformation and hate can spread, while at the same time remaining a forum where people can speak freely. Much of the criticism centers on failures in content moderation and on allegations that the company ignored findings from its own research, accusations Facebook had until now struggled to answer.

A series of recent Wall Street Journal reports said the company knew its Instagram photo-sharing tool was hurting teenage girls’ mental health, and that its moderation system had a double standard allowing VIPs to skirt rules. One of the articles, citing Facebook’s own research, said a 2018 change to its software ended up promoting political outrage and division.

But Facebook said yesterday that it has spent more than US$13 billion over the past five years on teams and technology devoted to fighting abuses on its platforms. Some 40,000 people now work on safety and security for the California-based tech giant, quadruple the number in 2016, according to Facebook.

“How technology companies grapple with complex issues is being heavily scrutinized, and often, without important context,” Facebook contended in a blog post. The social network launched a website, about.facebook.com/progress, to showcase the work it has done to counter abuses.

Facebook vice president of global affairs Nick Clegg also attacked the reporting in a blog post last weekend, saying the series of articles was unfair. “At the heart of this series is an allegation that is just plain false: that Facebook conducts research and then systematically and willfully ignores it if the findings are inconvenient for the company,” he wrote.

Facebook has been accused for years of being too lax in moderating problematic content, including false rumors and conspiracy theories, while quickly stamping out content frowned upon by advertisers, such as pornography. The social giant long insisted it would not position itself as an arbiter of truth, before gradually changing its tune and, in the face of growing outcry from watchdogs and elected officials, succumbing to pressure to monitor its own platform more responsibly.

The Journal stories cited, in part, studies commissioned by the company, which contained disturbing revelations such as: “We make body image issues worse for one in three teen girls.”

Clegg said the stories selectively employed quotes in a way that offered a deliberately lopsided view of the company’s work. “We will continue to ask ourselves the hard questions. And we will continue to improve our products and services as a result,” he said in the closing lines of his post.

Facebook recently launched an effort targeting users who work together on the platform to promote real-world violence or conspiracy theories, beginning by taking down a German network spreading Covid misinformation. The new tool is meant to detect organized, malicious efforts that pose a threat but do not violate the social media giant’s existing rules against hate groups, Facebook’s head of security policy Nathaniel Gleicher said.