Content moderation and the rise of harmful UGC

Humans are always going to human. Content moderation is there to stop them taking that too far.
22 June 2023

Never read the comments. It’s where anonymous people become their worst selves.


• More harmful UGC demands more – and more complex – content moderation.
• Digital literacy can help parents keep their children safe.
• Companies can do more by engaging a content moderation firm to police their platforms.

The internet is awash with UGC (user-generated content). In a way, that’s not a bug but a feature of the design – comments sections, blogs, vlogs, and the very nature of social media all depend on the continued creation of UGC. But where there are humans free (and relatively anonymous) to wreak havoc with everybody’s day, humans will… human. So content moderation is becoming increasingly vital in the face of a daily tidal wave of increasingly harmful UGC.

We spoke to Alex Popken, VP of Trust and Safety at WebPurify, a content moderation company that combines AI and human moderation in its mix, to find out what’s causing the wave, how it’s showing itself – and what can be done to beat it down.
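For a rough idea of what that AI-plus-human mix can look like in practice, here is a minimal, purely illustrative sketch – not WebPurify’s actual system; the thresholds, the toy scoring function, and all the names are assumptions. The pattern it shows is a common one: an AI model scores each item, clear-cut cases are handled automatically, and anything in the grey zone is routed to a human reviewer.

```python
from dataclasses import dataclass

# Illustrative hybrid moderation pipeline (not WebPurify's actual system).
# An AI model assigns a risk score; clear cases are auto-approved or
# auto-rejected, and the uncertain middle is routed to human reviewers.
# The thresholds and the toy scoring function below are assumptions.

APPROVE_BELOW = 0.2   # scores under this are auto-approved
REJECT_ABOVE = 0.9    # scores over this are auto-rejected


@dataclass
class Decision:
    action: str        # "approve", "reject", or "human_review"
    risk_score: float


def ai_risk_score(text: str) -> float:
    """Stand-in for a trained classifier: counts hits against a tiny term list."""
    flagged_terms = {"scam", "hate", "attack"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)


def moderate(text: str) -> Decision:
    score = ai_risk_score(text)
    if score < APPROVE_BELOW:
        return Decision("approve", score)
    if score > REJECT_ABOVE:
        return Decision("reject", score)
    return Decision("human_review", score)  # humans handle the grey area


print(moderate("What a lovely fluffy bunny"))    # approve, score 0.0
print(moderate("This looks like a scam"))        # human_review, score 0.4
print(moderate("a hate-filled scam attack"))     # reject, score 1.0
```

The split is the whole point: automation absorbs the volume, humans handle the nuance – where exactly the thresholds sit is a product decision, not something a sketch like this can settle.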

THQ:

Can you give us a sense of the scale of the rise in harmful UGC?

AP:

Sure. First of all, any content that’s created and shared by users of online platforms – text, images, video, audio – is UGC. Often, we think about social media as synonymous with UGC, but really, it spans many online industries.

So, the comments section of a blog, restaurant reviews, Amazon reviews, online dating sites, ecommerce platforms, it’s all UGC, and our clients run the gamut. So the scale is significant. Last year alone, there were over 5 billion people online, and newer generations, having grown up with the internet, use it as a place to cultivate communities.

That means we’re seeing a lot more UGC in general.

And then, of course, as you said, humans are going to human and bad actors will exploit platforms and weaponize them to create harmful UGC.

THQ:

Can you define what harmful is, and who is actually being harmed by what? Just to head off the inevitable people who are… due any second… who’ll say “You can’t say anything these days without someone getting offended on the internet!”


AP:

Yeah. Any content that has a negative impact on individuals, communities or even society as a whole is classed as harmful UGC. And as I said, there’s quite the gamut, ranging from copyright infringement through to child sexual abuse material on the obviously illegal side of the spectrum.

We also see things like hate speech and harassment, and that kind of UGC impacts not only the people who have to see it, but also the brands that are hosting the content.

If you run an online platform, and you have a lot of hate speech on there, that is obviously going to have negative implications for your brand, and ultimately, for your bottom line.

THQ:

Unless that’s what your platform’s for – or unless, for instance, you just don’t care about hate speech – you don’t generally want to become known as “the place with all the hate speech.”


Of course, “harmful” is a moveable feast.

AP:

Exactly. That’s when companies turn to content moderation solutions to help them deal with harmful UGC.

In terms of who’s harmed, anyone can be, but in particular, vulnerable populations. Young people who might be exposed to cyberbullying or grooming. Members of marginalized communities – that’s important, because marginalized communities have found great value in online platforms, finding like-minded folks and points of connection.

But often, those communities can also be the target of abuse and harassment through harmful UGC, which is where content moderation is crucial.

But also, society more broadly is increasingly exposed to harmful UGC like disinformation, which sows division and undermines democracy.


We’ve all made the “What did I just see and why?” face online – but women get a lot more horrifying UGC than men, generally.

That’s part of what content moderation can add to the conversation (and the solution) around harmful UGC. No-one is immune to harmful UGC, so everyone should be able to count on content moderation to help protect them and the wider ecosystem.

THQ:

So what’s causing the rise in harmful UGC that content moderation has to deal with? As you say, the overall amount of UGC is rising, and therefore, by definition, the amount of harmful UGC is going to rise with it.

But is there something other than that behind the rise? Or is it just a function of the percentage of overall volume?

AP:

Percentage of volume, and a rise in overall volume, are going to be factors, certainly, but it’s probably more complex than that.

Bad actors want to exploit others online, and they’ve become more sophisticated over time. When we’re fighting fraud, we like to say you build a 15-foot fence, and they figure out how to jump 16 feet.

THQ:

Forgive our devious journalistic brains – we were just thinking we’d dig a five-foot trench instead, but go on.

AP:

Ha. Yes. The point is that we’re seeing the evolution of bad actors who have figured out how they can exploit others online for their own personal gain.


So, for example, fraudsters can build really lucrative operations, phishing for credit card information and padding their own bank accounts. Harmful UGC has become a business for bad actors.

Also, people have realized that the anonymity of being online emboldens them to be maybe their worst selves. So there’s a reckoning with that anonymity in the rise of harmful UGC, and a corresponding need for more content moderation.

And then there’s the rise in new media types. When we first started content moderation of UGC 17 years ago, we were really just text moderators, which is pretty binary, right? Either the text contained harmful content or it didn’t.

THQ:

So it was a fast, easily recognizable thing. That’s clearly harmful, that’s silly but not harmful, that’s a description of a fluffy bunny, next!


Content moderation is not always about the horrifying stuff – sometimes, people just need a time-out.

AP:

Exactly. But then we saw the rise of images, and then video, which adds a three-dimensional element. And now we’ve got audio, and even the metaverse, so the sheer complexity and variety of media forms have also contributed to the rise of harmful UGC – which makes content moderation a much more complex business.

And then of course there are algorithms. Content goes viral, and often that virality is driven by amplification – by the machines powering these platforms. That certainly increases the reach of harmful UGC, and it makes content moderation harder and more complicated. If it’s viral, it’s everywhere.
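To make that text-versus-everything-else contrast concrete, here’s a minimal, purely illustrative sketch – the blocklist, function names, and routing are invented for the example. It shows why text can be screened with a simple binary check, while images, video, and audio have no string to match against and have to be escalated to heavier models or human reviewers.

```python
# Purely illustrative: the blocklist, names, and routing below are invented
# for this example. Text can be screened with a simple binary check; richer
# media types have nothing to string-match and need heavier review.

BLOCKLIST = {"examplethreat", "exampleslur"}  # placeholder terms


def moderate_text(text: str) -> str:
    """The 'binary' case: the text either contains flagged terms or it doesn't."""
    lowered = text.lower()
    return "reject" if any(term in lowered for term in BLOCKLIST) else "approve"


def moderate_item(media_type: str, payload) -> str:
    if media_type == "text":
        return moderate_text(payload)
    # Images, video, audio, metaverse assets: no keywords to match against,
    # so they go to perceptual AI models and, often, human reviewers.
    return "escalate_to_model_or_human"


print(moderate_item("text", "A perfectly nice comment"))   # approve
print(moderate_item("image", b"<raw image bytes>"))         # escalate_to_model_or_human
```

Even at toy scale the difference is visible: the text branch is a yes/no check, while every other format needs another layer of judgment.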


THQ:

So between human bad actors and algorithmic amplification of the harmful UGC, how do we begin to ensure content moderation still works, in a system it would be all too easy to categorize as out of control?

What can companies actually do to minimize their exposure to harmful UGC? What can people do? What can parents do, even?

AP:

There’s a concept known as digital literacy. It means empowering individuals to responsibly engage with digital technologies in a way that allows them to be informed about the risks, and protect themselves.

So, for instance, if you’re on a platform and you don’t know how to make your profile private, how to report problematic content, or even what exposure to harmful UGC looks like, digital literacy is what educates you for your digital life and lets you take advantage of the content moderation tools that are there.

Block the creepy stuff – once you know it’s the creepy stuff.

Similarly, part of digital literacy is learning how to compartmentalize your experience online, so the problematic UGC that you encounter doesn’t traumatize you too much, and you can engage whatever content moderation is in place on the platform.

So digital literacy and education are important first steps when it comes to combating the rise of harmful UGC. That’s especially important for children, who don’t have a fully developed prefrontal cortex, and who are grappling with right and wrong.

It behooves parents to have an open dialogue with their children, to set boundaries around their use of technology, and to talk with them about what it means to see creepy, suspicious content and when to raise their hand and say “This feels weird.” Oftentimes there are parental controls in place to give parents that leverage. But you have to know they’re there, and you have to know how to use them as part of a broader conversation about content and content moderation.

 

In Part 2 of this article, we’ll delve deeper into companies, platforms, and the battle for content moderation in a world of increasing amounts of harmful UGC.