Twitter and Facebook make serious efforts to improve

It seems like our favorite social media platforms are finally taking safety, security, and user experience seriously.
27 June 2018

Twitter CEO Jack Dorsey takes steps to improve the user experience. Source: Flickr / JD Lasica

There’s a problem when social media giants like Twitter and Facebook struggle to battle bad actors on their platforms.

It mars the experience for everyone and damages the image and reputation of the platform in the minds of users.

In today’s world, where data privacy and security lapses are increasingly difficult to hide, a platform’s reputation, image, and the experience it provides are critical to its survival.

And that is why both Twitter and Facebook are making significant efforts to repair their platforms, close loopholes, and earn back the trust and confidence of users.

“We are committed to fighting false news through a combination of technology and human review, including removing fake accounts, partnering with fact-checkers, and promoting news literacy,” said Tessa Lyons, Product Manager at Facebook, while announcing several new updates to the platform recently.

Here are some of the measures Facebook has taken this month to make its platform more secure and its content more reliable:

  • Expanded its fact-checking program to new countries
  • Extended its test to fact-check photos and videos
  • Increased the impact of fact-checking by using new techniques
  • Taken action against new kinds of repeat offenders
  • Improved measurement and transparency by partnering with academics

Twitter, too, is making significant and thoughtful changes to its platform to transform the experience.

“One of the most important parts of our focus on improving the health of conversations on Twitter is ensuring people have access to credible, relevant, and high-quality information on Twitter. To help move towards this goal, we’ve introduced new measures to fight abuse and trolls, new policies on hateful conduct and violent extremism, and are bringing in new technology and staff to fight spam and abuse,” said Twitter executives Yoel Roth and Del Harvey yesterday.

The company says it has taken significant steps to reduce the visibility of suspicious accounts in Tweet and account metrics, improve the sign-up process, audit existing accounts for signs of automated signups, and expand its malicious behavior detection systems.

In addition, Twitter has issued new guidance to users to better secure their accounts:

  • Enable two-factor authentication. Instead of only entering a password to log in, you’ll also enter a code sent to your mobile phone. This verification helps make sure that you, and only you, can access your account (a minimal sketch of how such one-time codes can be derived follows this list).
  • Regularly review any third-party applications. You can review and revoke access for applications by visiting the Apps tab in your account settings on twitter.com.
  • Don’t re-use your passwords across multiple platforms or websites. Have a unique password for each of your accounts.
  • Use a FIDO Universal 2nd Factor (U2F) security key for login verification when signing in to Twitter.
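
As a rough illustration of the first item above, here is a minimal Python sketch of how a time-based one-time code can be derived from a shared secret, in the style of RFC 6238 (the scheme used by authenticator apps). This is an illustration only, not Twitter’s actual implementation, and the secret shown is a made-up demo value.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code (RFC 6238 style) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # number of 30-second steps since the epoch
    msg = struct.pack(">Q", counter)                 # counter as an 8-byte big-endian integer
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

# Hypothetical demo secret: both the service and the user's device hold it,
# so each side can derive the same short-lived code independently.
shared_secret = "JBSWY3DPEHPK3PXP"

submitted_code = totp(shared_secret)                 # what the user's device would display
assert hmac.compare_digest(totp(shared_secret), submitted_code)  # server-side check
```

The point of the scheme is that a stolen password alone is not enough: an attacker would also need the short-lived code, which only the holder of the shared secret (or the phone receiving the code) can produce.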

The important thing is that this isn’t a one-time effort on the part of these companies; it’s something they’re both constantly (now, finally) thinking about.

Yesterday, for example, Facebook announced that it took down more than 10,000 fake Pages, Groups, and accounts in Mexico and across Latin America because they violated its community standards.

“The content we’ve found broke our policies on coordinated harm and inauthentic behavior, as well as attacks based on race, gender or sexual orientation,” said its Head of Cybersecurity Policy, Nathaniel Gleicher.

“There is no place on Facebook for this kind of behavior — and we’re investing heavily in both people and technology to keep bad content off our services,” he added.

Gleicher highlighted the fact that the company has been hard at work behind the scenes, identifying and removing bad actors every day.

“We took down 837 million pieces of spam and 2.5 million pieces of hate speech and disabled 583 million fake accounts globally in the first quarter of 2018 — much of it before anyone reported the issue to Facebook. By using technology like machine learning, artificial intelligence, and computer vision, we can proactively detect more bad actors and take action more quickly,” Gleicher concluded.