UK is introducing ‘Online Safety Bill’ as part of its Big Tech crackdown

The bill sets out new measures that include tougher and quicker criminal sanctions for tech bosses, as well as new criminal offenses for falsifying and destroying data.
18 March 2022


  • With the Online Safety Bill, the UK parliament will have to approve what types of ‘legal but harmful’ content platforms must tackle.
  • It will protect children from harmful content such as pornography and limit people’s exposure to illegal content, while protecting freedom of speech.
  • The regulator will have the power to fine companies failing to comply with the laws up to 10% of their annual global turnover, force them to improve their practices and block non-compliant sites.

A bill seeking to tackle access to harmful material online was introduced to the UK parliament yesterday. The Online Safety Bill, which grew out of the 2019 Online Harms White Paper published under Theresa May, attempts to regulate virtually every aspect of how the British use the internet. For starters, it is meant to protect children from harmful content such as pornography and limit people’s exposure to illegal content, while protecting freedom of speech.

The Online Harms White Paper from 2019 argued that existing regulatory and voluntary initiatives had “not gone far or fast enough” to keep users safe. The Paper proposed a single regulatory framework to tackle a range of harms. At its core would be a duty of care for internet companies. An independent regulator would oversee and enforce compliance with the duty.

The long-awaited and sweeping legal proposals introduced to the UK parliament yesterday would force internet companies to remove illegal content from their platforms. The bill also gives Ofcom, the UK’s communications regulator, the power to impose massive fines and to prosecute executives personally for failure to comply.

“The regulator Ofcom will have the power to fine companies failing to comply with the laws up to 10% of their annual global turnover, force them to improve their practices and block non-compliant sites,” the UK government said in a statement.

Additionally, executives whose companies fail to cooperate with Ofcom’s information requests could face prosecution or jail time within two months of the Bill becoming law, rather than after two years as previously drafted.

Companies will also be required to show how they proactively tackle the spread of such content. Additionally, Ofcom will be given the power to enter offices and inspect data and equipment to gather evidence, and it will be a criminal offense to obstruct an investigator, the Department for Digital, Culture, Media and Sport said in the statement.

The bill will also place requirements on social media firms to protect journalism and democratic political debate on their platforms. “News content will be completely exempt from any regulation under the Bill,” the government said.

Details on the Online Safety Bill

The Bill’s introduction in the Commons yesterday is the first step in its passage through the UK parliament to become law, beginning a new era of accountability online. The government has significantly strengthened the Bill since it was first published in draft in May 2021.

The strengthened bill now brings paid-for scam adverts on social media and search engines into scope, in a major move to combat online fraud. It will also require all websites that publish or host pornography, including commercial sites, to put robust checks in place to ensure users are 18 years old or over.

The law would also add new measures to clamp down on anonymous trolls, giving people more control over who can contact them and what they see online. It requires companies to proactively tackle the most harmful illegal content and criminal activity more quickly. The bill also criminalizes cyberflashing.

Ofcom will also be given responsibility for scrutinizing and challenging the algorithms and systems inside big technology companies that can propagate harm online, rather than asking officials to chase and litigate individual pieces of harmful content. Another new requirement is for companies to report child sexual exploitation and abuse content they detect on their platforms to the National Crime Agency.