MPs are debating the Online Safety Bill – here’s why that matters

Liron Velleman - 19 April 2022

Next week, MPs will begin debating one of the most consequential Bills of recent years: the Online Safety Bill. Our Political Organiser Liron Velleman takes a look at two of the major concerns with the Bill.

Our briefing to MPs ahead of Second Reading

Download

Social media abuse and online hatred are, tragically, a feature of modern life. Just ask anyone from a minoritised community who uses, or has used, social media: they will likely have stories of abuse that they and their loved ones have had to endure. Despite huge efforts by civil society groups, including HOPE not hate, to expose, highlight and campaign against hateful content online, social media companies have failed to self-regulate the hatred on their platforms, so it is time for the online space to be regulated.

If the Bill passes, Ofcom, the new online safety regulator, will be given the power to regulate social media companies and enforce codes of practice on them to tackle illegal content. Stronger duties will also be placed on some platforms to deal with content that is harmful but technically legal.

Whilst this is welcome, we have grave concerns about sections of the Bill that could have huge consequences for hatred online, and in particular for how the far right use social media. The Bill currently fails to sufficiently regulate the smaller platforms used by the far right, and it creates two potential loopholes that could allow the far right to make their way back onto mainstream social media sites: by defining themselves as ‘journalists’ or by claiming their speech is ‘democratically important’.

Smaller platforms getting off the hook

Currently, the Bill splits social media platforms into different categories based on their size and functionality, with extra duties placed on the platforms in ‘Category 1’. These platforms wouldn’t just have to build systems to effectively and efficiently remove illegal content (a low bar, but it’s something!) but would also have to show they had a plan to deal with content that is harmful to adult users yet not deemed illegal.

Whilst this will be a step forward in tackling hatred on the bigger platforms like Facebook and Twitter, it misses the mark when it comes to the alternative social media ecosystem co-opted and built by the far right. If this Bill is to properly address the issue of online harms, it has to reflect the real nature of the online space and take into account not just the size of a platform, but also the risk that it poses.

The social media landscape

In recent years, far-right figures have begun to migrate to alternative and usually smaller platforms. The result is that there are now broadly three categories of social media platforms used by the far right:

Mainstream platforms

Those that are widely used across society, such as Facebook, Twitter, Instagram, YouTube and TikTok. While these platforms all have an extremism problem, they generally have terms and policies that prohibit extreme and discriminatory behaviour, even if they don’t always enforce them as consistently as necessary. Where possible, the far right want to remain on these platforms, as they offer huge audiences beyond existing supporter bases. This is where they want to propagandise and recruit.

Co-opted platforms

Those not created for or by the far right, but which have become widely used by them, either because of loose policies, a lack of moderation, or a libertarian attitude towards deplatforming and content removal. Most notable is Telegram, which is an enormous social media app with over one billion downloads globally. Due to its consistent failure to remove extremist activity, it has become a crucial hub for the contemporary far right. The danger for the far right with these platforms is that they may eventually choose to clean up their act and remove illegal or harmful content, making them insecure homes in the long term. 

Bespoke platforms

A growing group of platforms created by the far right, or by people consciously courting extremists. Many of these are essentially clones of major platforms, but with little or no moderation. The best known are Gab, BitChute and, most recently, GETTR.

We will be working with MPs across the House to find a way for platforms that carry a significant risk of harm to be brought into Category 1 and more strictly regulated by Ofcom.

Loopholes for ‘journalists’ and ‘politicians’?

The other major concern for HOPE not hate is the exemptions for ‘journalistic content’ and ‘democratically important’ content. As they are currently defined, these sections could be exploited by the far right to re-enter the mainstream social media space and spread their hatred there.

The Bill adds extra duties on social media companies to protect journalistic and democratically important content on their platforms. The definitions of both are vague and open to wide interpretation.

Many of the key far-right figures HOPE not hate monitors self-define as journalists. Some of the most high-profile and dangerous far-right figures in the UK, including Stephen Yaxley-Lennon (AKA Tommy Robinson), now class themselves as journalists.

There are also far-right and conspiracy theory “news companies” such as Rebel Media and Urban Scoop. Both replicate the format of mainstream news publishers but are used to spread misinformation and discriminatory content. Many of these individuals and organisations have been deplatformed for consistently breaking the terms of service of major social media platforms, and this exemption could see them demand their return.

Likewise, it could enable a far-right activist who is standing in an election, or potentially even just supporting candidates in elections, to use all social media platforms. This could again mean far-right figures being ‘replatformed’ onto social media sites where they would be free to continue spreading hate.

It is vital that any discussion about how this Bill protects democratic speech goes beyond limiting censorship and includes the promotion of a genuinely pluralistic online space. This demands an analysis of the voices that are so often missing or marginalised online, namely those of minority and persecuted communities. We will only create a genuinely democratic online space by broadening the definition of “democratically important” to include not just content that is often removed, but also content that is missing in the first place. The Bill cannot just protect existing “democratically important” speech; it must also create a safe and pluralistic online space that encourages and empowers diverse and marginalised voices, enabling them to be heard.

There seems to be an assumption by the drafters of the Bill that journalistic content cannot or does not cause harm. Yet under this proposed legislation, racist and misogynistic content that is legal could be re-uploaded simply because it was produced by a journalist. It remains unclear whether the Bill accepts that “journalistic content” can cause harm online. And will far-right figures who have already been deplatformed for hate speech have to be reinstated if they stand in an election? Does this include far-right or even neo-Nazi political parties? What about those merely campaigning on a ‘live political issue’?

The Bill as it currently stands could provide an exemption for fascists to propagate harm online. The loopholes must be closed.

