Our response to the Draft Online Safety Bill

Joe Mulhall - 07 06 21

After a long wait, the Government has finally published a draft of the much-needed Online Safety Bill. First announced back in 2019, this bill is designed to tackle the complex and very broad issue of ‘harms’ caused online.

Be it Covid-19 conspiracy theories shared in WhatsApp groups, campaigns of harassment by Twitter trolls, or the proliferation of far-right propaganda on YouTube, there is no doubt that harms perpetrated by extremists within the online world remain a pressing issue.

While the draft bill includes much that we welcome, we are increasingly concerned that this crucial bill is being dragged into the ongoing culture war in a way that could dangerously undermine its effectiveness. Vague and badly defined additions about the protection of ‘democratic speech’, and comments by Oliver Dowden, Secretary of State for Digital, Culture, Media and Sport, about defences against so-called “woke campaigners”, have resulted in a draft bill that could, at best, fail to fix the issues it sets out to address and, at worst, actually open up a path for the re-platforming of hateful far-right extremists.

There are many great organisations scrutinising this bill from a variety of angles we care about. You should read the insights of Glitch, the Antisemitism Policy Trust and Demos, amongst many others, as the expertise of civil society organisations will be central to ensuring this legislation is shaped in a way that is effective and listens to the needs of affected communities. We will of course support their demands where we can and intend to continue collaborating with a wide variety of organisations going forward.

However, our focus is on the online harm done by the far right, and that is where this paper delves. With this in mind, we have gone through the draft bill, highlighted some of the potential gaps, problems and concerns, and produced questions which need to be answered by the government. With the bill soon to face pre-legislative scrutiny via a committee, we hope these questions will prove to be a constructive addition to the ongoing debate.

Questions that have to be answered

What is “democratically important” content?

Section 13 of the Draft Online Safety Bill outlines “Duties to protect content of democratic importance”:  

(6) For the purposes of this section content is “content of democratic importance”, in relation to a user-to-user service, if— (a) the content is— (i) news publisher content in relation to that service, or (ii) regulated content in relation to that service; and (b) the content is or appears to be specifically intended to contribute to democratic political debate in the United Kingdom or a part or area of the United Kingdom.

The press release that accompanied the publication of the draft bill clarified that this will include: “content promoting or opposing government policy or a political party ahead of a vote in Parliament, election or referendum, or campaigning on a live political issue.”

While the aim of protecting speech of democratic importance is an admirable one, there is, at present, a lack of clarity around definitions, which opens this clause up for abuse by the far right.

Questions:

  • Is it possible for “democratically important speech” to cause harm online? What happens if content produced by a politician or journalist is also harmful? What is more important in this bill – reduction of harm caused by hateful online content or protection of “democratically important” speech? What happens when they come into conflict?
  • What is the definition of “democratically important”? More detail is needed here.
    • The definition at present seems very narrow and imprecise. It seems that speech related to elections will be protected, as well as speech related to “live political issues”. How is the latter defined?
    • It is important that any discussion about how this bill protects democratic speech goes beyond limiting censorship, and includes the promotion of a genuinely pluralistic online space. This demands an analysis of the voices that are so often missing or marginalised online, namely the voices of minority and persecuted communities. We will only create a genuinely democratic online space by broadening out the definition of “democratically important” to include not just content that is often removed, but also content that is missing in the first place. It cannot just protect existing “democratically important” speech, it must also create a safe and pluralistic online space that encourages and empowers diverse and marginalised voices, enabling them to be heard.
  • The bill indicates that content will be protected if created by a political party ahead of a vote in Parliament, election or referendum, or campaigning on a live political issue.
    • Will this clause mean that far-right figures who have already been deplatformed for hate speech must be reinstated if they stand in an election?
      • Does this include far-right or even neo-Nazi political parties?
    • If immigration was deemed a “live political issue” could far-right “migrant hunters” demand that their prejudiced content is protected?
    • If there was a “grooming gang” case going through the courts could local far-right activists claim that their anti-Muslim content is protected as it is a “live political issue” in their community?
    • Under this proposed draft, it could be the case that racist and misogynist content that is legal could be re-uploaded if the content in question was deemed to be either “democratically important” or a “live political issue”.

What is “Journalistic Content”?

Section 14 of the draft bill outlines “Duties to protect journalistic content” which includes “a dedicated and expedited complaints procedure available to a person who considers the content to be journalistic content.”

(8) For the purposes of this section content is “journalistic content”, in relation to a user-to-user service, if— (a) the content is— (i) news publisher content in relation to that service, or (ii) regulated content in relation to that service; (b) the content is generated for the purposes of journalism; and (c) the content is UK-linked.

In short, it seems that journalistic content is simply defined as content “generated for the purposes of journalism”.

The press release that accompanied the draft bill stated that “Articles by recognised news publishers shared on in-scope services will be exempted” and that:

This means they [Category 1 companies] will have to consider the importance of journalism when undertaking content moderation, have a fast-track appeals process for journalists’ removed content, and will be held to account by Ofcom for the arbitrary removal of journalistic content. Citizen journalists’ content will have the same protections as professional journalists’ content.

Questions:

  • Is it possible for “journalistic content” to cause harm online? What happens if content produced by a journalist is also harmful? What is more important in this bill – reduction of harm caused by hateful online content or protection of content by journalists? What happens when they come into conflict?
  • Is the bar for removal of harmful content higher if produced by a journalist and published by a news outlet than if the same opinion/statement is posted by an individual user?
  • Does this protection include far-right activists who self-identify as journalists?
    • Some of the most high-profile and dangerous far-right figures in the UK, including Stephen Yaxley-Lennon (AKA Tommy Robinson), now class themselves as journalists. Would this protection mean that their content would receive additional protections?
    • There are far-right and conspiracy theory “news companies” such as Rebel Media and Alex Jones’ InfoWars. Both replicate mainstream news publishers but are used to spread misinformation and discriminatory content. Would they receive additional protections under this clause of the bill?
    • Under this proposed draft, it could be the case that racist and misogynist content that is legal could be re-uploaded if the content in question was produced by a journalist.

Will this bill adequately deal with small but extreme platforms?

The Draft Bill states that, as soon as reasonably practicable, OFCOM must “establish a register of particular categories of regulated services”, splitting them into Categories 1, 2A and 2B, each with threshold conditions.

This is extremely important as, according to the Government’s response to the White Paper, only Category 1 services “will additionally be required to take action in respect of content or activity on their services which is legal but harmful to adults.” The aim of this is to “mitigate the risk of disproportionate burdens on small businesses.”

Questions:

  • Some of the most dangerous platforms used by the far right to spread hate and organise are smaller platforms they have co-opted, or even small platforms they have created themselves (for example BitChute, 4chan and Gab).
  • Will these platforms be classed as Category 1 due to the danger they pose despite being small and despite this additional regulation being a significant burden on them as a small business?

A BETTER WEB: REGULATING TO REDUCE FAR-RIGHT HATE ONLINE

In November, we laid out the principles that HOPE not hate believes should underpin the Online Harms legislation. You can read our full briefing here.
