By Melissa Ryan
This week’s newsletter is focused on deplatforming. We’ve been planning it for weeks, but as often happens with our thematic issues, the timing couldn’t be more apt. Following a week in which Facebook and YouTube made clear that politicians won’t be held to the same standards as the rest of us on their platforms, President Trump has repeatedly used his Twitter account to attack the Ukraine whistleblower and call for their outing, threaten that his base would start a civil war, and suggest that the House’s impeachment inquiry was actually a coup.
Trump’s language seems intended to incite violence, and patience is starting to wear thin. Senator and presidential candidate Kamala Harris has called on Twitter to suspend President Trump’s account. So has Recode founder and New York Times columnist Kara Swisher, whose column calling for Trump’s suspension also offers some hypothetical scenarios, worth considering, about what could happen once Trump is no longer in office.
Attempting to define what deplatforming is and isn’t can be a challenge. Merriam-Webster hasn’t yet added deplatforming to their dictionary but has defined it for an article about emerging words: “Refers to the attempt to boycott a group or individual through removing the platforms (such as speaking venues or websites) used to share information or ideas” (with a note about how the term is still fluid and the definition might change).
Deplatforming as a concept can make people nervous, even friends and allies on the left. I think part of the reason for this is that conversations around deplatforming tend to center on the deplatformed figures rather than on the users who have been harmed by them. There are a few exceptions, mostly after high-profile harassment cases, such as when Milo Yiannopoulos’ racist and misogynist harassment of actress Leslie Jones drove her off Twitter, or after a domestic terrorist incident where the perpetrator was radicalized online, as with the Christchurch and El Paso mass shootings.
But far-right figures and communities aren’t deplatformed for their expression. They get deplatformed because they’re harming the rest of us, whether through targeted harassment, spreading hate, or manipulating social media. And for the most part, they only get deplatformed when tech platforms feel enough public pressure to act. For the past three years, tech companies have had to be shamed into changing their content moderation policies. Then they’ve had to be shamed again into enforcing them. The pressure to remove content and deplatform repeat offenders has come from governments, journalists, researchers, and advocates, and it often still isn’t enough to move the companies to do the right thing and protect the vast majority of their users.
For all the naysaying (and there’s a lot of it!), anecdotal evidence suggests that deplatforming is an effective tactic for curbing the spread of hate, harm, and harassment online. Can you remember the last time 8chan made the news? Milo Yiannopoulos? Alex Jones?
Deplatformed figures and communities can’t spread hate, they can’t drive the media narrative, they can’t make a profit, and, crucially, they can’t radicalize others and grow their audiences. Joan Donovan, speaking to Vice last year, put it this way: “Generally the falloff is pretty significant and they don’t gain the same amplification power they had prior to the moment they were taken off these bigger platforms.” Deplatforming works.