Bots, fake news and the anti-Muslim message on social media

28 November 2017

INTRODUCTION

• In this report, we show how recent terror attacks in the UK have been successfully exploited by anti-Muslim activists on social media to increase their reach and grow their audiences.

• Monitoring key anti-Muslim social media accounts and their networks, we show how even small events are amplified through an international network of activists.

• We also provide concrete evidence of a leading anti-Muslim activist whose message is hugely amplified by the use of a 100+ strong ‘bot army’.

• The global reach, low price and lack of regulation on social media platforms present new possibilities for independent, single-issue and extremist viewpoints to gain significant audiences.

• We delve into the murky and secretive world of the dark web to explore what tools are available for manipulating social media, and show how easy these tactics are to use, even for non-tech-savvy users.

• Through testing, we conclude that even cheaply inflating one’s number of followers increases one’s ability to reach a larger audience.

• We situate these developments in the context of increasing hostility towards Muslims and immigration in Europe and the US.[1][2]

“Trigger events” such as terror attacks, and other events that reflect badly on Muslims and Islam, cause an increase in anti-Muslim hate both on the street[3] and, as we will show, online.

DISINFORMATION ON SOCIAL MEDIA

Social media outlets such as Facebook and Twitter are key for public debate and political discourse.

After much criticism, especially in the wake of the tragic events in Charlottesville, Virginia, both Facebook and Twitter are doing more to moderate hateful content on their platforms[4]. However, the internet is still awash with anti-Muslim websites and social media accounts on mainstream platforms. Most worryingly, both their size and their impact are on the rise.

Research by Miller et al. and the Runnymede Trust argues that this can likely be attributed to a general increase in anti-Muslim sentiments in Europe and the USA. These are often attributed to anti-Muslim framings of events such as Brexit, increased migration from the Middle East and North Africa to Europe, the clear anti-Muslim rhetoric in Donald Trump’s presidential campaign and terror attacks.[5] [6] [7]

As part of this report, data on key anti-Muslim Twitter accounts was continuously collected between March and November 2017 in order to assess the growth and reach of these anti-Muslim movements on the platform.
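For illustration, the snippet below sketches the kind of snapshot script that can drive this sort of monitoring: it records a timestamped follower count per account at regular intervals. It is a minimal sketch assuming Twitter API credentials and the tweepy library; the handles, keys and file name are placeholders, not the accounts or tooling actually used for this report.

```python
# Minimal follower-count snapshot sketch (illustrative only).
# Assumes Twitter API credentials and the tweepy library; handles and
# output file are hypothetical, not the accounts studied in this report.
import csv
import datetime
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

HANDLES = ["example_account_1", "example_account_2"]  # hypothetical watch list

def snapshot(handles, outfile="follower_counts.csv"):
    """Append one timestamped follower count per handle; run e.g. hourly via cron."""
    now = datetime.datetime.utcnow().isoformat()
    with open(outfile, "a", newline="") as f:
        writer = csv.writer(f)
        for handle in handles:
            user = api.get_user(screen_name=handle)
            writer.writerow([now, handle, user.followers_count])

snapshot(HANDLES)
```

Repeated over months, snapshots like these yield the time series plotted below.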

The data, which can be seen in the graph below, shows a steady increase across all accounts, but some periods stand out, with rapid follower growth occurring simultaneously across multiple accounts.

To explain these short bursts of new followers, we turn to the scholarship around “trigger events”. Cuerden and Rogers argue that terror attacks by Muslims cause intense media debate and negative images of Islam, Muslims and immigration, which facilitates the creation of a perceived conflict.

Mills et al. found that the perception of conflict between groups facilitates retribution and violence against anyone in the opposite group, as they are dehumanised and seen as monolithic.[8] However, the effect was also seen when immigration and unemployment received increased public attention, underlining that the effect is in part connected to fear and insecurity.[9]

Our data also shows clear signs of these events in the social media sphere. But the effect is not completely uniform. Events such as the Manchester and London Bridge attacks co-occur with rapid increases in followers among many of the key anti-Muslim accounts in the UK. But the Westminster attack shows much less impact on the number of followers, despite being similar to the London Bridge attack in terms of the background of the attacker and the type of incident.

It is important to acknowledge that spikes in direct hate and interest in anti-Muslim alternative media do not necessarily contribute to a long term increase in anti-Muslim hate. Increased activity in this sphere could be explained by increased mobilisation among those already holding prejudiced views of Muslims, as indicated in HOPE not hate’s Fear and HOPE 2017 report.[10]

Followers over time, post-terror attacks, for key British anti-Muslim accounts

These short periods of rapid gains in followers co-occur for multiple accounts. One example is in the hours and days after the Manchester attack, where several of the most prominent anti-Muslim accounts in the UK gained a significant number of followers. Among these were the accounts for failed UKIP leadership hopeful Anne Marie Waters (now running the far-right For Britain party), and for Paul Golding and Jayda Fransen, the leaders of the anti-Muslim street movement Britain First.

But Tommy Robinson (aka Stephen Lennon), former leader of the English Defence League (EDL), stands out. He gained +40,042 followers in the week after the Manchester attack, an increase of +17%, with the majority of those (29,396) coming within 48 hours of the bombing. Similarly, he gained +13% (22,365) and +14% (40,757) followers after the Westminster and Finsbury Park attacks, respectively. This compares to his average increase of +6,422 new followers per week over the period March – November 2017.

Tommy Robinson’s (Stephen Lennon’s) number of Twitter followers over time

An increased number of followers intuitively suggests that an account’s resonance has grown, but this effect is far from uniform.

Unsurprisingly, at the same point as these accounts rapidly increase their number of followers, the number of interactions spikes as well. This is especially visible for Lennon/Robinson, who has managed to use Twitter in a very effective manner around each attack, with tweets that are widely retweeted. Spikes in interactions are of course also influenced by other independent factors, not all related to terror attacks, making it difficult to explain spikes when looking at a single account.

Tweets per day for Anne Marie Waters and Tommy Robinson (Stephen Lennon) after terror attacks

With each increase in Twitter followers comes a larger potential reach for every single tweet and therefore a potential influence on the public debate. A clear example of this can be seen in the aftermath of the London Bridge attack on 3 June 2017.

Out of the top 100 most shared tweets about the attack, 32 showed clearly negative sentiments against Muslims. Notable among these were tweets shared by the largest anti-Muslim accounts, such as those run by Paul Joseph Watson of conspiracy site Infowars, alt-right commentator Brittany Pettibone, Raheem Kassam of Breitbart London, Canadian far-right alternative media outlet Rebel Media and the Voice of Europe.

Caolan Robinson, Lennon/Robinson’s cameraman at Rebel Media, posted the third most retweeted tweet after the attack, ranking ahead of the reactions from the Metropolitan Police, the BBC and London Mayor Sadiq Khan. In it, he accused CNN of fabricating a “muslim protest” against the attack.

Screenshots of tweets after London Bridge

“1. Islam attacks London
2. Libs say nothing to do with Islam
3. Light a few candles
4. Invite more refugees
5. Blame Trump
#LondonBridge https://t.co/Glefb8Tto4”

International

Reactions to current events such as violent attacks are contextual and might differ between countries and continents. For example, attacks in the UK do not necessarily affect sentiments towards Muslims in North America. However, there are several indicators of an international growth trend for these accounts.

Twitter accounts for North American anti-Muslim figures such as Pamela Geller (@pamelageller), Brigitte Gabriel (@actbrigitte) and David Horowitz (@horowitz39) are all growing steadily. On average there was +117% growth in followers for key anti-Muslim activists in the UK and USA between March and November 2017. Significant events seem to resonate internationally as well: Geller’s and Gabriel’s accounts both showed rapid increases in followers at the time of the Westminster, Manchester and London Bridge attacks in the UK.

Brigitte Gabriel’s number of followers on Twitter over time
Pamela Geller’s follower growth on Twitter

This effect can be seen among anti-Muslim websites as well. Pamela Geller’s ‘Geller Report’ increased from one million views per month to over two million in the period between July and October 2017. Likewise, the Gates of Vienna counter-jihadist blog doubled in visitors per month between May and October 2017.[11]

Monthly visitors for Geller Report[12]

In line with previous scholarship by Mills et al., these types of events have a triggering effect, lowering the barrier not only to critique of Islam but also to direct hate against Muslims.[13]

The growth among Twitter accounts and websites spreading anti-Muslim hate is alarming. In such a key area of public interest, it indicates a growing appetite for these views, and as each account or site grows, more people are exposed to deeply prejudiced anti-Muslim views.

Building networks

Far-right events and actions often take place outside of mainstream social media channels and instead on closed, anonymous or less regulated chatrooms and other types of platforms such as Gab, a free speech-advocating Twitter alternative, and group chat platform Discord.[14]

Mainstream social media is nonetheless an important venue for the anti-Muslim movement to network and to disseminate and gather information, and it has made use of these possibilities to the full. The movement is connected across Europe and the US on platforms such as Twitter, as is clearly shown by its high level of interaction on the network.

The blue circles represent key anti-Muslim accounts and the lines between them represent interaction. The data was gathered for the whole month of October 2017.
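To illustrate how such an interaction network can be assembled, here is a minimal sketch using the networkx library. The accounts and interaction counts are hypothetical stand-ins for the mention and retweet data collected in October 2017.

```python
# Sketch of building and inspecting an interaction network (hypothetical data).
import networkx as nx

# Each tuple: (account_a, account_b, number of interactions in the period)
interactions = [
    ("account_a", "account_b", 14),
    ("account_a", "account_c", 3),
    ("account_b", "account_c", 9),
]

G = nx.Graph()
for a, b, weight in interactions:
    G.add_edge(a, b, weight=weight)

# Weighted degree gives a crude measure of how embedded each account is.
print(sorted(G.degree(weight="weight"), key=lambda x: -x[1]))
```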

Closely-connected social media helps facilitate ‘crowdsourcing’ of anti-Muslim hate messages and propaganda between anti-Muslim activists and the wider far-right milieu. These networks allow for small, local events to be spread around the world and collated into a bigger canon of anti-Muslim propaganda. This serves to strengthen the impression that Islam is a grave threat, despite the number of terror incidents actually being relatively few and far between.

The most prominent anti-Muslim activist in the USA, Pamela Geller, publishes a daily list of “news bulletins” in her email newsletter and on her Twitter account. The list of 10-15 items gives the impression that incidents are frequent, but they are sourced widely, regularly including items from North America, Europe, the Middle East, North Africa and Australia.

A very similar formula is used by websites such as Gates of Vienna, a central anti-Muslim blog, and Voice of Europe, a large Twitter account and website dedicated to reporting anti-Muslim and anti-immigration views.

Super Spreaders 

Marginal anti-Muslim activists can spread their content internationally via much larger and influential social media accounts.

One telling example of this phenomenon is how a picture of a McDonald’s flier in Arabic, posted by Swedish anti-Muslim activist Jan Sjunnesson (@sjunnedotcom), was rehashed countless times and turned into articles in at least eight languages.

On 29 April 2017 Sjunnesson posted a picture of this McDonald’s flier with the caption “McDonalds in Södertälje [a town in Sweden]”. Despite coming from a small Twitter account and being written in Swedish, it spread quickly on Twitter and was turned into articles on multiple anti-Muslim websites warning about the increasing influence of sharia law and the loss of identity in Europe and the US. It was used by sites such as fake news outlet The Gateway Pundit, neo-Nazi blog The Daily Stormer[15] and Russia’s Sputnik News[16] to promote anti-Muslim ideas.

The Gateway Pundit wrote:

Women in Islamic clothing wander around Sweden and violent Muslim men beat and rape the Swedish natives. McDonald’s is also doing their part to make sure that Sweden loses their language and identity by catering to their new demographic, Arabs.[17]

This highlights one important characteristic of social networks: that certain people are more important than others in getting a story widespread attention. These users are often called “super spreaders”, a term for the most contagious patient borrowed from the field of epidemiology.

Super spreaders don’t necessarily have huge followings themselves, but they are more connected than most, and their connections in turn are well connected. But other characteristics, such as trust outside of the network, are also relevant.[18]
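Pei et al. operationalise this by ranking users by how deeply embedded they are in the network (their k-core index) rather than by raw follower count.[18] Below is a minimal sketch of that idea using networkx on a stand-in graph; real input would be the Twitter interaction network described above.

```python
# k-core decomposition as a super-spreader signal (after Pei et al.).
# Users in a high k-core are connected to other well-connected users,
# which predicts spreading power better than follower count alone.
import networkx as nx

G = nx.karate_club_graph()  # stand-in network; real data would be Twitter interactions
core = nx.core_number(G)    # k-core index per node
likely_spreaders = sorted(core, key=core.get, reverse=True)[:5]
print(likely_spreaders)
```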

Super spreaders have much greater influence on social networks than other users, as their messages have a much higher probability of a large reach. In this way, they act as gatekeepers who can push messages ‘viral’ on social media platforms. Jan Sjunnesson is a telling example. He has 7,250 followers and tweets in Swedish. Intuitively he is not a likely candidate to be featured on The Gateway Pundit. And indeed he wasn’t. But a much larger user, Peter Imanuelsen (@PeterSweden7), was.

Imanuelsen is an English-language, far-right, anti-Muslim hate monger on Twitter with roughly 10 times more followers than Sjunnesson. He is an associate of Voice of Europe and connected to at least six key anti-Muslim accounts [Figure 1]. Imanuelsen reposted the image one day later and it was retweeted 4,141 times, compared to 98 retweets for the original.

Sjunnesson’s message reached Imanuelsen, clearly a much more connected user, and from there received far greater attention, ending up as an article on The Gateway Pundit (without crediting Sjunnesson).[19]

How the McDonald’s story spread…

Dishonest skewing & fake news

Intentionally dishonest alternative media outlets such as Breitbart[20] and other social media accounts have received considerable attention in the wake of the 2016 US presidential election. Such sites and accounts have propelled the term ‘fake news’ (news items presented as fact without having any basis in reality[21], but also often including heavily skewed news with a factual basis) into common use.

The intention behind producing the content is important when it comes to fake news. Mistakes and unintentional misinformation, as well as news satire, should not be considered ‘fake news’ (although they might look similar). It is the intention to mislead, and outright fabrication, that sets fake news apart.

However, there are different types of fake news. The spectrum of such content ranges from the clearest outright fabrications, without any basis in reality, to heavily-skewed news items based on actual events. These skewed items make use of real events but leave out important information or fabricate details to make the reader draw whatever conclusion the creator intended.

Spreading fake news often relies on strategic amplification.[22] Using the power of the network and super spreaders, manipulators can push stories ‘up the chain’ of media outlets. An item might be posted on an image board such as 4chan, a central anonymous forum for the alternative right, where barriers to entry are low or non-existent and from where other users can pick it up and share it on mainstream social media platforms. Influential super spreaders amplify the story, which is then taken up by smaller publications; more influential outlets sometimes mine these for new material. The Gateway Pundit, for example, is often cited by Fox News.[23]

Variations of this tactic are also possible. Smaller outlets can be bypassed entirely if a story gets enough traction on social media. Topics covered by Trump during the 2016 US presidential election, for example, were widely shared and seen as newsworthy solely because of his candidacy.

Discussions of disinformation campaigns and fake news have focused heavily on the possibility of influencing electoral politics[24], but similar strategies are employed by anti-Muslim activists on social media and via alternative news sites dedicated to spreading anti-Muslim and anti-Islamic sentiments.

One typical tactic employed by fake news outlets is to use exaggerated headlines that confirm the reader’s beliefs and prejudice, to attract clicks and encourage sharing on social media. This in turn generates advertising revenue and an increased spread of the message.

Gates of Vienna, a prominent anti-Muslim hate website and Twitter account, ends its daily news feed with a caveat: “We check each entry to make sure it is relatively interesting, not patently offensive, and at least superficially plausible” [emphasis added]. The quote epitomises the fake news formula and it is applied by most anti-Muslim alternative media outlets and social media accounts examined in this report.

Alternative media outlet Breitbart, which is run by Donald Trump’s former chief strategist Steve Bannon, has been at the forefront of exploiting this technique to gain readers and drive visibility. Its reporting on Islam and Muslims is largely indistinguishable from the anti-Muslim movement’s rhetoric or even that of the far right.

On Breitbart London there is often insufficient or no evidence at all to support some of the wilder claims of its headlines. Writers often de-escalate from the stronger headline claim to a weaker one in the body of the piece. Such de-escalation leads to the bizarre, openly self-contradictory nature of many articles. ‘Seven Found Guilty of Robbing German Churches to Finance Jihad’, for example, later back-pedals when it admits that: “Judges said it was unclear whether the funds they generated were actually used to support armed jihad and if so, to what extent.”

Failing de-escalation, the writer will instead attempt to rely on putative evidence that doesn’t actually support their claims either.

In ‘Europe’s Rape Epidemic: Western Women Will Be Sacrificed At The Altar Of Mass Migration’, Breitbart London writer Anne Marie Waters (of the anti-Muslim For Britain party) cites a Norwegian (specifically Oslo) rape statistic via a Christian Broadcasting Network article, which provides no link to the cited police reports, only a link to an unavailable YouTube clip when referring to the “rape epidemic”. With such poor ‘evidence’ to support the notion of a rape epidemic in Europe resulting from mass migration, Waters relies heavily on confirming her readers’ prior beliefs through mere speculation and rhetoric.

The result is an echo chamber that reinforces anti-Muslim sentiments and the belief that mainstream media (‘MSM’) cannot be trusted. Tappin, van der Leer and McKay show that readers are more likely to trust news items that confirm their own beliefs, or possibly what they hope to be true, than the other way around.[25] Simultaneously, consistent repetition of a one-sided and wholly negative perspective on Muslims, Islam and immigration also causes what Pennycook and Rand call an “illusory truth effect in which the repetition […] increases perceptions of accuracy”[26].

Steve Bannon, head of Breitbart and Donald Trump’s former chief strategist
<< Read HOPE not hate’s Breitbart report >>

The usage of social media platforms by all major anti-Muslim accounts follows a similar pattern.

Terror attacks, as well as other forms of crime, are regularly reported by anti-Muslim accounts on Twitter and Facebook, promoting the general theme that Islam and Muslims are incompatible with, or constitute threats towards, Western society. Often other societal developments and news events that are not necessarily connected to Muslims or Islam are heavily skewed or even invented in order to promote anti-Muslim ideas.

A common example is the use of fabricated statistics on crime rates among Muslims. Breitbart’s attempt to perpetuate the idea that there are areas in Sweden where police have little control due to migrant and Muslim crime is one clear example.[27]

Another common tactic is items designed to assert that Islam is gaining increasing influence over European and American society, via articles with titles such as Pamela Geller’s ‘LOOK: Before and After Islamization of American Education’. As indicated by the steadily growing readership and interaction, the tactic has proved effective.

Coordinated disinformation campaigns: a case study

Online forums and image boards are used to coordinate anti-Muslim social media campaigns and spread wholly fabricated messages across social media. One example is the image of a woman in a headscarf taken on the day of the Westminster attack in March 2017.

The picture shows a Muslim woman walking with a phone in her hand, past a group of people aiding one of the victims of the attack. It gained significant attention after a Twitter user called @Southlonestar claimed that the woman was indifferent to the suffering of others and that this was generally true of all Muslims. The claim was untrue (it has been refuted by both the photographer and the woman herself). Other pictures in the series show her noticeably distraught and likely shocked by what she has just seen.[28]

In November 2017 it was revealed that @Southlonestar was one of the 2,700 accounts handed over to the US House Intelligence Committee by Twitter as fake accounts created in Russia to influence UK and US politics. Besides spreading anti-Muslim hate, it spread messages before the US presidential election and was one of the accounts that tweeted pro-Brexit messages on the day of the EU referendum in June 2016.[29]

Whatever the circumstances behind the picture, it was quickly shared by several major far-right and anti-Muslim accounts on Twitter, including those of alt-right leader Richard Spencer and Pamela Geller.[30]

Alt-right leader Richard Spencer shares the ‘fake news’ image of a supposedly uncaring Muslim woman at the Westminster attack

However, this image of the Muslim woman would soon be used for even more nefarious purposes. The same evening it was shared, the picture was appropriated by users on the /pol/ board on the online forum 4chan. One user posted a picture where the woman was montaged into another setting with the simple comment next to it “you know what to do”, meaning that he wanted his fellow users to create image montages of the woman in other settings.

The intention was further clarified in a comment below, in which the same user uploaded a file where the image of the woman had been cut out from the background, to make it easier to montage into other pictures. The anonymous user wrote: “Go forth and make dank OC” [“dank” means good or high quality; “OC” means Original Content].

4chan users manipulate the Muslim woman image to reuse/superimpose elsewhere

In the comments that followed were hundreds of variations of the posted image, most situating the woman next to atrocities of varying degrees. Clearly inspired by the original post or its derivatives, the pictures aimed to send the message that the woman (and Muslims overall) were unmoved by the suffering of others – or even enjoyed it. Many of them were extreme and obvious parodies and did not leave the forum. In one she is seen walking past what looks like an extermination camp in Germany.

However, importantly, some did not stay on 4chan. Two weeks later a manipulated image was spread on social media in Sweden, after four people were killed in a vehicle-ramming terror attack in Stockholm. The image showed a paramedic walking between what looked like covered bodies, while in the background the familiar silhouette of the woman on Westminster Bridge was superimposed. It was blurred to blend in with the setting but was undoubtedly a cut-out from the Westminster photograph. It did not get widespread reach, as it was quickly debunked by Twitter users, but a similar attempt was made after the Manchester attack on 22 May 2017.

The same woman was again montaged into a picture of the scene of the attack, making it look as if she was walking away unmoved, with victims lying behind her. This image was retweeted 52 times and liked 158 times.

Image superimposed onto Stockholm attack and, below, after Manchester arena bombing

Online-Offline

Recent actions by alternative right[31] activists have also crossed over into the offline world.

During a counter-demonstration against the invitation of Mike Cernovich, an alternative right activist and conspiracist, to give a speech to the College Republicans club at Columbia University, one alternative right activist walked in front of anti-fascist demonstrators and, turning to face photographers, unfurled a large banner. The banner read: “NO WHITE SUPREMACY NO PEDO BASHING NO MIKE CERNOVICH”. The action can only be seen as a way to intentionally smear anti-fascist activists.[32]

The picture was then posted by Paul Joseph Watson, one of the most widely-followed alternative right and anti-Muslim personalities on Twitter (and editor-at-large of conspiracy site Infowars). It was retweeted over 20,000 times, despite quickly being discredited as a fake probably intended to smear the demonstrators.

“Superficially plausible” seems to have become the motto of anti-Muslim and far-right actors on social media. Claims are often easily debunked but thanks to the speed of the network and users’ seeming willingness to share what confirms their point of view, these stories are spread widely.

MANIPULATION

On platforms like Twitter and Facebook multiple factors influence what users trust. For example, users that we know about from other contexts and those that we share contacts with are more likely to be trusted.

In the offline world, these attributes are difficult to manipulate and change only with effort or over a long period of time. On social media platforms, where users are represented with much more basic information and numerical measurements such as number of followers and number of replies, it is much easier to impersonate and manipulate these attributes.

A case in point is “Jenna Abrams”. Abrams presented herself as a pro-Trump American woman on Twitter, and her witty, anti-feminist and anti-Muslim hate tweets amassed over 70,000 followers. She was retweeted even by Trump. But Abrams never existed.

As part of the investigation into Russian influence on the 2016 US presidential election, it was revealed that the account was a bot designed to cause division and conflict in American society. Former National Security Adviser Michael Flynn retweeted Abrams just three days before the 2016 election, which indicates how effectively social media can be used deceptively.

Quotes from Jenna Abrams’ Twitter account

“Muslims laughing at victims: “Shoot that m****r f**king girl right there” #LondonAttacks https://t.co/GvuKcBCdzp

To @lsarsour and @MuslimIQ, Sharia Law is peaceful. It’s simply: Beheadings Stonings Hangings Crucifixions Honor killings Genocide …

Because it is so simple to manipulate the signals of trust on social media platforms, bots and fake accounts become powerful tools in directing political discourse. Bots range from simple to technologically-advanced pieces of software that not only post content independently but interact with real users.

After the 2016 US presidential election there was much attention focused on social media bots active on Twitter. In mainstream media, bots have been discussed as a tool deployed by states and well-funded organisations, such as political campaigns, to influence elections. However, while state-sponsored disinformation campaigns present a danger, this one-sided framing of the issue overestimates the funds and expertise needed to manipulate social media using bots and other dishonest amplification techniques.

The simplest and cheapest types of bots are accessible and usable by private individuals and small organisations with little or no technical knowledge, and are readily available for purchase on various websites. These are simple bots that do not interact with other users and are relatively easy to detect. Their profiles might look genuine but their behaviour is often distinctly ‘bot-like’.

The simplest bots only follow and retweet other users, but the impact of inflating shares and follower numbers should not be underestimated. A user with a large number of followers is generally easier to trust and may seem more ‘legitimate’. Retweets, even by bots, increase a message’s reach and can potentially make a topic go ‘viral’ (see bot case study).

The more advanced bots often mix human control with artificial intelligence and are notoriously difficult to detect. “Jenna Abrams” is a case in point. But technological advancements in areas of machine learning and artificial intelligence, and increasing interest in these types of projects, mean that advanced bots are within the reach of tech-savvy individuals with minimal resources.

Case Study: Pamela Geller and social media

Pamela Geller, leading anti-Muslim activist

Pamela Geller is one of the most prominent individuals in the anti-Muslim movement in the USA. Banned from entering the UK, she runs a website that attracts 2.7 million visitors per month and distributes a daily newsletter with news items that have a clear anti-Muslim angle.

Her main Twitter account had 168K followers as of November 2017. The account is primarily used to drive traffic to her website, where articles about the danger of Islam are published 10-15 times a day.

But this is not her only account. Counter to Twitter’s Terms of Service,[33] [34] Geller continues to run at least one ‘mirror’. Her old account (@atlasshrugs) continues to post copies of each post from her main account, as if they were its own tweets. The strategy helps her stay online in case one account is suspended, while at the same time inflating the reach of her message.

Counter-jihadists together: Tommy Robinson (Stephen Lennon), Robert Spencer and Pamela Geller

More notably, we have identified that bots are used to amplify Geller’s messages on Twitter. At least 102 accounts copy each tweet from Geller’s account, similar to her own mirror accounts, except that these appear to belong to other individuals of varying backgrounds and only copy those tweets that do not mention other users. They tweet little or nothing other than Geller’s content, and do so within minutes of her posts. The tweets include the photo and link to Geller’s website (as opposed to more common behaviour on Twitter, which is to retweet and thereby acknowledge the original source of the tweet).

These accounts also exhibit many bot-type characteristics. In addition to exclusively posting content with links to Geller’s website, they do not mention any accounts, including Geller. They are highly synchronised, which means they post the same content at nearly exactly the same time.[35] Furthermore they often repeatedly post the very same tweet on the same day, making them incredibly active.
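One simple way to surface this kind of synchronisation is to group tweets by their content and measure how tightly the posting times cluster across accounts. The sketch below illustrates the approach; the tweet records and the two-minute window are hypothetical assumptions for illustration, not the thresholds used in our analysis.

```python
# Flag groups of accounts that post identical content within a short window.
from collections import defaultdict
from datetime import datetime

# (account, link_or_text, timestamp) records — hypothetical sample data
tweets = [
    ("acct1", "https://pamelageller.com/example", datetime(2017, 11, 13, 9, 0, 12)),
    ("acct2", "https://pamelageller.com/example", datetime(2017, 11, 13, 9, 0, 47)),
    ("acct3", "https://pamelageller.com/example", datetime(2017, 11, 13, 9, 1, 5)),
]

WINDOW_SECONDS = 120  # "within minutes" — illustrative threshold
by_content = defaultdict(list)
for account, content, ts in tweets:
    by_content[content].append((ts, account))

for content, posts in by_content.items():
    posts.sort()  # chronological order
    spread = (posts[-1][0] - posts[0][0]).total_seconds()
    if len(posts) >= 3 and spread <= WINDOW_SECONDS:
        print(f"Synchronised posting of {content}: "
              f"{len(posts)} accounts within {spread:.0f}s")
```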

Take the Twitter account @TPartyWoman as an example. On 13 November the account tweeted 34 links to Geller’s site. Some of the tweets were simply repetitions of previous tweets sent out minutes or hours earlier. The link to an article: ‘Muslim construction workers attack Jewish preschool near Tel Aviv’ was tweeted three times during the day.

Further analysis of this network (which accounts are followed and which in return follow those accounts back) reveals peculiar properties.

There is considerable overlap between the different users’ networks. Some pairs of accounts share as much as 45% of their followers, and of the accounts they follow, with each other. The probability of this type of overlap between two independent Twitter accounts with thousands of connections is remarkably low, and it is therefore an indication that they are controlled by the same ‘bot master’.
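The overlap measure here is essentially the Jaccard similarity of two accounts’ follower ID sets, which is straightforward to compute once the lists have been collected. A minimal sketch with made-up IDs (the accounts an account follows can be compared in exactly the same way):

```python
# Jaccard overlap between two accounts' follower sets (hypothetical IDs).
# Independent accounts with thousands of followers should overlap very little;
# values approaching 45% strongly suggest common control.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

followers_x = {101, 102, 103, 104, 105, 106}
followers_y = {104, 105, 106, 107, 108}

print(f"Follower overlap: {jaccard(followers_x, followers_y):.0%}")
```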

There is no way of telling who this might be, as Twitter does not provide any identifiable information, but that these bots inflate and amplify the message of Geller’s anti-Muslim Twitter account and website is unquestionable.

The practical impact can of course be questioned. As many of these accounts follow each other, the effect of each account’s activity is somewhat mitigated. But keeping in mind that there are at least 100 active accounts with an average of 2,314 followers (at least some of whom are likely genuine), the accounts help extend the reach of Geller’s content to at least 230,000 additional accounts per article. The tactic is a simple way to amplify Geller’s message and increase the traffic to her site, which in turn generates advertising revenue and potential sales of her newly-released book.

Bots case study

What have become known as ‘social media bots’ are social media accounts that are partly or fully controlled by software rather than a human. The idea is to shift discourse and amplify messages at a much larger scale than would otherwise be possible if real people were broadcasting the message.

Bots can be effective where it is difficult to establish the true identity behind a social media profile, especially on Twitter, where there is very limited identifiable and verifiable information.

Academic research on bots is plentiful. Several large-scale projects are actively looking to find algorithms to detect bots on Twitter in order to more accurately estimate the influence they have. Some of these methods have shown progress but the varying kinds of bots, as well as the quickly developing technology, make accurately finding bots challenging. A report from the Oxford Internet Institute states that “political actors are adapting their automation in response to our research”. Essentially it is a cat-and-mouse game between the social media platforms and academics on the one side and bot-makers on the other.

The economic and political interest in developing genuine-looking automated accounts is significant. The number of bots on social media platforms is estimated to be substantial, with observers estimating that between 9% and 15% of all accounts on Twitter are automated.[36] Based on Twitter’s own data – claiming 330 million active accounts – this translates to between 29.7 million and 49.5 million bots active on the platform.[37]

Buying influence investigation

The reach of a tweet is determined by the number of followers and how much it is spread by other users. In turn, this is dependent on the message itself and trust in the account that published the message. There is a lot to be gained from having a large reach on social media and there exists a market for inflating and expanding social media reach.

One of the simplest ways of amplifying one’s message is to buy followers and retweets. This is done through services specifically designed for that purpose. On these websites, a user buys a number of followers or retweets from bot accounts, which then follow and retweet the chosen account. These bots usually don’t have many real followers themselves because they post little original content: often simply retweets of a strange array of advertisements, frequently in multiple languages. It is therefore tempting to dismiss ‘bought’ followers and retweets because of the low quality of the accounts and their retweets.

But we should not assume that the practice has no impact. As part of this report we set up multiple Twitter accounts to examine the possible influence of bought followers and shares of content.

The simplest and most accessible way to make use of bots is to buy followers. This does not give access to the accounts themselves: they cannot be directed to tweet in a particular way or be controlled at all. But it allows one to inflate one’s number of followers. And it is cheap. One thousand (1,000) followers can be bought for between $5 and $15 depending on the outlet; differences in quality and guarantees of retention influence the price.

Using a mix of different platforms, we bought 2,500 followers each for our two newly-created fake accounts, posing as an online gamer and a travel enthusiast, to assess the potential of bought influence and characteristics of artificially inflated followers.

Buying followers is generally a very easy process involving paying with a PayPal account (less established and generally cheaper sites encourage customers to pay with digital currency Bitcoin). The followers then arrive gradually over the next 48 hours.

Over time the accounts gained a significant number of followers: more than we had bought. After a week, the accounts had gained 35% – 45% more followers than we had paid for. Many of these might have been bots but, notably, what seemed like real users started to respond to our tweets with genuine, reasonable answers. This indicates that simply buying fake followers and retweets increases the possibility of reaching and interacting with real users, not only in theory but in practice.

Reply to our fake account

As discussed above, the cat-and-mouse game between bot-detection algorithms and bot makers leads to fast innovation in this area. Guides on how to “spot a bot”[38] are easy to come across and can be helpful in some cases. But there is no perfect way to detect all kinds of bots. Political bots have different characteristics from ‘follower’ bots for example.
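Such guides typically score accounts against a handful of behavioural red flags. The sketch below is a crude heuristic of that kind; every feature and threshold is an illustrative assumption rather than a validated model, and serious tools such as Botometer (below) are far more sophisticated.

```python
# Crude "spot a bot" heuristic (illustrative thresholds, not a validated model).
def bot_suspicion_score(account: dict) -> int:
    """Count behavioural red flags for a single account."""
    flags = 0
    if account["tweets_per_day"] > 50:          # inhumanly high posting rate
        flags += 1
    if account["account_age_days"] < 30:        # very young account
        flags += 1
    if account["duplicate_tweet_ratio"] > 0.5:  # mostly repeated content
        flags += 1
    if account["followers"] < 10 and account["following"] > 1000:  # follow spam
        flags += 1
    return flags

suspect = {"tweets_per_day": 140, "account_age_days": 12,
           "duplicate_tweet_ratio": 0.8, "followers": 4, "following": 2200}
print(bot_suspicion_score(suspect))  # -> 4 red flags
```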

When inspecting the total of more than 5,000 followers and retweeters we bought for this report, the results were not encouraging. Indiana University’s Botometer[39], an academic project using machine learning to identify bots, gave our recently created fake accounts and the vast majority of their followers a passing grade. In some cases they scored better than the personal accounts of the researchers, exemplifying how difficult it is to accurately detect social media manipulation at scale.

Buying followers and retweets does not, however, give access to the actual accounts or the software that controls them; the user simply pays for the services of the bots. Building your own bots is somewhat more expensive, but not drastically so. There are two requirements for creating a Twitter bot.

First, software to automate or control many Twitter accounts at once is required. Such software is easily available: both commercial and completely free programmes exist that allow a user to control thousands of accounts and automate their behaviour to varying degrees. One of the commercially available ones is called TweetAttacksPro.

Secondly, Twitter accounts themselves are needed. Twitter attempts to block bulk account creation, making it difficult to register accounts in large numbers. Furthermore, to appear trustworthy, older accounts with followers and a filled timeline are preferable, as new accounts are often seen as less credible.

For this purpose, black markets for social media accounts exist on both the dark web and the surface web. On these forums, ‘aged accounts’ – accounts created several years ago – are for sale for as little as £3-5 each, and even more cheaply in bulk. Many users offer hundreds of accounts for sale in packages.

Buying accounts is easier than you think

It is also possible to specify the characteristics of the accounts, e.g. their age and the types of topic they have previously tweeted about. Even verified accounts with hundreds of thousands of followers are up for sale, for between $500 and $1,000. For Facebook, one can specify the origin, gender and age of accounts and buy them with genuine-looking timelines and friend lists.

These services thereby obviate the need for the end user to actually create the bot account themselves, and as some of these accounts have been built over a long period of time or are stolen or repurposed real accounts, detecting them becomes incredibly difficult.


Footnotes

[1] Centre for the Analysis of Social Media, Demos, 2016. Islamophobia on Twitter: March to July 2016, Available at: https://www.demos.co.uk/wp-content/uploads/2016/08/Islamophobia-on-Twitter_-March-to-July-2016-.pdf.

[2] Tell MAMA, 2017. A Constructed Threat: Identity, Intolerance and the Impact of Anti-Muslim Hatred, Available at: https://tellmamauk.org/wp-content/uploads/2017/11/A-Constructed-Threat-Identity-Intolerance-and-the-Impact-of-Anti-Muslim-Hatred-Web.pdf.

[3] Tell MAMA, 2017. A Constructed Threat: Identity, Intolerance and the Impact of Anti-Muslim Hatred, Available at: https://tellmamauk.org/wp-content/uploads/2017/11/A-Constructed-Threat-Identity-Intolerance-and-the-Impact-of-Anti-Muslim-Hatred-Web.pdf.

[4]http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/home-affairscommittee/hate-crime-and-its-violent-consequences/oral/48836.pdf.

[5] Centre for the Analysis of Social Media, Demos, 2016. Islamophobia on Twitter: March to July 2016, Available at: https://www.demos.co.uk/wp-content/uploads/2016/08/Islamophobia-on-Twitter_-March-to-July-2016-.pdf.

[6] Runnymede, 2017. Islamophobia, Available at: https://www.runnymedetrust.org/uploads/Islamophobia%20Report%202018.pdf.

[7] Mills, C., Freilich, J. & M Chermak, S., 2015. Extreme Hatred: Revisiting the Hate Crime and Terrorism Relationship to Determine Whether They Are “Close Cousins” or “Distant Relatives.” 63.

[8] Mills, C., Freilich, J. & M Chermak, S., 2015. Extreme Hatred: Revisiting the Hate Crime and Terrorism Relationship to Determine Whether They Are “Close Cousins” or “Distant Relatives.” 63.

[9] Cuerden, G. & Rogers, C., 2017. Exploring Race Hate Crime Reporting in Wales Following Brexit. Review of European Studies, 9(1), p.158.

[10] https://www.hopenothate.org.uk/research/fear-and-hope-2017/

[11] https://www.similarweb.com/website/gatesofvienna.net

[12] https://www.similarweb.com/website/pamelageller.com  

[13] Mills, C., Freilich, J. & M Chermak, S., 2015. Extreme Hatred: Revisiting the Hate Crime and Terrorism Relationship to Determine Whether They Are “Close Cousins” or “Distant Relatives.”.

[14] Data & Society, 2017. Media Manipulation and Disinformation Online. pp.1–106. Available at: https://datasociety.net/output/media-manipulation-and-disinfo-online/.

[15] Anglin, A., 2017. Sweden: McDonald’s Sends Out Ads in Arabic. Daily Stormer. Available at: https://dstormer6em3i4km.onion.link/sweden-mcdonalds-sends-out-ads-in-arabic/ [Accessed November 15, 2017].

[16] Anon, 2017. Swedish McDonald’s Learns to Speak Arabic Due to Popular Demand. Sputnik. Available at: https://sputniknews.com/art_living/201705031053224840-swedish-mcdonalds-arabic/ [Accessed October 18, 2017].

[17] Laila, C., 2017. SWEDENISTAN – McDonald’s in Sweden Sends Out Mailers in Arabic to Accommodate Muslim Migrants. The Gateway Pundit. Available at: http://www.thegatewaypundit.com/2017/04/swedenistan-mcdonalds-sweden-sends-mailers-arabic-accommodate-muslim-migrants/ [Accessed November 2, 2017].

[18] Pei, Sen et al., 2014. Searching for superspreaders of information in real-world social media. Scientific Reports, 4(1).

[19] Laila, C., 2017. SWEDENISTAN – McDonald’s in Sweden Sends Out Mailers in Arabic to Accommodate Muslim Migrants. The Gateway Pundit. Available at: http://www.thegatewaypundit.com/2017/04/swedenistan-mcdonalds-sweden-sends-mailers-arabic-accommodate-muslim-migrants/ [Accessed November 2, 2017].

[20] https://www.hopenothate.org.uk/research/investigations/breitbart-report/

[21] Allcott, H. & Gentzkow, M., 2017. Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, 31(2), pp.211–236.

[22] Data & Society, 2017. Media Manipulation and Disinformation Online. pp.1–106. Available at: https://datasociety.net/output/media-manipulation-and-disinfo-online/

[23] Schreckinger, B., 2017. “Real News” Joins the White House Briefing Room. Politico. Available at: https://www.politico.com/magazine/story/2017/02/fake-news-gateway-pundit-white-house-trump-briefing-room-214781 [Accessed November 3, 2017].

[24] Freedom House, 2017. Manipulating Social Media to Undermine Democracy, Available at: https://freedomhouse.org/report/freedom-net/freedom-net-2017.

[25] Ben M Tappin, van der Leer, L. & McKay, R.T., 2017. The heart trumps the head: Desirability bias in political belief revision. Journal of Experimental Psychology: General, 146(8), pp.1143–1149.

[26] Pennycook, G. & Rand, D., 2017. Who Falls for Fake News? The Roles of Analytic Thinking, Motivated Reasoning, Political Ideology, and Bullshit Receptivity. pp.1–72.

[27] HOPE not hate’s Breitbart report, page 29 https://www.hopenothate.org.uk/research/investigations/breitbart-report/

[28] Tell MAMA, 2017. The truth behind the photo of the Muslim woman on Westminster Bridge. Available at: The truth behind the photo of the Muslim woman on Westminster Bridge [Accessed October 12, 2017].

[29] Mortimer, C., 2017. Man who posted image of Muslim woman “ignoring Westminster terror victims” was a Russian troll. The Independent. Available at: http://www.independent.co.uk/news/uk/politics/man-muslim-woman-london-terror-attack-phone-russian-troll-identity-a8052961.html [Accessed November 17, 2017].

[30] Geller, P., 2017. UK: Muslim President of the National Union of Students rails against “islamophobia” and ‘racism’ in wake of attack, no mention of victims. Geller Report. Available at: https://pamelageller.com/2017/03/uk-muslim-student-leader-attacks-in-wake-of-attack-islamophobia.html/ [Accessed October 13, 2017].

[31] See The International Alternative Right: From Charlottesville to the White House http://alternativeright.hopenothate.com

[32] Weill, K., 2017. Alt-Right Frames Protesters as Pedophiles With Fake NAMBLA Sign. Available at: https://www.thedailybeast.com/alt-right-frames-protesters-as-pedophiles-with-fake-nambla-sign [Accessed November 23, 2017].

[33] Twitter, The Twitter Rules. Available at: https://support.twitter.com/articles/18311.

[34] According to Twitter: “Some of the factors that we take into account when determining what conduct is considered to be spamming include: if you post duplicative or substantially similar content, replies, or mentions over multiple accounts or multiple duplicate updates on one account, or create duplicate or substantially similar accounts”

[35] Woolley, S., 2017. Resource for Understanding Political Bots. Oxford Internet Institute. Available at: https://www.oii.ox.ac.uk/blog/resource-for-understanding-political-bots/ [Accessed October 12, 2017].

[36] Varol, O. et al., 2017. Online Human-Bot Interactions: Detection, Estimation, and Characterization.

[37] Twitter, 2017. Selected Company Metrics and Financials. Twitter. Available at: http://files.shareholder.com/downloads/AMDA-2F526X/5580698337x0x961126/1C3B5760-08BC-4637-ABA1-A9423C80F1F4/Q317_Selected_Company_Metrics_and_Financials.pdf [Accessed November 15, 2017].

[38] Shaffer, K., 2017. Spot a Bot: Identifying Automation and Disinformation on Social Media. Data For Democracy. Available at: https://medium.com/data-for-democracy/spot-a-bot-identifying-automation-and-disinformation-on-social-media-2966ad93a203 [Accessed November 24, 2017].

[39] Available at: https://botometer.iuni.iu.edu
