
Tech & Digitalisation

Social Media Futures: What Is Brigading?


Explainer | 10th March 2021

This article is part of the tech and public policy team’s series on understanding the futures of social media, including trends in regulation, innovation, safety and harm. Here we explore the phenomenon of brigading, a form of coordinated internet behaviour with a significant impact on online safety.


Chapter 1

Introduction

While many policymakers and politics-watchers worry about the effects of nationalist and foreign disinformation networks and automated bots on citizen confidence and democratic legitimacy, other bad actors are coordinating abuse and harassment on a much smaller and less obvious scale. “Brigading” is an increasingly common pattern of online behaviour worth understanding, as it is starting to overwhelm community managers, administrators, communications officers and private individuals.

“Brigading” is a term that originated on Reddit for a coordinated attack by users of an antagonistic subreddit (a forum dedicated to a particular topic). The brigade would privately agree to “downvote” comments, either at random or on a targeted basis, to deprioritise them in users’ feeds and effectively censor them. The meaning of the term then expanded to cover all coordinated voting behaviour designed to make something or someone seem more or less popular than they actually are, and it now covers all coordinated abusive engagement online. This engagement can come in the form of retweets, comments, quote retweets, email campaigns and more.
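To see why coordinated downvoting amounts to de facto censorship, consider a minimal sketch of a score-ranked comment feed. This is purely illustrative: the vote counts and ranking rule are assumptions, not Reddit’s actual system.

```python
# Minimal sketch of why coordinated downvoting works as de facto
# censorship in a score-ranked feed. Purely illustrative: the vote
# counts and ranking rule are assumptions, not Reddit's actual system.

def net_score(upvotes: int, downvotes: int) -> int:
    return upvotes - downvotes

comments = {
    "targeted comment": net_score(upvotes=40, downvotes=5),
    "ordinary comment": net_score(upvotes=12, downvotes=2),
}

# A brigade of 50 accounts each adds a single downvote to the target.
comments["targeted comment"] -= 50

# Ranked by score, the targeted comment now sits at the bottom of the
# feed, where most readers will never see it.
for text, score in sorted(comments.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:+d}  {text}")
```

Because most readers never scroll past the top of a ranked feed, a few dozen coordinated votes can make a comment effectively invisible, even though no single vote breaches the platform’s rules.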



Chapter 2

Types of Brigading

Sock Puppetting

“Sock puppetting” is the use of fake accounts to make a user’s position seem more popular than it is, or to stage false arguments that drive polarisation. Many brigades use sock puppets to increase the volume of their attacks and to post content that might get their main account banned or cause them problems if attached to their usual online identity.

Ratioing

“Ratioing” is something that can occur organically on Twitter but is often a coordinated abusive action. If a tweet attracts more replies than retweets or likes, that is usually, though not always, a sign that the original post has been poorly received. A brigade sees a large ratio as a victory and reports it on other platforms, for instance message boards or private chats dedicated to their group or topic of interest.
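As a rough illustration, the “ratio” is just a folk heuristic comparing engagement counts. The sketch below encodes one common reading of it; the threshold and counts are illustrative assumptions, not a platform-defined metric.

```python
# Sketch of the folk "ratio" heuristic: a tweet whose replies clearly
# outnumber its retweets and likes has usually been poorly received.
# The threshold and engagement counts below are illustrative assumptions.

def is_ratioed(replies: int, retweets: int, likes: int) -> bool:
    positive_engagement = max(retweets + likes, 1)  # avoid division by zero
    return replies / positive_engagement > 1.0

print(is_ratioed(replies=2400, retweets=150, likes=600))  # True: likely ratioed
print(is_ratioed(replies=80, retweets=900, likes=5000))   # False: well received
```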

Quote Retweets

Quote retweets can be used as an updated version of the ratio and are more visible to both the target and the wider Twitter audience. A comment can now be added to a retweet, and those comments can be seen by the followers of the commenter, by the original tweeter and by anyone who looks at the quote retweets on a tweet with high engagement. This can be used to drive further harassment, both organic and coordinated; it is one of the most common harassment techniques currently in use because it is so effective and the target is often caught unawares. The original poster is not always notified about replies to a popular tweet, particularly when they come from small accounts, and not everyone who sees the original tweet sees the replies; quote tweets have much bigger reach.

Sealioning

“Sealioning” is a harassment technique in which a participant in an online discussion bombards another participant or group with incessant bad-faith questions to disrupt a debate and wear the target down. These questions are often asked politely and repeatedly to make the target appear unreasonable or obstructive, and the same questions tend to recur for particular topics. For instance, “What is a woman?” and “What rights don’t trans people have?” are used to harass trans-inclusive individuals and organisations, and “Why do you want to silence discussion of Israel?” and “What about support for Palestinians?” are used to harass people who talk about antisemitism. This technique is often combined with ratioing and quote tweeting.

Mass Reporting

“Mass reporting” is what happens when a brigading group tries to get users who are members of marginalised groups suspended from an online platform by collectively reporting their posts. The group often understands how the platform’s algorithms automatically remove reported content that meets certain criteria, and will run searches to find old or humorous content that can trigger sanctions if reported.
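A highly simplified sketch of the kind of threshold rule such groups exploit is shown below. The rule, threshold and account names are hypothetical; no real platform’s moderation logic is represented.

```python
# Hypothetical sketch of a report-threshold rule and how coordinated
# reporting games it. The threshold and rule are assumptions; no real
# platform's moderation logic is represented.

AUTO_REMOVE_THRESHOLD = 10  # assumed: distinct reports needed for removal

def should_auto_remove(reporters: set[str]) -> bool:
    # A naive rule counts distinct reporting accounts; it cannot tell
    # ten genuinely offended users from ten coordinated brigade members.
    return len(reporters) >= AUTO_REMOVE_THRESHOLD

brigade = {f"brigade_member_{i}" for i in range(25)}
print(should_auto_remove(brigade))  # True: the reported post is removed
```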

Astroturfing

“Astroturfing” is a marketing technique that can also be used in a coordinated way by brigades. It involves creating fake posts on a forum or in a comment section that are designed to look like genuine grassroots interest in a topic. Brigades may pretend to be former or current customers of a brand or organisation, or members of a group to which they have no connection, in order to harass their target.


Chapter 3

Outcomes of Brigading

The main effects of brigading are to harass members of marginalised communities and anyone who speaks up on their behalf, such as a charity or journalist using language inclusive of transgender people, or a brand featuring gay or black people in its advertising videos. The sheer volume of content a brigading group can generate can overwhelm anyone managing the targeted individual’s or organisation’s online accounts, and the behaviour can spill over into offline effects such as telephone calls, harassing letters and demonstrations if the organisers incite them.


Chapter 4

Artificial Behaviour

Brigades can be difficult to spot for those unfamiliar with the technique or with the issue being targeted, particularly when attacks are directed at high-follower accounts rather than users with a lower profile (both are common). The tweets, comments and emails usually come from real accounts, even attached to real names and photographs, which makes the behaviour seem completely organic compared with bot or content-farm activity. It can also be difficult to distinguish from behaviour that has not been coordinated: for example, when a high-profile individual or brand says something particularly controversial or incorrect and many people respond, or when a user known to be aggressive online quote tweets smaller accounts, encouraging their followers to pile on to the target. Well-known people across the political spectrum do both of these things, but they are also subjected to brigading attacks. Some topics are particularly prone to both coordinated brigading attacks and mass overreaction when big accounts engage with them, such as transgender rights and inclusion, antisemitism on the left, Julian Assange and WikiLeaks, and Modern Monetary Theory. Users search those topics and follow specific accounts in order to engage with debate, but organised groups also coordinate harassment when a member spots something with which they disagree.


Chapter 5

Breaking Up Brigades

There are three main ways to tackle this problem, beyond ensuring that anyone who manages online accounts, whether their own or an organisation’s, is aware of it and can react appropriately.

The first requires agreement from the online platforms, which could develop better tools for content moderation; these tools could spot brigading even when it is, as is so often the case, organised in a private group chat or by posting links and information on a different platform such as a Mumsnet forum, TikTok video or Twitch stream. Platforms like Twitter are aware that some users manipulate the platform via multiple accounts, quote tweeting and other behaviours, but those users are only rarely sanctioned for these specific harassing behaviours, particularly if the organisers are verified or very popular accounts.
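One plausible moderation signal, sketched below, is temporal clustering: replies converging on a single target in an unusually tight burst from accounts that have never interacted with it before. The window, threshold and data shapes are assumptions for illustration, not any platform’s actual detection system.

```python
# Illustrative heuristic for one brigading signal: a burst of replies
# to a single target, within a short window, from accounts that have
# never interacted with it before. Window, threshold and data shapes
# are assumptions, not any platform's actual detection system.

from datetime import datetime, timedelta

BURST_WINDOW = timedelta(minutes=10)  # assumed window size
BURST_THRESHOLD = 30                  # assumed count of first-time repliers

def looks_coordinated(replies: list[tuple[datetime, str]],
                      prior_interactors: set[str]) -> bool:
    """Flag a tight burst of replies from first-time repliers."""
    first_timers = sorted(t for t, user in replies if user not in prior_interactors)
    for i, start in enumerate(first_timers):
        in_window = sum(1 for t in first_timers[i:] if t - start <= BURST_WINDOW)
        if in_window >= BURST_THRESHOLD:
            return True
    return False

# 40 first-time repliers arriving ten seconds apart trip the heuristic.
start = datetime(2021, 3, 10, 12, 0)
burst = [(start + timedelta(seconds=10 * i), f"acct_{i}") for i in range(40)]
print(looks_coordinated(burst, prior_interactors=set()))  # True
```

Organic pile-ons triggered by a viral tweet can look similar, so in practice a signal like this would need combining with others, such as overlapping group membership or shared off-platform links.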

The second is a policy-based approach. Coordinated abusive behaviour could be defined in law as a specific form of harassment. Hate speech is covered under existing laws, up to a point, but coordinated online or offline harassment, which may or may not cross the hate-speech line, is not.

The third, more complete, response would combine both approaches and also make more of an attempt, preferably via international cooperation, to regulate the online platforms where these behaviours take place. As many of these activities span more than one platform, whether for the brigading itself or for coordinating action, this response would mean tackling the problem through greater collaboration between platforms, both on intelligence sharing and on proactive oversight.

One way forward could be for platforms and services to share a single oversight board examining the issue of coordinated attacks and abuse. This could happen through self-regulation or as part of new national and supranational regulation. The board should be diverse, including a wide variety of perspectives and insight from marginalised groups, and offer cross-platform oversight and intelligence sharing. Common infrastructure providers such as Amazon Web Services could also have a place on the board, given that they are increasingly making controversial content decisions. This form of collaboration would respect that different platforms and services have their own terms and conditions and engagement styles, but it would allow them to broadly agree on codes of conduct and achieve the real goal: protecting people online from harm, wherever they are.

 

Phoenix CS Andrews is a writer, researcher and broadcaster with expertise in digital cultures, online communities, politics and fandom.
