
Tech & Digitalisation

Social Media Futures: How to Reconcile Anonymity, Abuse and Identity Online


Explainer, 15th March 2021

This article is part of the tech and public policy team’s series on understanding the future of social media, including trends in regulation, innovation, safety and harm. Here we set out how banning online anonymity is not the answer to tackling abuse, but digital credentials could help make social media safer. Get in touch at @andrewjb_ or @maxjb.

TL;DR

  • Abuse, hate speech and racism are a blight on our online world, but banning anonymity may do little to eliminate them while undermining the positive side of the internet (and not just the protection and privacy afforded to vulnerable people).

  • Social platforms could provide better safety tools based on a secure, user-centric digital identity, where users share distinct ‘attributes’, like ‘real person’ or ‘over 13’, without revealing identifying information.

  • Users could potentially block out abusive, harmful content from fake/‘burner’ accounts that aren’t verified, creating an environment that disincentivises abusive behaviour.


Chapter 1

Online anonymity: encouraging harmful content or integral to the internet’s benefits?

The standard defence of online anonymity is familiar: it protects people exposing repression, corruption and hate, and allows stigmatised and abused communities to find safety and support when revealing their real-world identity could expose them to harm. These are all good reasons to preserve anonymity, but there is also a lighter, more cultural case. Anonymity enables people to join new communities and explore the full creativity, levity and absurdity of today’s internet without those identities overlapping or contradicting each other. As writer Aaron Z. Lewis says, “from alt Twitter accounts to finstas to private Snap stories, Very Online people are incubating new models of identity and selfhood”. Crucially, this benefit applies as much to users on Flickr or the comment sections of online newspapers as it does on Twitter, Instagram or Snapchat.

Setting out this case is important context to recent calls to ban online anonymity in the UK. In the wake of horrific racism and abuse on social media, many with a public profile – from Reece James, Harry Maguire, Kevin Pietersen and football bodies including the FA and Kick It Out, to Paris Lees, BBC/Channel 4 journalists and MPs – have understandably questioned the role of anonymity in enabling online abuse. Now, a petition launched by Katie Price has received over 100,000 signatures and will be considered for debate in Parliament.

What these campaigns get right – and what their critics can fail to acknowledge – is that anonymity has clearly created a sense of perceived immunity when sharing abuse, hate speech and racism. However, social networks that already have ‘real names’ policies (e.g. Facebook) are not free from abuse, so real-world identity is not the silver bullet it may first appear.

A better way forward should explore two issues: firstly, the lack of any verification of identity on social networks beyond high-profile accounts, which helps embed the misplaced idea that social media communication is a ‘wild west, free-for-all’, and secondly, the inadequate tools available to potential victims of abuse to reduce the risk of harm to themselves. Instead of waiting until victims have read a DM, post or tag – by which time the harm has been done – we should ask how social media design can disincentivise sharing harmful content and limit it from reaching others. ‘Identity’ can play an important role in improving platform design, but it should be an upgraded, digital approach to identity, not an analogue model of identity that prejudices anonymity.


Chapter 2

How could an upgraded approach to ID embed safety into social media without undermining privacy?

Before the internet, ‘identity’ meant driving licences, passports or ID cards: physical documents that collect lots of sensitive personal information in one place. When you bought alcohol, you’d reveal not only your full date of birth (instead of just proving you were over 18) but also your home address or even your passport number. Applying this model of identity to the internet would be a mistake. It would undermine important rights while also forcing unnecessary and intrusive sharing of personal data.

Digital identity, however, can be different. Usually stored in a secure, privacy-protecting mobile app that only you can access and control, this information can be split up into individual credentials. This means you can demonstrate you’re ‘over 18’, your citizenship, or that you’ve passed a criminal record check, without also revealing other sensitive data.
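To make this concrete, here is a minimal sketch of per-attribute credentials with selective disclosure. Everything in it is illustrative: a real deployment would use public-key signatures (for example Ed25519) or a zero-knowledge scheme rather than the standard-library HMAC standing in for the issuer’s signature below, and verifiers would not hold any issuer secrets.

```python
# Illustrative sketch only: HMAC stands in for an issuer signature, so the
# verifier here shares the issuer's key. Real systems use public-key or
# zero-knowledge schemes so that verifiers hold no secrets.
import hashlib
import hmac

ISSUER_KEY = b"demo-issuer-secret"  # stands in for the issuer's signing key

def issue_credential(subject: str, attribute: str, value: str) -> dict:
    """The issuer signs each attribute separately, so each can be shown alone."""
    message = f"{subject}|{attribute}|{value}".encode()
    sig = hmac.new(ISSUER_KEY, message, hashlib.sha256).hexdigest()
    return {"subject": subject, "attribute": attribute, "value": value, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """A verifier checks the issuer's signature over just this one attribute."""
    message = f"{cred['subject']}|{cred['attribute']}|{cred['value']}".encode()
    expected = hmac.new(ISSUER_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

# The wallet holds one credential per attribute and presents only what is asked:
wallet = [
    issue_credential("pseudonym-123", "over_18", "true"),
    issue_credential("pseudonym-123", "real_person", "true"),
    # date of birth, home address and so on never leave the wallet
]
presented = next(c for c in wallet if c["attribute"] == "over_18")
assert verify_credential(presented)  # proves 'over 18' and nothing else
```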

With this digital identity infrastructure in place, social media services could verify specific pieces of information – like “are you a real person?” or “are you old enough to use the service?” – without forcing users to reveal any identifying information. In turn, as identity expert Dave Birch has argued, this could enable several different tiers of ‘verified’ accounts, rather than today’s confusing status quo.
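As an illustration, a platform could derive an account’s tier from which credentials it has verified. The tier names and attribute labels below are our assumptions for the sketch, not Birch’s specific proposal or any platform’s scheme:

```python
# Hypothetical verification tiers; the names and thresholds are assumptions,
# not a specification from any platform.
from enum import IntEnum

class Tier(IntEnum):
    UNVERIFIED = 0   # today's default: no checks at all
    REAL_PERSON = 1  # proved 'real person' but still pseudonymous
    AGE_CHECKED = 2  # additionally proved an age attribute like 'over 13'
    IDENTIFIED = 3   # verified real-world identity (today's 'blue tick')

def account_tier(verified_attributes: set[str]) -> Tier:
    """Map the set of verified credential attributes to a tier."""
    if "public_identity" in verified_attributes:
        return Tier.IDENTIFIED
    if {"real_person", "over_13"} <= verified_attributes:
        return Tier.AGE_CHECKED
    if "real_person" in verified_attributes:
        return Tier.REAL_PERSON
    return Tier.UNVERIFIED

print(account_tier({"real_person", "over_13"}).name)  # AGE_CHECKED
```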

Figure 1: Proposed tiers of verification for social media accounts

This could potentially help tackle abuse by giving users much greater control over the content they see. On Twitter, people can already hide content from accounts without a photo or verified email, but these tools aren’t prominent and still leave significant room for abuse to reach victims. However, just as today’s ‘blue tick’ verified users can block out non-verified users, a more effective approach could allow everyone to curate their timelines only to include accounts verified to the tier they choose. While this wouldn’t remove abusive content at source, it could significantly improve protections for those on the receiving end while also creating a disincentive for abuse.
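The sketch below shows what that reader-side filter could look like, assuming each post carries its author’s verification tier; the types and names are hypothetical:

```python
# A hypothetical reader-side filter: hide posts from accounts below the
# verification tier the reader has chosen (0 = unverified, 3 = identified).
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    author_tier: int
    text: str

def filter_timeline(posts: list[Post], min_tier: int) -> list[Post]:
    """Keep only posts whose author meets the reader's chosen tier."""
    return [p for p in posts if p.author_tier >= min_tier]

timeline = [
    Post("burner-account", 0, "abusive reply"),
    Post("pseudonymous-fan", 1, "great game today!"),
]
for post in filter_timeline(timeline, min_tier=1):
    print(post.author, "->", post.text)  # the burner account never appears
```

Crucially, this would be a per-user setting: the abusive post still exists at source, but it never renders for readers who opt into a higher tier, which is what creates the disincentive.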

The further question of whether social media services or the police can find out real-world information about users – either to tackle abuse or to support those in distress (e.g. for those sharing self-harm content) – remains challenging. The technical design of these systems shapes policy options. As we have set out elsewhere, for digital identities to have high take-up, strong security and user control are essential. This could mean that users could not be identified, even if the police had a warrant. But prioritising a system that allowed platforms or the police to deanonymise accounts could also lead to low take-up – in which case platforms would be no safer.

How to resolve this trade-off is an open question, so policymakers and the public are right to continue debating the desired end goal. But it may be better to accept these constraints if doing so enables a pragmatic solution – improved content tools underpinned by digital credentials – that makes platforms’ design much safer. In practice this could also be more effective at limiting online harms than a ban on anonymity, which acts only after the fact and would strip anonymity from a large class of entirely legitimate uses. Instead of breaking the foundations of online spaces, we need a modern approach to regulation that works with the reality of the internet rather than trying to fight it.


Chapter 3

A different way of tackling online abuse

The internet is a crucible for emerging cultures, and banning anonymity on social media would undermine the great benefits it brings. But services’ designs nevertheless expose people to unnecessary risk. Relying solely on prosecuting abuse is hard at internet scale (for both police and social media platforms), given all the nuances of speech, and by the time a case is brought the damage has already been done. The guiding principle should therefore be to explore how optionality provided by new online tools could tackle abuse – without sacrificing privacy or anonymity for anyone who wants it.

Table 1: Comparison of different models of identity verification on social media, by impact on elements of online abuse

| Impact on: | Status Quo | Digital Credentials | Full ‘Real Names’ ID |
| --- | --- | --- | --- |
| Anonymity | Protected | Protected | Broken |
| Abuse | Creates perception of immunity | Enables better tools to filter out abuse | Limited impact (e.g. Facebook) |
| When action is taken | After victim suffers abuse | Before abuse ever reaches victim | After victim suffers abuse |
| Justice/penalties | Hard to identify abusers, no other disincentives | Hard to identify abusers, but filtering tools still disincentivise abuse | Abusers fully identifiable, but prosecution limited by jurisdiction and practically challenging |
There is no single, easy answer to stopping abuse online, but the opportunity here is to help stop abusers from reaching their targets, in turn creating online spaces that disincentivise abusive behaviour. This means policymakers and social media platforms thinking in terms of roadblocks – e.g. controls for users, nudges designed into the platform, or the threat of timeouts, bans or prosecution – instead of road closures, e.g. completely removing anonymity online or requiring pre-approval of all posts. The latter would be both more intrusive and less effective. To make progress, a secure, user-centric digital identity, combined with new tools and features related to user verification, could be a crucial part of the design of a well-functioning social network.
