
Tech & Digitalisation

Social Media Futures: Interventions Against Online Unpleasantness


Commentary, 19th April 2022

This article is part of the tech and public policy team’s series on understanding the futures of social media, including trends in regulation, innovation, safety and harm. Here we explore the potential of design interventions that might promote more positive online environments.

In 2020, before the US presidential election, Facebook tweaked its news feed algorithm. The change downgraded hyper-partisan outlets associated with polarisation and fake news; the aim was to minimise content deemed “bad for the world”. Facebook insiders allegedly called this “the nicer news feed”. Some asked if it could stay.

Niceness is a social issue. Those who feel uncomfortable in a space are less likely to contribute to it. The UK’s recently published Online Safety Bill (OSB) notes the risk of “creating toxic online environments and negatively impacting a user’s ability to express themself online.” The Bill focuses on ‘harmful’ behaviours, a term which has already prompted substantial debate. Separately from that debate, we should also consider behaviours which are not ‘harmful’ in the manner of hate speech or disinformation, but which can nonetheless negatively affect online environments. Let us call this ‘unpleasantness’.

Unpleasantness could include, for example, low-level aggression and insults; or behaviour such as ‘quote dunking’, sharing content purely to attract criticism onto its author.  It could also include behaviours largely faced by particular groups, but which fall short of hate speech; for instance the unwanted propositions or demeaning comments which must be navigated by many women online (software engineer and tech influencer Chloe Condon has coined the phrase “being a woman on the internet is a full time job” to describe such experiences).  Context can also create unpleasantness – for example floods of negative posts during particular events, whether elections or a pandemic, can en masse be an unpleasant experience for many. 

Think of unpleasantness as a residual category left over after we put some behaviours into the ‘harmful’ category. Unpleasantness can still negatively affect participation – but we do not want to treat it in the same manner as harmful content. Due to the breadth and subjectivity of unpleasantness, that would raise serious freedom-of-speech issues (not to mention practical problems). Moreover, unpleasantness can play important positive roles. Is continually posting updates during crises providing helpful information, or inciting panic? Is a commenter who refers to political opponents as ‘cruel’ too aggressive, or expressing justifiable anger? Is telling the commenter they need to ‘calm down’ fostering positive debate, or tone policing?

We should not look to the OSB, or similar legislation, to mitigate the negative effects of unpleasantness. But we should consider how technology, in conjunction with wider social efforts, could help. The problems of unpleasantness are as old as human interaction, and will not be solved by technology. Nonetheless, technology is supposed to help humans solve problems – and there are already developments worth considering further.

Positivity Prompts

Multiple platforms, including Instagram, YouTube, Twitter, and TikTok, have experimented with text prompts which nudge users away from offensive language. These appear on posts which breach a certain ‘toxicity score’, algorithmically assigned by software such as Google’s Perspective API. Research suggests that these prompts do result in users reconsidering their language, and moreover can reduce both the toxicity of a user’s future comments and of other comments in conversations they are part of.
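As a rough illustration, a prompt gate can be as simple as scoring a draft comment against Perspective’s documented comments:analyze endpoint and showing a nudge only above a threshold. In the sketch below, the API key placeholder, the 0.7 threshold and the prompt wording are illustrative assumptions rather than any platform’s actual configuration.

```python
# Illustrative sketch: gate a pre-publication nudge on a Perspective API
# toxicity score. The API key placeholder, the 0.7 threshold and the prompt
# wording are assumptions, not any platform's actual values.

import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"    # hypothetical placeholder
PROMPT_THRESHOLD = 0.7      # assumed cut-off; a platform would tune this


def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0-1.0) for a piece of text."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def maybe_prompt(draft: str) -> str | None:
    """Return a nudge message if the draft breaches the threshold, otherwise None."""
    if toxicity_score(draft) >= PROMPT_THRESHOLD:
        return ("Some members of the community may find your comment "
                "inappropriate. Do you want to review it before posting?")
    return None
```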

Regarding further developments, there are three broad issues to consider:

  1. Expanding beyond toxic language.

Other unpleasant behaviours may be harder to classify; however, the downsides of erroneously providing prompts should be weighed against their potential benefits (particularly given that those downsides are less severe than, say, posts being algorithmically filtered out or downgraded).

  2. Explanatory prompts.

Current prompts often include broad anti-toxicity explanations, such as “some members of the community may find your comment inappropriate”.  More sophisticated prompts could summarise reasons why a particular behaviour could cause upset, with links to wider discussions.

  3. User experience.

This is somewhat in tension with (1) and (2). If prompts became more common, and also included an explanatory component, they could quickly become unpopular. They would also add friction which could discourage people from speaking freely. Such concerns could dissuade platforms from using prompts. But other, lower-friction design choices are possible. For instance, instead of pop-up text, a section of a comment box could change colour and pattern as a typed comment became more unpleasant (a minimal sketch of this idea follows below).
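As a minimal sketch of that lower-friction option, assuming the client already receives an unpleasantness-style score for the draft text, the comment box colour could simply be interpolated from the score; the particular colours here are arbitrary.

```python
# Minimal sketch of the lower-friction UI idea: map an unpleasantness score
# (however it is produced) to a border colour for the comment box instead of
# showing a pop-up. The colour endpoints and linear blend are arbitrary choices.

def score_to_hex(score: float) -> str:
    """Blend from a neutral grey (score 0.0) towards an alert red (score 1.0)."""
    score = max(0.0, min(1.0, score))
    neutral = (0xB0, 0xB0, 0xB0)   # assumed 'calm' colour
    alert = (0xE0, 0x4F, 0x2E)     # assumed 'reconsider' colour
    r, g, b = (round(n + (a - n) * score) for n, a in zip(neutral, alert))
    return f"#{r:02X}{g:02X}{b:02X}"


# A client could re-style the comment box as the user types:
print(score_to_hex(0.1))   # close to grey
print(score_to_hex(0.9))   # close to red
```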

Ultimately – and in contrast to addressing harmful content – prompts should not aim to restrict content, but rather to encourage reflection which could minimise unnecessary unpleasantness.

Positivity Filters

Another approach is to help users address their consumption of unpleasantness. An irony of modern social media is that it is supposed to hyper-personalise content to users’ tastes, and yet the tools available for minimising unpleasantness are blunt and/or burdensome: muting, disabling comments, unfollowing, reporting.

One can imagine switching social media to a ‘positive mode’, in which algorithms would do all the work instead. It is not easy for algorithms to accurately determine something as broad as ‘unpleasantness’, and there would invariably be misclassification. But by selecting positive mode, users indicate that they wish to skew the balance towards more zealous filtering. This technology could be developed by platforms themselves.[1] Alternatively it could be developed by external companies as middleware – “software and services that would add an editorial layer between the dominant internet platforms and internet users”, to borrow a definition used by Francis Fukuyama, Daphne Keller, and others.
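In its simplest form, such a ‘positive mode’ could be little more than a filtering layer that applies a stricter threshold when the mode is switched on. The sketch below assumes each feed item already carries an unpleasantness score from an upstream classifier; the field names and threshold values are illustrative, not those of any real platform.

```python
# Sketch of a middleware-style 'positive mode' filter. It assumes each feed
# item already carries an unpleasantness score from some upstream classifier;
# the field names and threshold values are illustrative.

from dataclasses import dataclass


@dataclass
class FeedItem:
    author: str
    text: str
    unpleasantness: float  # assumed score, 0.0 (benign) to 1.0 (very unpleasant)


def filter_feed(items: list[FeedItem],
                positive_mode: bool,
                default_threshold: float = 0.9,
                positive_threshold: float = 0.5) -> list[FeedItem]:
    """Keep items below the active threshold; positive mode filters more zealously."""
    threshold = positive_threshold if positive_mode else default_threshold
    return [item for item in items if item.unpleasantness < threshold]
```

Because this filtering happens per user, at display time, nothing is removed from the platform itself; other users’ feeds are untouched.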

Such an approach gives users freedom to create unpleasant content as they wish, while allowing others to minimise how much of it they see.  During bad news events in particular, it would permit information flow and social connections without forcing exposure to relentless negativity.  This approach is attuned to subjective preferences, including how a person feels at a particular time - the mode could be easily switched on and off if you’re having a bad day.  And behavioural signals from users with positive mode switched on could provide feedback for platforms to further refine negativity filters.
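That feedback loop could start very simply, for instance by logging weakly labelled examples – the text a positive-mode user chose to hide or dismiss – for later retraining of the negativity classifier. The storage format and field names in the sketch below are assumptions.

```python
# Hedged sketch of the feedback idea above: record actions taken by users in
# positive mode (e.g. hiding an item) as weak labels for later retraining of
# the negativity classifier. The CSV storage and field names are assumptions.

import csv
from datetime import datetime, timezone


def log_positive_mode_feedback(item_text: str, user_hid_item: bool,
                               path: str = "positive_mode_feedback.csv") -> None:
    """Append one weakly labelled example: the item text and whether the user hid it."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            int(user_hid_item),
            item_text,
        ])
```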

However, this approach could just be a sticking plaster.  It does little to prompt reflection on how we relate to one another online.  More pressingly, would it be good for society if technology helped people shield themselves from the negativity of the world?  The shared space of social media has been a vital tool for raising challenging issues – Pew Research found that 23% of US social media users had changed their views on issues such as Black Lives Matter because of social media.  Given that social media platforms benefit from providing better experiences for individual users, one can imagine a comfortable ‘positive mode’ fitting well into their business models.  But there is a risk, as with much to do with social media, that technology will outrun consideration of ethical and social implications.

Recommendations

As with much debate about technology, where someone stands on these developments will depend on broader social views. Anti-interventionists may be horrified if platforms try to ‘educate’ users to behave better. Social justice activists could be deeply concerned if technology helped people shield themselves from the suffering of others. Tech platforms cannot claim to be neutral, divorced from wider society, regarding the content they support.

It is my view that tech platforms should take an interventionist approach. Technology can shape wider discourse; social media has exposed many people to a great deal of unpleasantness, alongside new ideas and viewpoints. This experience should prompt reflection on, and improvement in, how we relate to one another – and technology should not be passive in that effort. From this stance, I make the following recommendations:

  1. Platforms should combine both approaches outlined above.

Users should be given greater control over their own experience, while also being prompted to consider how they impact upon others’ experiences. The behaviour of users in positive mode should help train the algorithms used to prompt against unpleasantness, alongside a wider range of other feedback mechanisms.

  2. Platforms should take transparent and proactive stances on this work.

Decisions will need to be made about which behaviours should receive prompts, which should be filtered in positive mode, and similar questions. For instance, to counteract the aforementioned risk of filtering unpleasant but important content, platforms should continually decide whether certain topics are ‘of social importance’ and allow them some exemption from filtering. Such discussions should be transparent and collaborative, building on existing governance structures (for example the Facebook Oversight Board). The methods, data, and models underpinning prompts and positive mode should be available to external researchers, and open to collaborative development.

  3. Government and civil society actors should support platforms in these efforts.

There are already many welcome efforts, by civil society and government-supported organisations, to develop visions of what ‘good’ discourse and media use might look like.  Such work should directly feed into the development of points (1) and (2), for example by providing material for explanatory notes in prompts. 

Civil society actors should also robustly defend platforms against inevitable criticisms that they are being ‘overbearing’ or ‘moralising’, or restricting free speech.  The proposed approaches do not stop speech, or force people to speak in a certain way.  The vision is this: if someone writes something deemed broadly ‘unpleasant’ they will get a UX-friendly flag (for example, colour and pattern change) prompting them to reflect on their language, which they can choose to ignore.  Other users can switch on a ‘positive mode’ which means they see less content which might deter them from wanting to use social media.  There will inevitably be errors and disagreements in classifying something as broad as ‘unpleasantness’ – but these errors would not render users unable to speak.  The downsides of such errors should also be weighed against the risks of platforms doing nothing about unpleasant experiences, or focussing solely on the most harmful content.   I argue that the steps proposed here support more and better speech, by encouraging reflections on how we interact with one another and by creating spaces which are more comfortable for a wider variety of participants.  That is in the interests of platforms, users, and society alike.


[1] Indeed Facebook was already carrying out experiments into positive and negative feeds in 2012, though many aspects of these experiments were ethically dubious and heavily criticised.
