Last week the UK government was right to directly rebut the accusation that it had been setting up fake NHS staff accounts on Twitter; it managed to counter the rumour before it spread unchallenged. Brands engaging with users online like this is nothing new, but an ‘authentic brand voice’ is something that normally escapes government communications. The government still has a lot to learn from social-media and influencer marketing that could help it in the battle against false information online.
The pandemic has turned us all into heavy social-media users. IRL no longer means anything when all of our lives are lived through screens. The democratisation of information has had real benefits in opening up the debate about Covid-19. But such an abrupt shift online has also amplified online harms with real-world consequences. We’ve seen this with mobile-phone masts being attacked as conspiracy theories online implausibly linked 5G phone signals to coronavirus.
The Cabinet Office has launched a rapid response unit to deal with misinformation, but is this the right response? We went into the Covid-19 pandemic with a crisis of trust in UK politics and institutions. A police force for social media will quickly be overwhelmed, playing whack-a-mole rather than changing minds. If Facebook, with tens of thousands of moderators, struggled before the pandemic, how can we expect a civil-service unit to succeed? Nor is it enough simply to put out blanket messages asking people to be more responsible and not to believe what they are seeing. Recent studies show that we all have a truth bias towards new information we see, even if we’re explicitly told that information is false.
A 21st-century solution to the problem would be to use the massive social-media advertising infrastructure that already exists.
To their credit, social-media platforms have been very quick to respond, with Twitter, YouTube and Facebook changing their community standards to catch any false information that could endanger public health, including links to 5G conspiracies. But proper rapid takedown processes remain a major challenge with moderation teams working remotely and more people and content online.
Tools and policies for private groups and messaging platforms are long overdue. WhatsApp’s new limit on forwarding highly forwarded messages is a good start, but it needs to be rolled out more widely. There are still tools and behavioural nudges platforms can build into messaging services without compromising the end-to-end encryption they provide.
While fact-checking and information on home screens are useful, the challenge is getting people to take notice and change their behaviours. Community intervention, or social mobilisation, has so far been overlooked, but it was key in the Ebola response.
Misinformation is a societal problem accelerated through social media where the virality of content is encouraged and paid for. Companies have spent years building advertising platforms capable of micro-targeting, tracking behaviours and delivering for brands. Now is the time to put these tools to work in the service of public health.
Social-media companies and marketing agencies should make all of their influencer and brand tools available to public-health authorities and, importantly, redeploy advertising and behavioural experts to work alongside stretched government comms teams on a new public-health influencer strategy to counter misinformation. Getting this right involves three key steps:
First, the right people need to be identified. A recent study found that prominent public figures are spreading misinformation, so it will be important to target them, but micro-influencers, those with fewer than 10,000 followers, are key opinion formers. They are community leaders, experts and religious leaders who can reach communities and build trust, and they can do so in the right language and the right cultural and community context. Marketing teams have spent years building up lists and strategies to find these people, especially on WhatsApp. The government can combine this with other local data to target both people in big community groups and influential people spread across multiple smaller chats.
Second, to get into WhatsApp groups these micro-influencers need to be armed with clear public-health information, resources and fact-checking advice. Paid influencer programmes should be avoided, and influencers should still be transparent about why they are posting, but ambassadors can be highly credible. The priority should be putting government and public-health resource into tailoring information specifically for these influencers once they’ve been engaged. Lessons from counter-extremism work suggest some of the best interventions are dialogue-based, encouraging people to voice their anger and fears and to communicate with others they trust or can relate to. Telling people bluntly that they are wrong can lead to conflict and does not change minds.
Finally, these influencers need to be able to feed back into government and social-media platforms. If platforms focus on labelling and fact-checking misinformation in-app, there need to be ways influencers can send the misinformation they’re seeing back to social-media companies for fact-checking and labelling. The trusted-flagger model YouTube and others have used for reporting terrorist content should be repurposed to deal with misinformation.
At this time of crisis, policymakers should be taking advantage of the networked public and repurposing the quality data and tools that are already out there. Governments should not be reaching for new rules to restrict freedom of expression, as that can drive further conspiracy theories and fear. Instead they should focus on using community dialogue, alongside the existing online advertising industry, to get accurate information into communities.