

Online Harms: Our View on the UK Government's Plans


Commentary | 26th February 2020

The Government recently published further work on its Online Harms White Paper – a proposal to “get the balance right between a thriving, open and vibrant virtual world, and one in which users are protected from harm”.

This note sets out the areas of the proposal that work well and those where the Government ought to look further.

These proposals have been a long time in formation, and the Government has consulted widely and balanced the competing demands that introducing an entirely new form of regulation presents. The Government still has a lot of detail to add, but it has created a proportionate, procedural approach that prioritises the worst and clearest harms. However, any legislative framework must have the necessary teeth, particularly around investigatory powers, to incentivise responsible design. Given that the UK is leading the world in this form of regulation, the Government should also look to set values and principles that can work globally.


Chapter 1

Critical Responses

Many of the criticisms of the White Paper have focused on the threat to freedom of expression and the slow process towards getting legislation on the statute books.

On the former, protecting freedom of speech is a legitimate concern. Any regulation of information, expression or access to information has the potential to restrict fundamental rights and freedoms of people living in the UK. There is a trade-off between freedom of expression and freedom from harm.

Freedom of expression should not come at the expense of victims of abuse, especially as abuse online could be linked to real-world increases in physical harm to minorities. It is important to ensure there are structures in place to assess the level and nature of harms and to take regulatory action where not enough is done, but not to attempt to regulate individual pieces of legal content.

In a paper published in January on China's tech landscape, we set out how China is a pioneer of digital censorship, with laws that require all user-generated content to be monitored to prevent any prohibited discussion or content. The UK Government has been careful to avoid this trajectory. In fact, the focus on procedural regulation and transparency makes this a poor model on which to build an authoritarian censorship regime. Done well, the regulation can make online platforms more accountable for the restrictions on speech they already apply to their users.

In its response, the Government recognises that the right approach is not to adjudicate on, or require the removal of, individual pieces of legal content, but both the Government and commentators conflate freedom of the press with freedom of expression. The Government states that this proposal is designed not to inadvertently catch newspapers or journalistic content; this is important, but there is a blurred line between journalism, comment and social media. This was brought to light recently by the death of Caroline Flack, and there continue to be unresolved questions about the responsibilities of newspapers and of social media users that need debating.

On the time the process is taking: while there is an urgent need to mitigate online harms, rushing this process would be dangerous, which is why the Government must hold to its commitment in the October Queen's Speech to allow pre-legislative scrutiny. The collapse of the Government's age-verification proposals for online pornography demonstrates that being bounced into legislating may be marked as a success for campaigners but does not address the deeper, long-term problem. Successful regulation needs good regulatory principles, consultation and clear structures that work with the businesses being regulated.

While the principles and values are set, there is little practical detail in the proposal; that is still to come in draft legislation and a full response due in the spring.


Chapter 2

What the proposal does well

1. Proportionality and procedures

The duty of care, successfully proposed by Professor Lorna Woods and William Perrin at the Carnegie UK Trust and supported by other experts, seeks to break the idea that publisher-style regulation is needed to hold platforms to account. It sets up a looser, more qualitative style of regulation that seeks to steer platforms towards the right behaviours and responsibilities where the definitions of harm are more difficult.

Facebook’s recent white paper on online content regulation set out a similar regulatory philosophy using the phrase “procedural accountability” to describe regulation that incentivises and scrutinises particular measures that platforms take around content moderation.

The proposals also recognise the diversity of the challenge, given the very different nature and scale of online platforms. Reddit, for example, has a very different community-moderation model from the highly systematised community standards and moderation of Facebook.

Any legislation must avoid setting standards that can only be met by the largest platforms, a criticism levelled at the GDPR. It should leave room for Ofcom to make proportionate decisions based on the nature of the service and a risk-based approach.

For example, section 319(4) of the Communications Act 2003 sets out the areas to which Ofcom must have regard in standards regulation, including the size of the service, audience expectations and "the likelihood of persons who are unaware of the nature of a programme's content being unintentionally exposed" to it. There should be equivalents in the new regulation that give Ofcom the scope to make targeted and platform-specific decisions.

2. Protecting what works today and focus on illegal harms

The response sets out two priority areas: terrorist content and child sexual exploitation and abuse, with higher expectations set for these areas. This is not to diminish the other areas of harm, but they require a different approach.

The Internet Watch Foundation (IWF) is a good example of the technology industry, initially the internet service providers (ISPs) in the 1990s, coming together to tackle child sexual exploitation content online. It has made significant strides in removing content hosted in the UK. While there is much still to do, the Government's proposed model would incorporate and strengthen the IWF's model and support its balance of industry collaboration, technical tools and self-regulation.

The Global Internet Forum to Counter Terrorism (GIFCT) is a good example of tech collaboration on terrorist content that could be supported by the Government's model. Social media and online communications companies, both big and small, have made considerable strides in their approach to extremism since the Christchurch attack last year, but there are still cultural, linguistic, political and geographic blind spots that regulators will want to understand especially in the context of other harms. There is also still considerable definitional ambiguity around extremism and hate speech, which unhelpfully pushes complex decisions onto platforms.

3. The regulator

Ofcom is the obvious choice to be the regulator, and it is well suited to the task for now. It is an independent statutory body, free from political interference; it is anchored by better-regulation principles; it has a well-resourced research function; and it is subject to judicial oversight. Nevertheless, the devil will be in the detail, and the Government and Parliament must not be tempted to put backdoors in the legislation that allow political interference or even Government direction.

It is right that Ofcom is accountable to Parliament, but the Government has been tempted to use regulation to achieve political ends in recent years by introducing strategic directions (the Digital Economy Act 2017 gave the Government powers to designate Statements of Strategic Priorities for telecoms networks, telecoms consumers, spectrum management and postal services). While not in itself problematic for communications infrastructure, this does blur the line between independent regulator and government policy. If it were replicated for content and harms regulation, whether for traditional media or new media, it would represent political interference in the regulatory process and a potential threat to freedom of expression.

Independent centres of expertise can be effective in holding complex regulation to account; the Communications Consumer Panel, for example, is appointed by the Ofcom Board. It may be worth formalising a tier of independent experts to help Ofcom and to put a firebreak between online platforms and direct regulation. We will shortly be publishing analysis of what online regulation can learn from the audit industry about building independent expert scrutiny.

In the longer term, the ongoing process of convergence may mean that data protection and privacy design are more intrinsically linked to content regulation and online harms than to postal regulation and spectrum allocation. This may require a more fundamental rethink of the regulatory apparatus to avoid Ofcom becoming a single point of failure across network regulation.


Chapter 3

What's missing from the proposal

1. Assessment, investigation and incentives for design

The focus of the proposals on illegal harms is important, and the 'duty of care' is a good umbrella for achieving broader online-platform responsibility, but it is vital that there are regulatory teeth behind disincentivising all harmful content, even where it is not illegal.

To make sure the proposals and the duty of care have bite and work in the interests of those users harmed online, there need to be two additional focuses:

  1. Incentivising responsible and healthy platform design

  2. Investigatory powers and assessment

These two areas can operate in tandem. There needs to be a new regulatory framework to measure, investigate and incentivise the ‘health’ of online platforms and online communities. Platforms need to be held to account on how they implement their values, and crucially how they design their services. But the framework also needs to recognise the uniqueness of different platforms.

The figure below sets out the areas that govern a platform's design, operation and management; these then lead to the specific platform actions, activities and outcomes that the regulator ought to have oversight of, while avoiding intruding into the practical and commercial operation of the service.

Figure 1: What features govern an online platform

Source: Tony Blair Institute for Global Change

Incentivising platform health

Ofcom should be tasked with setting out what good outcomes look like for a range of content areas, in order to give it a framework through which to make proper and transparent assessments of 'platform health'. The Government has backed away from the 23 harms laid out in the white paper, but Ofcom could consider broader assessment areas, with overarching principles focused on the three areas below (a sketch of how such an assessment might be recorded follows the list):

  1. Action – whether there are mechanisms or policies in place to prevent harm

  2. Processes – whether, procedurally, platforms are doing what they say, and how effective those processes are

  3. Output – how healthy the platform is, measured both from platform-reported results and through independent investigation and audit
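
To make these three areas concrete, below is a minimal sketch, in Python, of how such an assessment might be recorded. The structure, field names and figures are our own illustration, not drawn from the Government's proposal or any regulator's practice.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PlatformHealthAssessment:
        """Hypothetical assessment record; all names are illustrative."""
        platform: str
        harm_area: str                       # e.g. "terrorist content"
        policies_in_place: bool              # 1. Action: mechanisms/policies exist
        process_findings: List[str] = field(default_factory=list)  # 2. Processes
        reported_removal_rate: float = 0.0   # 3. Output: platform-reported
        audited_removal_rate: float = 0.0    # 3. Output: independently audited

        def reporting_gap(self) -> float:
            # Gap between self-reported and audited outcomes; a persistent
            # gap is the kind of signal that should trigger investigation.
            return abs(self.reported_removal_rate - self.audited_removal_rate)

    assessment = PlatformHealthAssessment(
        platform="ExamplePlatform",
        harm_area="terrorist content",
        policies_in_place=True,
        process_findings=["reports reviewed within 24 hours"],
        reported_removal_rate=0.98,
        audited_removal_rate=0.81,
    )
    print(f"Reporting gap: {assessment.reporting_gap():.2f}")  # 0.17

The point of the sketch is the final pair of fields: a regulator with investigatory powers can test platform-reported output against independently audited output, rather than taking transparency reports on trust.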

What is the concept of ‘platform health’?

Platform health is a way of describing the good outcomes that we want to create online.

We want to see healthy communities, public spheres, debates and discussions, and healthy connections and interactions on online platforms.

Platform health requires governments, regulators, business and civil society to work together to achieve shared goals based on a shared set of values.

In order to achieve a healthier online environment, criteria for platform health need to be set, measured and enforced.

Ofcom could take inspiration from the ethical-design community, for example the Ethical OS toolkit, to set broad principles of assessment. This would exclude areas already covered by other regulatory bodies such as the ICO.

Investigatory powers and assessment

Regulators need powers of investigation to fully understand how platforms impact their users. Transparency reports are a useful tool to understand what platforms are doing, but they are not a substitute for a regulator that can ask difficult questions and expect reliable, verifiable and accurate answers.

In telecoms and broadcasting regulation, investigatory powers are well established and change the way in which regulated companies record and make their decisions. In the financial audit industry, scrutiny is a continual and well-accepted process.

The curatorial power of online platforms and services, including amplification and platform design, ought to be a core part of the scrutiny that regulators offer: for example, understanding how platforms rank, promote and demote content, and how user-sharing and validation tools affect its proliferation. This is about both the systems (and the human decision points in those systems) and the data that informs decision-making.
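
To illustrate the kind of system a regulator would be scrutinising, here is a toy ranking function in Python. It is a hypothetical sketch of the general technique of demoting 'borderline' content, not any platform's actual algorithm; all weights and names are invented for illustration.

    def rank_score(engagement: float, relevance: float,
                   borderline_harm_prob: float,
                   demotion_strength: float = 0.8) -> float:
        # Combine engagement and relevance (the weighting is itself a
        # design choice), then demote content that an internal classifier
        # flags as likely, but not certainly, harmful.
        base = 0.6 * engagement + 0.4 * relevance
        return base * (1.0 - demotion_strength * borderline_harm_prob)

    # Identical content, different classifier scores:
    print(rank_score(0.9, 0.7, borderline_harm_prob=0.05))  # barely demoted
    print(rank_score(0.9, 0.7, borderline_harm_prob=0.60))  # heavily demoted

How sharply borderline content is suppressed, and what data trains the classifier that flags it, are exactly the parameters and inputs that investigatory powers should allow a regulator to examine.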

Instagram experimenting with removing likes, Twitter allowing users to hide replies and Reddit giving tools to community moderators are all good examples of how platform design can be used to mitigate harm. Regulation can and should incentivise design principles that prioritise healthy online communities.

The Centre for Data Ethics and Innovation's recent report on online targeting sets out a range of recommendations for new regulatory regimes, including that any regime should be developed to promote responsibility and transparency and to safeguard human rights by design. The report explicitly links safety by design with accountability, also recommending that the regulator needs information-gathering powers, including "the power to give independent experts secure access to platform data to undertake audits".

Properly opening up platforms to investigative scrutiny (without compromising the commercial integrity or the security of the platforms) should be an objective. Enforcement mechanisms such as large fines or criminal liability for directors may not be as good a long-term solution for the safety of people online as a close relationship between regulator and platform that includes transparency and scrutiny on the regulator's terms.

2. International collaboration and frameworks

The response rightly points out that "the UK has an opportunity to lead the world in regulatory innovation". The GDPR's innovation in privacy protections and its global reach have set the standard for online regulation and have been replicated in the California Consumer Privacy Act. Washington and Silicon Valley are watching the UK with interest, in particular because the UK is proposing whole-system regulation rather than piecemeal, targeted interventions such as Germany's NetzDG, which focuses narrowly on the removal of illegal content such as hate speech. But the response does not clearly articulate the need for international agreement and frameworks to work alongside the global power of tech platforms.

In 2018 we published a report on the next-generation regulator needed for big tech and wrote the following, which is even more relevant now that the UK has left the European Union:

The regulator should be designed from the outset to take an international perspective and work across borders. The ideal authority would match the global reach of big tech companies with a global response to the challenges they present. In the current geopolitical environment, however, it is hard to see how a global regulator could come to a meaningful consensus on values or keep pace with the rapid evolution of technology.

The pragmatic solution is therefore to focus first on building a transatlantic consensus. The established liberal democracies of the United States and the European Union (EU) have enough in common to come to a shared view on values and responsibilities for tech companies, and on rights and well-being for consumers. Parallel regulators in these two jurisdictions, with a common forum for analysis and mutual recognition of rulings, would be a good enough first step. In the fullness of time, the two may work ever more closely together.

For the UK, if there is to be any silver lining from Brexit, it may be in the freedom to pivot towards this new approach more quickly than other countries. There is an opportunity to take a global lead in crafting fit-for-purpose regulation that tech firms adopt as a global standard. If the UK retains broad equivalence with other aspects of European policy in areas like data protection, then in time the UK and EU approaches might again be harmonised.

The UK should seek to establish and export liberal democratic values within new technology-focused trade deals. These values should:

  • include support for freedom of expression;

  • include protection for human rights, particularly of those marginalised;

  • be against authoritarian control of technology systems and networks; and

  • support sensible and proportionate regulation of online harms and safety.

The EU has coordinated several codes of practice on online harms and, more effectively still, the Christchurch Call has established international momentum and collaboration; these are good building blocks on which to form more consistent regulatory and legislative responses around the world.


Chapter 4

Conclusion

Now that the principles are clearly set out, the Government must produce draft legislation that meets those principles. The Government should hold its nerve and not be tempted to add in prescriptive regulation in the legislative process. The Online Harms Bill should seek to empower Ofcom to make proportionate decisions and set out high level principles about the values of a healthy online world. It should focus more on giving the regulator the toolkit it needs to create positive incentives rather than being tempted to prescribe specific interventions that could risk undermining freedom of expression.
