
Tech & Digitalisation

Privacy, Security, Citizen Rights and Transparency in Web 3.0


Briefing | 24th March 2022


Chapter 1

Context

This project investigated the problems that closed platforms, and the closed use of open platforms, pose in Web 3.0 for transparency, intervention and innovation, balanced against the need for privacy, the protection of minorities and the right of citizens to organise. The spread of misinformation, public figures avoiding accountability, governance conflicts and groups coordinating abuse are all common issues.

My research asked three key questions:

  1. How do we balance privacy, citizen rights, security and transparency in Web 3.0?

  2. What do pivotal moments of organisation look like before they become established movements in this new context?

  3. How do we update rights and online harms regulation and legislation to meet these new challenges, especially for minoritised groups?


Chapter 2

Definitions

Web 3.0

The semantic web, the third generation of the internet and the decentralised internet all mean the same thing. Instead of linking information or documents for humans and computers to read, which is how the web mostly works now, Web 3.0 links data or objects with highly structured semantic connections.
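
To make the contrast concrete, here is a minimal sketch of what "highly structured semantic connections" means in practice: facts stored as machine-readable subject-predicate-object triples that software can query directly, rather than prose for humans to read. (This is illustrative Python; real semantic-web systems use standards such as RDF and SPARQL, and all names here are invented for the example.)

```python
# A toy triple store: data is recorded as (subject, predicate, object)
# statements that machines can traverse and query directly.
triples = [
    ("TimBernersLee", "invented", "WorldWideWeb"),
    ("WorldWideWeb", "launchedIn", "1989"),
    ("Web3", "successorOf", "Web2"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the non-None fields."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "What did Tim Berners-Lee invent?"
print(query(subject="TimBernersLee", predicate="invented"))
# -> [("TimBernersLee", "invented", "WorldWideWeb")]
```

To understand how we got here, we have to go back and look at how Web 3.0 differs from where we are now.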

Web 1.0

The first generation of the World Wide Web, invented by Tim Berners-Lee in 1989. Think of it as the read-only web. Individuals and companies could broadcast information to audiences. There was some streaming and downloadable multimedia, and limited forms of interactivity such as chatrooms, forums and browser-based games. Members of the public could make their own websites, but doing so required a level of skill with HTML, specialist software (such as Dreamweaver) or very limited platforms (GeoCities, Tripod and other kit sites).

Web 2.0

The second generation, starting in 2004 and taking over at the end of the 2000s as more people in the Global North had access to broadband and mobile internet. Think of it as the read-write web. Web 2.0 was all about harnessing network effects to share, create and interact with content as well as consume it. Blogging, social media, platforms, APIs, web services, apps and online communities made it easy for everyone to take part. Convenience and monetisation also made it easy for big tech companies to centralise and control everything. The interoperability of Facebook or Google products makes people forget that those companies decide what we can and cannot do, and that they profit from our data.

What Do People Mean by “The Broken Internet”?

The biggest companies control the internet and it doesn’t work for everyone. Facebook does not put enough resources into understanding Global South languages and culture; Twitter does not remove racists and transphobes; finance companies decide that selling weapons online is fine but nudity is not; and so on.

What Does Decentralised Mean?

At its most basic, it implies a distributed infrastructure for the internet. It could mean that instead of relying on tech companies, governments and payment processors, users control their own data and use a single personalised account for all their internet activities (including their phone and smart home). The blockchain could act as a pseudonymous public ledger recording all of that activity, tied to a user-controlled address rather than the individual's government ID, credit card or biometrics. The state and Silicon Valley would have no control over the user's digital identity. ID systems like Aadhaar in India demonstrate how not to do digital identity.
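
As a rough illustration of the ledger idea, the sketch below chains records together with hashes so that past activity, tied only to pseudonymous addresses, cannot be quietly rewritten. (This is a toy model under stated assumptions; real blockchains add consensus mechanisms, digital signatures and replication across many nodes, and the addresses here are invented.)

```python
import hashlib
import json

def add_block(chain, record):
    """Append a record, linking it to the previous block by hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    body_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": body_hash})

ledger = []
# Activity is keyed to pseudonymous addresses, not IDs, cards or biometrics.
add_block(ledger, {"from": "0xA11CE", "to": "0xB0B", "amount": 5})
add_block(ledger, {"from": "0xB0B", "to": "0xCAFE", "amount": 2})

# Anyone can verify the chain; tampering with any record breaks every later hash.
for i, block in enumerate(ledger[1:], start=1):
    assert block["prev_hash"] == ledger[i - 1]["hash"]
```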


Chapter 3

Problem

The original vision for the web was for it to be much more open and transparent, with no corporate control. Web 3.0 is supposed to take us back there. It uses smart contracts, rather than centralised organisations, to automate trust between users and to govern user data and transactions. We should expect to see more use of open source apps and advanced AI, so that people can build their own tech services without needing to code or pay someone else to do it. Imagine a world where “new social networks, search engines and marketplaces crop up that have no company overlords”. Or it could just be more of the same but with less regulation: fragmentation, monetisation, exploitation, oligopolies and unsustainable use of finite resources. The problems we see in current and emerging platforms could be magnified rather than alleviated.
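
As a rough sketch of what "smart contracts for automated trust" means, the toy escrow below releases payment only when coded conditions are met, with no middleman making the call. (Illustrative Python only; real smart contracts run on-chain and are written in languages such as Solidity, and all names here are invented.)

```python
class Escrow:
    """Holds a buyer's payment and releases it only when delivery is confirmed."""

    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.released = False

    def fund(self, payer, amount):
        # Payment is locked by the contract's rules, not by a trusted middleman.
        if payer == self.buyer and amount == self.amount:
            self.funded = True

    def confirm_delivery(self, caller):
        # Only the buyer can trigger release; the rule is enforced by code.
        if caller == self.buyer and self.funded and not self.released:
            self.released = True
            return f"{self.amount} paid to {self.seller}"
        return "conditions not met"

deal = Escrow(buyer="alice", seller="bob", amount=10)
deal.fund("alice", 10)
print(deal.confirm_delivery("alice"))  # -> "10 paid to bob"
```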

What Is Happening as We Transition (or Not) to Web 3.0?

Lots of gambling and grifting, as with any new technology. NFTs and cryptocurrencies are always in the news, but both are niche forms of financial speculation. Moving away from centralised social media platforms promotes the interests of those excluded by current rules, for good and bad: it protects minoritised groups and grants more access to countries with less developed economies, but also shields bad government actors, conspiracy theorists, organised criminals and militias. There are opportunities to build something new from the internet, not just add to what we already have. Silicon Valley may be less dominant in the Web 3.0 ecosystem than it was in Web 2.0. Asian and MENA countries are challenging the US for investment and innovation, as we have already seen with the growth of Chinese apps in the West (TikTok’s success surprised too many people) and with digital investment in the Middle East designed to entice Global South startups.

Governance in Web 3.0

DAOs (decentralised autonomous organisations) and other Web 3.0 projects have their own governance issues. Users can sometimes hold governance tokens, in proportion to their share of ownership of a platform or organisation, that entitle them to vote on the rules that govern it: working conditions and restrictions, the definition of misinformation, what content or views (if any) should be moderated, whether deplatforming individuals or organisations is merited, and whether or not to apply sanctions to countries or pull out of certain markets. These rules are then executed by smart contracts.
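
A minimal sketch of how token-weighted voting works in practice follows; the names, quorum rule and thresholds are illustrative assumptions, not any specific DAO's rules.

```python
holdings = {"alice": 400, "bob": 250, "carol": 50}  # governance tokens held

def tally(votes, holdings, quorum=0.5):
    """votes maps holder -> 'yes'/'no'; returns whether the proposal passes."""
    total = sum(holdings.values())
    yes = sum(holdings[h] for h, v in votes.items() if v == "yes")
    turnout = sum(holdings[h] for h in votes) / total
    return turnout >= quorum and yes > total / 2

# Alice alone outvotes everyone: token voting tracks wealth, not headcount,
# which is one reason the promise of citizen control is heavily caveated.
print(tally({"alice": "yes", "bob": "no", "carol": "no"}, holdings))  # -> True
```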


Chapter 4

Comparative Analysis

As we move from Web 2.0 to Web 3.0, what will be the impact on digital rights? Decentralisation is both pluralism and fragmentation in action.

 

Web 2.0

  • Online harms/safety: Heavy reliance on content moderation, both automated and via user reports; platforms geographically based; limited cooperation with law enforcement.

  • Civic participation: Big tech and government ownership and/or control of data; open platforms allow more citizen right of reply, and network effects enable citizen activists to gain authority.

  • Privacy: Companies are responsible for privacy via regulation; users have high expectations of privacy and security.

  • Governance: Privatised, for-profit oversight boards mostly for show; celebrity or celebrated founders; published rules, decision-making and principles; no transparency of algorithms.

Web 3.0

  • Online harms/safety: Limited restrictions on content or behaviour; meant to be self-governing; distributed infrastructure, so no geographic link for a duty of care.

  • Civic participation: DAOs and governance tokens promise that citizens can take back control of technology and the internet, but this is heavily caveated.

  • Privacy: Individuals are responsible for maintaining their own privacy, regardless of their capacity to do so or their need for support (e.g. vulnerable users); transparency overrules privacy; past identities and actions are difficult to hide even for legitimate reasons (witness protection, gender transition, escaping stalking); regulation cannot be geographically ringfenced.

  • Governance: Mostly privatised and for profit; often celebrity founders; governance tokens allow some citizen involvement; rules and decision-making often not published; limited transparency of algorithms.

Open platforms

  • Online harms/safety: Able to see direct or indirect signs of coordinated behaviour and platform manipulation, with rules against both; can see signs of radicalisation and of political movements forming and acting.

  • Civic participation: Originally users were on an equal footing and ordinary people could have influence; now celebrities and companies have more power, but the public can still get involved as platforms are open to all.

  • Privacy: As Web 2.0.

  • Governance: As Web 2.0.

Closed platforms

  • Online harms/safety: Coordinated behaviour happens out of sight, so it is harder to spot; likewise radicalisation; free-speech concerns overrule the need for moderation; easily geoblocked, however (e.g. Telegram).

  • Civic participation: To be part of a network, you need to know about it; easier for a small group of people to control involvement by controlling access.

  • Privacy: Encryption and privacy make it more difficult to report infractions or to see where private images and data have been shared without consent.

  • Governance: As Web 2.0.

Mark Zuckerberg’s Georgetown speech in 2019 epitomises the big tech/Web 2.0 view of free speech. He said that “voice and inclusion go hand in hand” and “with Facebook, more than 2 billion people now have a greater opportunity to express themselves and help others.” Zuckerberg also talked about the necessary restrictions on free expression: blocking pornography, terrorist propaganda and the bullying of young people. In practice, “female-presenting nipples” or any hint of sexuality or nudity posted by and for consenting adults is quickly and automatically removed from platforms, whereas hate speech, networked harassment and images of child sexual exploitation stay up. Policies dealing with platform manipulation and coordinated activity are not consistently applied and are easily gamed. People from dominant groups complain that they are silenced, while marginalised people self-censor to avoid harassment and violence.

Internet protocols and formats (such as TCP/IP and HTML) are governed via the multistakeholder model. Web 3.0 protocols might have the same reach and play a similar infrastructural role as these Web 1.0 standards, but they have privatised governance structures like the tech companies of Web 2.0. The foundations of the new internet are in private control and can prioritise specific use cases, be sold to repressive regimes or be used to exclude particular groups or individuals from participation. There is no commitment to the open internet or the public good. This means Web 3.0 could have all the idealism of Web 1.0, all the problems of Web 2.0 and no means of regulating or enforcing neutrality.

Decentralised governance can be difficult, time-consuming work and feel like a lot of responsibility. Recent experiments in collective ownership of physical items and real-world sports teams show that investors and founders do not always know what they are doing, or what rights, responsibilities or opportunities they will ultimately legally hold. For example, Spice DAO purchased a physical copy of a director’s vision for filming the novel Dune, to which it had no IP rights. The founder of Ethereum Name Service/ENS (a Web 3.0 version of DNS, and the organisation behind all “.eth” usernames) was fired by his own DAO for expressing unacceptable reactionary views, as have been other leading investors, influencers and community managers in Web 3.0. As a result, many token holders have re-delegated their governance rights to other entities, namely Coinbase and Rainbow, which have their own incentives as asset management and trading entities.

When users disagree with founders, or realise that founders do not know enough or are not keeping bad behaviour in check (as is often true of shareholders in traditional companies too), they want to prioritise ethics and values but are unsure how best to do so. Delegating decision-making to expert individuals, private companies like Coinbase and Rainbow, or specialist agencies has pros and cons: they all have their own agendas. It is similar to outsourcing your current affairs and policy positions to a political hero, faction or party. The cryptocurrency trading platform Coinbase blocked 25,000 wallet addresses related to Russian individuals or entities in March 2022, in response to the invasion of Ukraine, but has not banned all Russian users or shut down its Russian operations. That decision was made top-down by Coinbase, which claims that its actions to comply with Western sanctions increase trust in crypto. Users of the platform had no say. The decision was both centralised and privatised: supposedly the opposite of Web 3.0 intentions.

Coordinated inauthentic behaviour is a real problem that will only grow with distributed networks, reduced content moderation and more platforms to monitor. The use of “sock puppet” accounts to boost and share posts from main accounts is common, especially to make topics visible on Twitter, where influential journalists and politicians motivated to take an interest in the topic may well see the posts. Researchers have found that a small number of accounts will retweet responses over and over to push their hashtags into a trend. Groups privately coordinate brigading attacks on individuals and organisations, trying to make it look like organic public outrage. The law has not caught up with this, leaving the issue to the platforms.
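
One signal researchers use for this kind of coordination is near-simultaneous co-retweeting: the sketch below flags pairs of accounts that repeatedly boost the same posts within seconds of each other. (Account names, thresholds and the data format are illustrative assumptions, not any platform's real API.)

```python
from collections import defaultdict
from itertools import combinations

# (account, post_id, timestamp_in_seconds)
retweets = [
    ("puppet1", "p1", 100), ("puppet2", "p1", 103),
    ("puppet1", "p2", 500), ("puppet2", "p2", 504),
    ("organic", "p1", 9000),
]

def coordinated_pairs(retweets, window=30, min_shared=2):
    """Flag account pairs co-retweeting >= min_shared posts within `window` seconds."""
    by_post = defaultdict(list)
    for account, post, ts in retweets:
        by_post[post].append((account, ts))
    shared = defaultdict(int)
    for post, hits in by_post.items():
        for (a1, t1), (a2, t2) in combinations(sorted(hits), 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                shared[(a1, a2)] += 1
    return [pair for pair, n in shared.items() if n >= min_shared]

print(coordinated_pairs(retweets))  # -> [("puppet1", "puppet2")]
```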

Militias in the UK and US have recruited and organised via Telegram, WhatsApp, Gab, Parler, Twitter, Facebook and Instagram. The British white supremacist group Patriotic Alternative offers physical training, documented on unmoderated platforms such as Telegram and Rumble. Many terrorist threats and counter-radicalisation referrals in the past few years have not been for Islamist, neo-Nazi or typical far-right radicalisation. The majority (51 per cent in 2021) come from individuals and groups with what the UK government’s Prevent programme calls “mixed, unstable or unclear ideology”. These movements are anti-government, anti-authority and anti-system. Their activities have long been visible to online observers who understand the new environment: shared hashtags, username conventions, emojis and code words. They are often “accidentally” promoted by misinformation journalists who broadcast their messages while reporting on them.

Content moderation is unlikely to be the long-term answer to the problem, given the volume of content and the emergence of platforms without moderation. However, more culturally aware resources in more languages would reduce some problems: consider, for example, Facebook’s contribution to escalating violence in Myanmar, Twitter’s trends promoting anti-trans hate groups and TikTok’s anti-Black algorithms. New rules show that existing platforms do not fully understand the problems. Twitter’s policy on the sharing of private information by users not recognised as journalists, for example, shows that the platform is not prepared for the mass reporting of researchers and minorities by extremists and other bad actors. New movements do not trust the system and cannot be taken down by removing or deplatforming leaders.

It is not unusual to use group chats to spread misinformation and radicalise people, and yet policymakers often see this as a niche extremist pursuit. “What was so smart and unprecedented about this was the way they moved through small group chats where no one would expect to find an Iranian agent,” said Achiya Schatz, director of FakeReporter, who uncovered an Iranian misinformation network. “They really gained people’s trust and slipped under the radar of Facebook, Twitter and all the other tech companies. In these closed messaging groups, people tend to trust one another and share more freely because there is a feeling that they share the same politics, and that the app itself is secure and safe.” With Web 3.0, the illusion of community governance and transparency could also make people feel secure with no regulatory safeguards.


Chapter 5

Risks and Opportunities

Many issues emerging for Web 3.0 present both benefits and challenges. More power is given back to individual users instead of big tech, and they have more ownership and control over their personal data and private messages. However, social engineering attacks do not go away: the most vulnerable point in any Web 3.0 transaction is the owner of the token or the account, a familiar threat from online banking and retail.

Recording transactions and contracts on the blockchain could enable greater transparency for businesses and organisations, but the complexity of this and the move to more private and secure communication methods attract those keen to obfuscate instead.

There is an opportunity for greater civic involvement, but as with the worlds of Open (open source software, open data, open government etc.) and citizen science, such involvement assumes that citizens have the time, knowledge and skills to capitalise on the benefits they are sold. Often open source software is harder to use than proprietary software and requires skills or outsourced labour to maintain or adapt; relatively few private individuals can do anything useful themselves with open data; and so on. The governance crises in crypto finance and success of third-party management companies show that owning a governance token does not make for informed or meaningful involvement in governing an organisation.


Chapter 6

Policy Gaps

Current legislation needs to be updated to deal with emerging issues from Web 3.0. There is a real risk of criminal and destructive activities escaping detection or prosecution when decentralised environments can mean nobody takes responsibility for them. Harmful practices such as “libel tourism” (where cases are pursued or threatened in England and Wales that would not meet the legal threshold in other jurisdictions, even following the Defamation Act 2013) will not only continue but be replicated to evade sanctions in one country or find more favourable judgments in another.

Online Safety

The duty of care principle in the UK government’s Online Safety Bill categorises platforms by size. It does not consider risks built into the design of new and emerging platforms (e.g. launching with no content moderation or blocking facilities, or the appeal of inappropriate features to vulnerable people). Gambling mechanics within games and productivity apps are not regulated. In a decentralised environment, with whom does the duty of care reside? If one government, or even a platform’s founder, decides certain content is harmful, what happens when governance is decentralised and hosting is distributed? Laws regulating protest and privacy also assume fixed and trackable geographical locations for both users and platforms.

Money Laundering and Crypto

There are multiple ways in which money laundering happens in Web 3.0 (such as chain hopping, shell VASPs and programmatic money laundering). These can currently be detected via techniques such as cross-chain analytics, owner analytics and behavioural analytics, but the potential to act is limited. Current financial crime and anti-money laundering (AML)/countering the financing of terrorism (CFT) laws, and Financial Action Task Force (FATF) requirements, are not updated regularly or in sufficient detail to take account of the rapidly changing global landscape. They should move towards recourse for consumers, harmonisation and broader geopolitical consequences for sanctions avoidance.
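
As a rough illustration of the behavioural analytics mentioned above, the sketch below flags wallet clusters whose funds hop across several blockchains within a short window, a common layering pattern. (Chain names, thresholds and the transfer format are illustrative assumptions.)

```python
from collections import defaultdict

# (wallet_cluster, chain, timestamp_in_minutes)
transfers = [
    ("cluster_A", "bitcoin", 0),
    ("cluster_A", "ethereum", 12),
    ("cluster_A", "monero", 25),   # three chains in under an hour
    ("cluster_B", "bitcoin", 0),
    ("cluster_B", "bitcoin", 300),
]

def flag_chain_hopping(transfers, window=60, min_chains=3):
    """Flag clusters touching >= min_chains distinct chains within `window` minutes."""
    flagged = set()
    by_cluster = defaultdict(list)
    for cluster, chain, ts in transfers:
        by_cluster[cluster].append((ts, chain))
    for cluster, events in by_cluster.items():
        events.sort()
        for i, (t0, _) in enumerate(events):
            chains = {c for t, c in events[i:] if t - t0 <= window}
            if len(chains) >= min_chains:
                flagged.add(cluster)
    return flagged

print(flag_chain_hopping(transfers))  # -> {"cluster_A"}
```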

FATF’s requirements (updated October 2021), for example, state that “To manage and mitigate the risks emerging from virtual assets, countries should ensure that virtual asset service providers are regulated for AML/CFT purposes, and licensed or registered and subject to effective systems for monitoring and ensuring compliance with the relevant measures called for in the FATF Recommendations.” It is difficult to see how virtual providers can be regulated, and by whom, if a global approach is not taken: decentralised providers cannot easily be tied to a single jurisdiction. The recommendation on new technologies (Recommendation 15) is too general. It allows countries to pass the buck to financial institutions, which are permitted to assess the new technologies they are considering using, not those used to exploit and bypass their systems.

Harassment

Most platforms and jurisdictions define harassment, stalking and malicious communications as one-on-one crimes. This does not take into account networked harassment: coordinated behaviour by a group of people where the acts can be many-on-one, or powerful entities acting one-on-many or many-on-many, inciting violence or emotional harm against marginalised groups.

Harassment by a network is not something policymakers really understand, as demonstrated by the UK government’s Online Harms white paper, which tries to legislate for a “pile-on” without distinguishing users who make deliberately provocative posts from those attacked by organised groups. The definition of harassment in most jurisdictions does not account for multiple perpetrators making individual comments at a single person or minority group. Laws relating to inciting hate speech or hate crimes are restricted to attacks on specific characteristics and limited presentations of “incitement”.

Libel, Defamation and Dispute Resolution

Our legal systems are based on geography, which does not work in a decentralised world. Location and identity are more fluid online: a decentralised organisation cannot be said to be owned by individuals based in one physical place, nor a distributed entity to be hosted in a single server location. Guarantees offered by platforms or by a specific country’s laws can be difficult to uphold. Disagreements between users over content or ownership, and disputes between users and platforms, cannot simply be resolved using smart contracts or existing laws.


Chapter 7

Recommendations

These recommendations need prioritisation and further detail to make clear what policy is attempting to achieve: prioritise them, then link specific challenges to the policy levers you want to use.

To stop harmful actors from evading detection:

  • Invest in automatic detection of distributed and networked harassment

To capture pivotal moments of organisation before they become established movements, and spot extremism and radicalisation via mainstream and emerging platforms:

  • Connect to those monitoring platforms in your country – often not the usual anti-hate NGOs or law enforcement, but academic disinformation researchers and citizen researchers from marginalised groups, such as the Discord Leaks

  • Implement the Ada Lovelace Institute’s auditing methods for online harms

  • Implement stronger guidance on lawmakers’ use of social media platforms, both publicly and privately

  • Develop a global understanding of subcultures and ideologies beyond the usual extremism focus

To protect transparency, media freedom and freedom of information:

To protect privacy and security for ordinary users, and protect dissidents and the right to protest:

  • Resist dilution of end-to-end encryption

  • Defend anonymity, privacy and security globally

  • Support development of user-friendly decentralised encrypted messaging and other peer-to-peer platforms with stronger moderation and safety policies

  • Work with marginalised groups and dissidents to enable lower-risk autonomous organising

To manage hate speech, harassment and disagreements about freedom of expression:

  • Encourage platforms to ban the mass reporting of accounts; a flood of reports does not mean an account is harmful and can itself indicate coordinated harassment

  • Encourage implementation of social nudges and friction (e.g. warn users if they are retweeting a viral tweet from a new or inauthentic account; see the sketch after this list)

  • Invest in automatic detection of distributed and networked harassment

  • Invest in content monitoring and moderation human expertise

  • Invest in AI moderation improvements: trigger human monitoring of situation rather than automated suspension; understand role of bias
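
As an illustration of the nudge-and-friction idea in the list above, the sketch below checks whether the author of a viral post looks new or thinly established before letting a user amplify it. (The fields and thresholds are illustrative assumptions, not any platform's real policy or API.)

```python
from datetime import date

def retweet_warning(author_created, followers, today=date(2022, 3, 24)):
    """Return a friction message if the source account looks risky, else None."""
    account_age_days = (today - author_created).days
    if account_age_days < 30:
        return "This account is less than a month old. Retweet anyway?"
    if followers < 10:
        return "This account has very few followers. Retweet anyway?"
    return None  # established account: no friction needed

# A three-week-old account triggers the nudge despite a large follower count.
print(retweet_warning(author_created=date(2022, 3, 1), followers=5000))
```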

To avoid platform manipulation and misinformation:

  • Develop cross-platform oversight boards

  • Support platforms used in your country in closing security loopholes as they grow in popularity

To prevent financial exploitation and irregularities:

  • Support stronger antitrust laws and interoperability requirements

  • Demand that Web 3.0 infrastructure is passed into multistakeholder public ownership and control with open licences

  • Regularly update regulations and policies to reflect actual technological landscape and speed of change

To protect low- and middle-income countries (LMICs):

  • Monitor and block channels and individuals rather than apps

  • Make new platforms available to the Global South with culturally appropriate guidance and safeguards

  • Work with all platforms on global approaches to the detection and moderation of hate speech, misinformation and threats (avoiding Facebook’s focus on the Anglosphere)

  • Protect media freedom, whistleblowers, social movements and dissidents regardless of background, tactics and their support for/from Global North regimes

  • Foster a global approach to understanding what trust, safety and harms look like to minoritised groups without prioritising specific characteristics or populations


Chapter 8

Tradeoffs
  • Supporting encryption and privacy vs ensuring transparency and security

  • Blocking new technologies deemed harmful, or blocking platforms in specific countries, vs adversely affecting LMICs and dissidents

  • Monitoring harms vs over-surveillance of racialised and otherwise vulnerable groups

  • Regulating a globally distributed industry vs privileging Global North definitions and priorities

  • Binary thinking about platform regulation – utopian or catastrophising; blanket statements about social media, particular countries or specific apps
