

The Fundamentals of Tech Transformation: Cloud Policy

Paper | 14 June 2021

The tech revolution is transforming economies and societies as well as individual access to work, education and information. Who it benefits and whether it levels the playing field or deepens the digital divide will depend on the fundamental building blocks that countries put in place. Following our report on the progressive case for universal internet access and our paper on digital identity, we set out other key prerequisites for a tech transformation that works for all. In this paper, we explore cloud policy – specifically, what cloud computing can offer and the steps that governments can take to access its benefits safely.

Chapter 1


Data offer enormous potential for improving lives, empowering individuals and driving economic growth. How a country stores and accesses data determines the value that it can generate for its citizens. Less than 20 per cent of low- and middle-income countries (LMICs) have modern data infrastructure such as colocation centres and direct access to cloud-computing facilities, and this is a major barrier to making these benefits equally accessible.

Hyperscale cloud technologies provide greater computing and data-storage capacity, higher performance and better security at lower costs than on-premises alternatives. But the adoption and growth rates in the use of hyperscale cloud technologies are uneven across countries. Concerns about data security and jurisdiction, and about the reliance on a small number of powerful global players, have meant that many countries – in particular LMICs – have continued to use on-premises infrastructure, storing data locally.

In part one of this paper we set out what cloud technologies are and their potential benefits for development. In the second part, we outline the key policy challenges, and set out short-, medium- and long-term recommendations for government policymakers seeking to access the benefits of cloud technology securely and on terms that maximise these benefits. In the long run, data governance must be reimagined for the 21st century.

Chapter 2

Part I: What Is Cloud Technology and What Does It Offer?

What is cloud computing?

The term cloud computing refers to the delivery of computing services – including servers, storage, databases, networking, software, analytics and intelligence – over the internet. It provides on-demand availability of computer-system resources, without direct active management by the user, through data centres shared by many users simultaneously.

Clouds can be private or public. Using a private cloud is like renting a house: the cloud provider manages and secures the environment, which is for the exclusive use of one user. A public cloud is like renting a flat in an apartment block: each flat is for the exclusive use of one tenant, while utilities (storage, compute power and so on) are shared among all tenants and the overall environment is managed and secured by the cloud provider.

The term “hyperscale” refers to scalable cloud-computing systems in which many servers are networked together. The number of servers used at any one time can increase or decrease in response to changing requirements, which means the network can efficiently handle both large and small volumes of activity, avoiding under- or over-utilisation and resulting in cost savings.

What do hyperscale cloud technologies offer?

Hyperscale cloud technologies offer significant economies of scale and potential data-security advantages, and they also lower barriers to innovation, enabling governments and businesses to affordably increase their computing capacity many times over. This plays a critical role in enabling the development of local technology ecosystems. As Covid-19 has accelerated digitisation around the world, we have seen demand for cloud services increase: for 2021, worldwide spending on public cloud services is expected to rise by 18 per cent to $305 billion, up from $258 billion in 2020.

In technical terms, hyperscale cloud technologies provide ready access to data storage and computing capabilities that vastly exceed the functionality, scale and performance of on-premises alternatives. This means that developers can focus instead on building new products and services, which in turn can easily cope with huge user growth or the addition of more demanding features.

In economic terms, they perform a particular sort of value exchange that is only possible because of the internet. A cloud provider bears significant fixed costs (for power, bandwidth, memory and processors) and generates an ongoing stream of revenue by selling different services. Conversely, a company using cloud services reduces its capital-expenditure (capex) requirements and instead incurs operating-expenditure (opex) costs that scale with its growth.

These two complementary phenomena play a fundamental role in internet-era innovation and growth. Sophisticated apps with lots of active users are not possible without cloud infrastructure – and the companies that build these apps find it much easier to sustain themselves when they are able to scale operating expenses in line with revenues.
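The capex-to-opex shift described above can be made concrete with a toy cost model. All figures here are hypothetical, chosen only to illustrate the structure of the trade-off:

```python
# Toy comparison of on-premises capex vs cloud opex (illustrative figures only).

def on_prem_cost(years: int, upfront: float = 100_000, annual_maintenance: float = 10_000) -> float:
    """Fixed upfront investment plus flat maintenance, regardless of usage."""
    return upfront + annual_maintenance * years

def cloud_cost(monthly_users: list[int], cost_per_user: float = 0.50) -> float:
    """Pay-as-you-go: spend scales with actual usage each month."""
    return sum(users * cost_per_user for users in monthly_users)

# A hypothetical startup growing from 1,000 to 12,000 users over a year.
growth = [1_000 * m for m in range(1, 13)]
print(f"On-prem, year one: ${on_prem_cost(1):,.0f}")
print(f"Cloud, year one:   ${cloud_cost(growth):,.0f}")
```

Under these illustrative assumptions, the cloud route costs a fraction of the on-premises one in year one – and, crucially, the spend tracks actual usage rather than a capacity guess made at purchase time.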

It is important to recognise that although cloud services rely on physical hardware, constructing them involves significantly more than simply replacing an on-premises computer with a better one at the other end of an internet connection.

Cloud services can scale seamlessly thanks in large part to an approach known as virtualisation. Rather than each application running on its own machine, resources are pooled together and dynamically allocated according to what each application requires. This avoids spare capacity going to waste and provides the flexibility to handle sudden peaks in demand.
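The utilisation gain from pooling can be shown in a short sketch. The demand figures are hypothetical; the point is that pooled capacity only needs to cover the peak of the combined demand, not the sum of each application's individual peak:

```python
# Sketch of why pooling beats dedicated machines: peak-of-sums vs sum-of-peaks.
# Hourly demand (in CPU cores) for three applications over a day (hypothetical).
app_a = [2, 2, 8, 2, 2, 2]
app_b = [2, 2, 2, 8, 2, 2]
app_c = [2, 2, 2, 2, 8, 2]

# Dedicated hardware: each app must be provisioned for its own peak.
dedicated = sum(max(app) for app in (app_a, app_b, app_c))

# Virtualised pool: capacity only needs to cover the combined demand's peak.
pooled = max(a + b + c for a, b, c in zip(app_a, app_b, app_c))

print(f"Dedicated capacity needed: {dedicated} cores")  # 24
print(f"Pooled capacity needed:    {pooled} cores")     # 12
```

Because the three applications peak at different times, the shared pool serves all of them with half the hardware that dedicated provisioning would require.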

Redundancy is also a key benefit of cloud technologies. Cloud services help to minimise the impact should any individual component suffer an unexpected problem, with applications “failing over” – that is, switching to alternative “redundant” resources – without end users experiencing any disruption to their service. In fact, this model can be applied not just within data centres but also across them, so that even if a whole data centre goes offline another one on the network will simply take over automatically.
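A minimal sketch of the failover pattern described above, with hypothetical replica names and a simulated outage:

```python
# Minimal failover sketch: try each replica in turn, so a single failure is
# invisible to the caller. Replica names and the outage are hypothetical.

def query_replica(name: str) -> str:
    if name == "eu-west-1a":          # simulate an unexpected outage
        raise ConnectionError(f"{name} is unreachable")
    return f"result from {name}"

def query_with_failover(replicas: list[str]) -> str:
    last_error = None
    for replica in replicas:
        try:
            return query_replica(replica)
        except ConnectionError as err:
            last_error = err          # note the failure, fail over to the next
    raise RuntimeError("all replicas down") from last_error

print(query_with_failover(["eu-west-1a", "eu-west-1b"]))  # result from eu-west-1b
```

Real cloud platforms implement this across machines, data centres and regions, but the caller-facing contract is the same: the request succeeds as long as any redundant resource is healthy.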

One implication of this approach is that applications and data running in the cloud are not usually tied to any specific piece of hardware – in other words, it only really makes sense to talk about them running within a specific data centre or region. The dynamic allocation of resources between different applications is handled entirely in software, enabling multiple applications to coexist on the same infrastructure without ever getting intermingled.

The combination of scale and automation provides significant security benefits. Cyber defences and threat detection can be reinforced to standards that would never be viable for a regular on-premises configuration. Software can be kept up to date without manual intervention, ensuring that vulnerabilities are patched promptly and removing the risk that local administrators neglect routine maintenance.

The concentration of infrastructure in a purpose-built location also makes physical security easier and more cost-effective. Because data centres are not doubling up as offices or other premises, the number of people on site can be far lower and entry restrictions can be imposed without causing inconvenience to others. The facilities can also be configured differently, optimising structures and layouts for resilience rather than accessibility.

Where is data stored?

The huge economies of scale enabled by cloud technology favour concentrating the necessary hardware in large data centres, located in places with reliable infrastructure and a stable political and economic environment. The US currently accounts for almost 40 per cent of major cloud and internet data-centre sites, although Europe, the Middle East, Africa and Asia-Pacific are growing faster; China, Japan, the UK, Germany and Australia collectively account for another 29 per cent of the total.

Global hyperscale cloud providers operate in just a handful of larger emerging markets, such as Brazil and South Africa; their data centres are mainly located in high-income and upper-middle-income countries. Among the hyperscale operators, Amazon, Microsoft and Google collectively account for over half of all major data centres, with other key players including Oracle, IBM, Salesforce, Alibaba and Tencent.

The major cloud providers operate data centres around the world, segmenting customers into regions that may span many countries or even whole continents. Customers typically choose which of the available regions to use for their data.


Figure 1 – Locations of Amazon Web Services, Microsoft Azure and Google Cloud data centres

What are the potential benefits for LMICs?

Countries without local hyperscale data centres are often the ones whose businesses, governments and citizens have the most to gain from opening up access to cloud infrastructure. Accessing world-class infrastructure and services with no upfront investment and pay-per-use pricing is a more promising model for developing countries than seeking to build their own. Using hyperscale data centres as an alternative to traditional on-premises infrastructure speeds up the deployment of basic services, reduces up-front capital costs, and enables high performance and scalability.

Governments and businesses in LMICs face challenges in gaining access to low-cost, efficient infrastructure, in implementing the extremely complex software needed to maximise computing power and storage, and in minimising maintenance and security concerns. Hyperscale cloud technologies can address these concerns. They lower barriers to entry, enabling businesses and governments to affordably increase their computing capacity many times over, and therefore support the rapid development, deployment and adjustment of new offerings. Typically, users pay only for the cloud services they use, helping them to lower operating costs, run infrastructure more effectively and scale as their needs change. By significantly reducing sunk costs and the cost of failure, hyperscale cloud technologies foster innovation and give businesses and governments more room to be agile: services and products can be rapidly prototyped, tested and adapted in response to shifts in market dynamics and citizens’ needs.

Cloud represents a significant opportunity for entire economies. India is one of the largest public cloud markets in Asia, and one of the fastest growing, projected by BCG to grow from $2.6 billion to $8 billion with a CAGR of 25 per cent between 2018 and 2023. According to BCG, the cumulative impact of cloud on the Indian economy will be $102 billion between 2019 and 2023, directly creating 240,000 jobs. The majority – 157,000 – will be jobs for digital and IT specialists. The remaining 83,000 jobs will be in non-digital roles such as sales, marketing, human resources, finance, logistics and operations.  

Cloud technologies allow organisations to manage and share vast amounts of data effectively, in a way that is secure and resilient to attack. Cloud can enable governments and businesses in LMICs to access artificial intelligence (AI) and machine-learning (ML) technologies that may otherwise be prohibitively expensive. AI and ML algorithms made available through the cloud can be used by governments and businesses to extract valuable, actionable insights from data, supporting decision-making.

The current pandemic has demonstrated that data and our ability to analyse them are critical to public-service delivery in the 21st century. The absence of comprehensive, real-time data has impeded the world’s ability to fight Covid-19. Many countries have struggled to coordinate testing regimes and implement effective contact tracing. Cloud-based data storage and software can enable health-care workers and medical devices to record data quickly, securely and accurately in a portable way. As countries roll out vaccination programmes, they must have in place a rigorous health-management system for documenting vaccinations and tests. Inevitably, being able to provide credible proof of test or vaccination status will be a prerequisite for reopening economies and restoring global travel. Health-tracking technologies will need to become more mainstream and can only be supported at scale by cloud platforms.

But the relevance of hyperscale cloud technologies for public-service delivery goes far beyond health care. They can enable governments to deliver new public services quickly and effectively across a range of sectors. For example, as learners were forced to stay at home during the pandemic the Egyptian government successfully leveraged Microsoft cloud services to bring schools online.

Hyperscale cloud technologies not only support governments in the delivery of public services and facilitate the development of technology ecosystems in LMICs, they can also play a significant role in levelling the playing field between mature and developing markets. If a business can host its application in the cloud, it can not only scale for the local market but also benefit from easy access to a global one. Analytical tools available in the cloud enable businesses to broaden their geographic scope and cultivate international markets.

Chapter 3

Part II: Policy Challenges and Recommendations

Policy challenges

Despite the big advantages of hyperscale data centres – performance, reliability and security – there are often tensions with national regulations. There are three main challenges that policymakers need to unpack to mitigate the risks.

First, as the internet has enabled more communication and trade across borders, it has also raised questions about where economic activity is taking place and which rules it operates under. As a result, governments are taking a much closer interest in questions of jurisdiction, particularly in relation to the data that modern services generate and rely on.

This is particularly relevant when personally identifiable information is part of the equation. Governments typically have legally defined powers to access certain data in certain circumstances, for example when conducting criminal investigations. Extraterritorial data centres are a concern for governments because they raise questions about whether their jurisdiction over their citizens’ data and online activities still applies. Too often, the solution adopted is to require all data to be stored locally, even if this limits the services available.

Second, the economies of scale inherent in hyperscale data centres favour a small number of large players able to manage the huge capital investment required. In practice this means the large US tech companies are the key global players (competing, in some regions, with Chinese counterparts). In an increasingly fractious geopolitical environment, there is growing political pressure to counter this and to use policy to foster local alternatives – even though, without extensive investment, these cannot deliver the same benefits. While this approach can help local technology-infrastructure companies by shutting out foreign competition, it comes at a price: there is less pressure to deliver the best service at the lowest price point, which in turn affects the features and performance of the apps and services offered to end users – be they consumers, businesses or government itself. In the longer term, local talent is incentivised to duplicate technologies and business models already commoditised in other parts of the world, rather than leveraging what is already available globally to expand into new fields, innovate and use services such as AI.

Third, storing and processing data in global data centres is assumed by many policymakers to introduce extra security risks compared to storing it locally. These sorts of concerns typically fall into two subcategories: that compared to an on-premises alternative at a known physical location, data stored in the cloud is (a) more likely to get lost or be unavailable, and (b) more likely to be deliberately stolen.

This assumption correlates with the mental models we all carry around for living our lives in a physical world: “seeing is believing” and so on. But there are good reasons to think that this is wrong when it comes to the internet, where the primacy of the virtual over the physical often leads to counterintuitive results.

The combination of scale and automation means that hyperscale cloud technologies can provide security benefits that on-premises infrastructure cannot. The same technologies that make cloud infrastructure so flexible also bolster redundancy, making it easier to keep data backed up and keep systems running seamlessly even if part of the network is down for maintenance. And the risk of successful exfiltration is lowered dramatically by keeping data on systems that employ strong encryption by default, and are behind cyber defences capable of detecting and neutralising cyber-security risks that pay no heed to geography.

Policy solutions

Governments are right to be mindful that cloud technologies and internet-based services are an emerging model, bringing new risks as well as opportunities.

To access the significant benefits of global cloud infrastructure while mitigating the risks – both real and perceived – there are several practical steps that governments should take over time. One of the key challenges to the adoption of cloud infrastructure is that national, regional and global norms around data governance have not kept pace with evolving technologies and practices. In the short term, governments need to ensure the secure use of cloud technologies. In the medium term, countries should come together to cooperate on data governance and infrastructure. In the longer term, a reimagining of global data governance and data sovereignty is necessary.

Immediate steps: pragmatic adoption of cloud infrastructure

Governments should consider putting the following basic conditions in place for digital products and services to make immediate use of global cloud infrastructure securely and on terms that protect countries’ rights and access – while granting exceptions to rules that currently bar the use of this infrastructure.

  1. Public-sector organisations should carefully consider cloud-based options when procuring new or existing services, and should put in place a robust and clearly defined evaluation process. Where public cloud solutions are used to support digital public services, particular care should be taken to ensure that the country’s data is secure and can only be accessed by the government of that country, its nominated agents and/or citizens. Given the significant benefits of cloud technologies, some countries are prioritising cloud adoption in the public sector. The UK, for example, has adopted a cloud-first strategy, meaning that public-sector organisations must consider and fully evaluate cloud solutions before any other option. If a government agency selects an alternative to cloud, it must be able to demonstrate that the alternative offers better security, flexibility and/or value for money.

  2. All sensitive data, including personally identifiable data, should be protected with strong encryption, both in transit (for instance, when being transmitted from a user’s phone to the cloud) and at rest (when stored in a data centre). Properly implemented encryption is the single best security measure for protecting citizens’ data.

  3. Private keys for decrypting personally identifiable data should be managed by citizens, their nominated agents and/or the government (with appropriate legal controls and regulations for use). Under this model, data-centre operators will know how much storage is being used and may know some metadata (for instance how many rows and columns there are in a database) but will not have access to raw data without permission.

  4. The principle of data minimisation – that collected data should not be used or retained for longer than is necessary for its original, stated purpose – should be enshrined in regulation and verified through audits, certifications and attestations. This will ensure compliance with regulation and industry best practice. By default, third parties should not be granted access to full sets of stored raw data; only answers to specific queries should be communicated.

  5. If possible, the data centre where the data are processed and stored should be located in a regional jurisdiction where the country has a stake in the rules governing it. African governments, for example, might prefer to use a data centre based in South Africa to one based in Europe or the US. There are pragmatic reasons for this. The first relates to better technical performance as data centres in the region will typically have lower latency, enabling faster data transfers. Latency is a measure of the time it takes a packet of information to travel between two points. It can be thought of as the delay that taxes any data transfer, no matter how fast the connection otherwise is. The second relates to the issue of data management and control: where data can flow, who has potential access and under what circumstances. Regulatory and compliance obligations to ensure data protection and privacy can vary between nations; however, policy cooperation and alignment may be more attainable at a regional level than at a global one through regional bodies.
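The encryption and key-management conditions above (points 2 and 3) can be sketched in miniature. This is a structural illustration only: the XOR keystream used here is not a secure cipher, and a real deployment would use a vetted scheme such as AES-GCM, with keys held in a customer-controlled key-management service.

```python
import hashlib
import os

def keystream_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream derived from SHA-256. Structure demo only -- NOT a secure cipher."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# The citizen (or their government) holds the key; the data-centre operator never does.
citizen_key = os.urandom(32)
record = b"name=Ada; vaccination=2021-05-01"   # hypothetical record

# Encrypted before it reaches the data centre, and stored only as ciphertext.
stored_ciphertext = keystream_cipher(citizen_key, record)

# The operator can see metadata (such as size) but not the raw data...
operator_view = {"bytes_stored": len(stored_ciphertext)}

# ...while a key holder can recover the plaintext on demand.
assert keystream_cipher(citizen_key, stored_ciphertext) == record
print(operator_view)
```

The design point is the separation of roles: the operator stores and serves bytes it cannot read, while access to meaning stays with whoever holds the key.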

These are pragmatic steps that recognise that both building new data-centre infrastructure and securing fundamental legislative reforms take time. With these conditions in place, countries can maximise the immediate benefits of existing cloud infrastructure while giving citizens confidence that their data is safe.
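Latency, mentioned in point 5 above, is straightforward to measure empirically. The sketch below times a TCP connection against a local socket so that it is self-contained; in practice the same function would be pointed at candidate cloud regions' endpoints to compare them:

```python
import socket
import time

def tcp_connect_latency(host: str, port: int, timeout: float = 2.0) -> float:
    """Time (in milliseconds) to establish a TCP connection: a rough proxy for network latency."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

# Measure against a local listener so the example runs anywhere;
# real comparisons would target each provider region's endpoint.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
listener.listen(1)
host, port = listener.getsockname()

print(f"Connect latency: {tcp_connect_latency(host, port):.2f} ms")
listener.close()
```

Run against regional endpoints, a measurement like this gives governments hard numbers with which to weigh a nearby data centre against a distant one.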

Medium-term steps: intergovernmental cooperation

With a little more time and willingness to cooperate, governments can work together to provide additional comfort around broad cloud adoption. The key steps for policymakers to focus on are:

  1. Investing in enabling infrastructure – in this case, primarily power and internet connectivity – to increase the number of viable sites for the rollout of new hyperscale data centres around the world. The requirements of the centres are likely to be extensive, necessitating both high performance and significant redundancy to ensure continuous operation. Rather than every country racing to meet these in isolation, it may often make more economic sense to partner with neighbours to share the investment required.

  2. Working more intensively on regional policy cooperation, including common regulations to cover things like cloud use, data sharing and data protection. Achieving this sort of alignment helps set a clear precedent for countries to follow when questions arise – and gives all participating governments comfort that they have a stake in the rules governing data centres located outside their borders but within their wider region. From the operators’ perspective, regional policies that help to define markets and reduce ad-hoc barriers increase their confidence to make the huge investments required to grow their regional footprint. In Africa a number of discussions are underway, including the Cloud and Data Centers for Africa initiative, hosted as part of the Smart Africa portfolio, and the African Union Convention on Cyber Security and Personal Data Protection. There have also been moves in Europe, with the General Data Protection Regulation (GDPR) harmonising data-protection rules and member states signing a joint declaration to create a European cloud with shared technical solutions and policy norms.

  In addition to coordinating regulation, the World Bank’s 2021 World Development Report argues that countries should coordinate on infrastructure. It proposes that countries with hyperscale data centres should encourage the provision of cloud on-ramps, so that data are transmitted to the cloud through private connections – via domestic internet exchange points (IXPs) located in local colocation data centres – rather than over the public internet. The benefits for countries without hyperscale data centres include improved performance, as well as greater security and reliability, because data are not transmitted to the cloud over public infrastructure. Facilities such as colocation data centres can be shared at the regional level, provided fibre-optic connectivity exists between countries and there is sufficient regulatory harmonisation. Despite these potential benefits, on-ramps are currently available in only 10 per cent of middle-income countries and in no low-income countries, compared with 80 per cent of high-income countries.

  3. Cooperating at a global scale. International trade requires data to be shared between countries, but doing so raises concerns about privacy, data-flow governance and trust. Addressing these requires data-protection regulators to work together, international bodies to establish standards, and trade deals to set mutual expectations. Data sharing and protection are appearing in international trade negotiations: the UK–Japan free trade agreement, for example, supports the free flow of data between the two countries. However, it is critical that policymakers consider the implications and potential trade-offs, especially as they relate to privacy.

Longer-term ambition: developing a new approach to 21st-century data governance

In the longer term, a more ambitious rethink of the rules governing global cloud use is required. Everything described up to this point has its roots in the history of the physical world where, in the end, location on a map is everything. But the new reality of the 21st century is that the internet defines the operating environment for governments – and the best and most sustainable policies are ones that work with it, rather than trying to fight it.

In practical terms, forward-looking countries and coalitions should be focused on developing a new approach to 21st-century infrastructure that breaks with the traditions of the past. The World Bank’s 2021 World Development Report argues that forging a new social contract around data and creating more equitable access to the benefits of data are two of the key development priorities of the digital age. Such an approach would have two key components:

  1. The hierarchy of rules governing how data is stored and processed needs to be inverted so that the owners of the data – be they citizens, businesses or governments – have primacy over the authorities that happen to administer the physical territory on which a data centre is built. Of course, there must be mechanisms in place to provide a degree of transparency, and protect against criminal use and other abuse, and independent forums to arbitrate disputes. But the countries hosting data centres should work with others to improve global governance, and not just rely on their ability to impose rules and regulations unilaterally based on an accident of geography.

  2. A new global convention needs to accommodate new notions of sovereignty in cyberspace. The internet and the global digital commons are one of humanity’s greatest achievements, and all countries should fight to keep the internet open and prevent it from splintering apart. Solid foundations for privacy and data protection are an important part of this. Just as there is a longstanding convention that the ground on which physical embassies abroad are built is treated as sovereign territory, so a new convention should treat properly defined digital spaces as sovereign territory in the digital realm. This will require both policy and political imagination as well as technical work on standards and definitions. The notion of new virtual jurisdictions defined by software rather than by lines on a map is critical for the development of a common 21st-century approach, on a par with the great global treaties and conventions of the 20th century.

The good news is that innovative approaches to data regulation are starting to be tested. To address the challenge of cyber-security, Estonia has launched the world’s first “data embassy” in partnership with the government of Luxembourg. Estonian data and related systems are stored in Luxembourg’s government-owned data centre. The data embassy is an extension of the Estonian government cloud, meaning that the Estonian state owns server resources outside its borders. These will be used not only for data backup but also for operating critical services. While the founding agreement takes into account the Vienna Convention on Diplomatic Relations, it is something completely new under international law: the data embassy is fully under the control of Estonia and has the same rights as physical embassies, including immunity. Estonia has full jurisdiction over the data, and Luxembourg officials may not enter without permission. This could set a precedent for other countries concerned about data security.


Cloud technologies are a critical enabler of effective e-governance and flourishing tech ecosystems and can support stronger service delivery in sectors such as agriculture, transport, health, finance, education and energy. They allow data to deliver much greater value for individuals and communities. For example, cloud services can help farmers make better decisions for managing their crops, revolutionise supply chain and logistics management, or support governments to manage large-scale vaccination programmes. Access to the global cloud is a critical enabler of a digital future; political leaders must act now to embrace it on terms that safeguard their users and their data, and which deliver maximum value for individuals and communities.


