
What Is Artificial Intelligence?


Explainer, 5th November 2018

Artificial intelligence (AI) is not a new concept. Although the term has surged into the public consciousness in the last few years, academics have been studying the field since the 1950s. What has changed is the technology to turn theoretical insights into practical applications. Advances in storage and computing power, and the explosion of data after the worldwide expansion of the Internet, have moved algorithms and automation out of universities and R&D departments and into the mainstream, affecting all parts of society.

This explainer is for politicians and policy professionals who want to understand more about where things currently stand. It includes a short primer on the basic terms and concepts, and a summary of the main issues and challenges in the policy debate.


Chapter 1

The Basics

The explosion of interest in AI has been accompanied by an explosion of buzzwords and jargon. Many of these terms are used interchangeably, and it’s not always clear how they relate to each other. This explainer starts with the basics: first, the broad concept of artificial intelligence; second, some of the main approaches to machine learning; and third, how deep neural networks can be used to handle even very complex problems. These three concepts can be conceived of as subsets of one another (see figure 1).

Figure 1

A Schematic Representation of AI, Machine Learning and Deep Learning


Artificial Intelligence

AI is a large topic, and there is no single agreed definition of what it involves. But there seems to be more agreement than disagreement.

Broadly speaking, AI is an umbrella term for the field in computer science dedicated to making machines simulate different aspects of human intelligence, including learning, decision-making and pattern recognition. Some of the most striking applications, in fields like speech recognition and computer vision, are things people take for granted when assessing human intelligence but have been beyond the limits of computers until relatively recently.

The term “artificial intelligence” was coined in 1956 by mathematics professor John McCarthy, who wrote,

The study is to proceed on the basis of the conjecture that every aspect of learning and any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.

The approach to achieving this ambitious goal has changed over the years. Researchers in the 1950s focused on direct simulation of human intelligence, for example by attempting to specify the rules of learning and intelligence explicitly in computer code. Research today tends to rely on less precisely defined systems that improve with experience. The aim is to build intelligence that can benefit society and help solve human problems by understanding the principles of human intelligence, even if the exact structure of the brain is not replicated.

A good place to gain an intuitive understanding of the difference between these two approaches is the progress made by AI researchers on the games of chess and Go. In 1997, IBM’s Deep Blue system beat the then world chess champion, Garry Kasparov. Deep Blue was a rules-based system (sometimes described as hard coded) that took the rules provided by its programmers and used its immense computational capacities to look as far ahead as possible in the set of potential moves and countermoves to weigh up the best course of action.

That approach has been far less successful in Go, a game that has significantly more potential futures and requires a higher level of strategy and intuition. In 2016, DeepMind’s AlphaGo beat the world champion, Lee Sedol. In contrast to Deep Blue, AlphaGo did not start with pre-structured knowledge of the game. It still used immense computational force but learned by developing its own structure for understanding the game, based on previous matches played by humans as well as itself. AlphaGo was subsequently defeated by a new iteration called AlphaGo Zero, which was trained entirely through playing against itself, with no data on human matches.

Machine Learning

AlphaGo is an example of an approach to AI known as machine learning (ML). The approach was formalised in 1959 as the field of computer science dedicated to making machines learn by themselves without being explicitly programmed. ML systems progressively improve at specific tasks based on experience, in the form of previous or historical data. In the seminal paper that first defined the term, computer scientist Arthur L. Samuel explained the motivation behind the approach:

There is obviously a very large amount of work, now done by people, which is quite trivial in its demands on the intellect but does, nevertheless, involve some learning. We have at our command computers with adequate data-handling ability and with sufficient computational speed to make use of machine-learning techniques, but our knowledge of the basic principles of these techniques is still rudimentary. Lacking such knowledge, it is necessary to specify methods of problem solution in minute and exact detail, a time-consuming and costly procedure. Programming computers to learn from experience should eventually eliminate the need for much of this detailed programming effort.

The basic idea that guides machine learning is that, armed with enough data, a programmer can train an algorithm to perform a given task without having to spell out every rule in advance. Rapid advances in data storage and computational speed, and the curation of better and larger data sets on which to train ML systems, have resulted in significant progress in applying this technique to an ever-growing range of problems.

Each ML application has three main components: representation, evaluation and optimisation. Representation means choosing the right type of model (for example, a neural network or a decision tree) to represent knowledge of the problem. Evaluation is the measure used to tell how good the model is at a given task while training or testing it (for instance, how many false positives or false negatives it produces). Finally, optimisation means choosing among techniques for improving the model against the evaluation measure chosen in the previous step.
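To make these three components concrete, the sketch below uses scikit-learn (a library chosen here purely for illustration, not mentioned above): a decision tree is the representation, accuracy is the evaluation, and a small grid search over the tree's depth is the optimisation. It is a minimal sketch rather than a recipe.

```python
# A minimal sketch of the three components of an ML application,
# using scikit-learn and a bundled example data set for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)          # example data set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Representation: choose a model family (here, a decision tree).
model = DecisionTreeClassifier(random_state=0)

# 2. Evaluation: decide how to score the model (here, accuracy).
# 3. Optimisation: search over settings to improve that score.
search = GridSearchCV(model, {"max_depth": [2, 4, 8, None]}, scoring="accuracy")
search.fit(X_train, y_train)

print("best depth:", search.best_params_)
print("test accuracy:", accuracy_score(y_test, search.best_estimator_.predict(X_test)))
```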

There are also three main approaches to ML: supervised learning, where a system is trained on examples labelled by humans; unsupervised learning, where a system is trained on data without such labels; and reinforcement learning, where a system improves iteratively based on a system of rewards and punishments.

Supervised Learning

When it comes to learning from past data, a common approach is to ask a computer to sort data into predefined categories. It is assumed that there is a right or wrong answer when the computer is asked to place a case into the correct pile, and the aim is to teach the computer to do just that: given a case, say where it should go. This approach therefore starts with a training set, which includes data and pre-placed labels denoting which pile each item belongs in. The model is then left to adjust itself until it gives the right answers for that training set with as few errors as possible.

It is important not to overfit: the danger of building a model that is highly accurate on the pre-labelled training data but fits those original examples so closely that it fails badly on new examples not included in the training set. Classification and regression are the two most common types of supervised-learning problem.

Many popular machine-learning applications are examples of supervised learning. Spam e-mail filters were trained in just this way: the pre-labelled training data consisted of e-mails known to be spam and normal e-mails the user did not want to end up in the spam folder. Models were then trained to answer correctly: should a new e-mail go into the spam folder or remain in the inbox? This is a classification problem. The learning does not have to end with the original training set, however. With every e-mail you find in your spam folder and bring back into your inbox, or find in your inbox but manually mark as spam, the model keeps adjusting itself, and thus learning.
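A toy version of such a filter might look like the sketch below. It assumes scikit-learn and a tiny, invented set of pre-labelled e-mails; a real spam filter would be trained on vastly more data.

```python
# A toy supervised spam filter: pre-labelled examples in, a classifier out.
# scikit-learn and the tiny invented data set are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now",        # spam
    "cheap loans click here",      # spam
    "meeting agenda for monday",   # not spam
    "lunch with the team today",   # not spam
]
labels = ["spam", "spam", "ham", "ham"]   # the pre-placed labels

# Train: turn each e-mail into word counts, then fit a naive Bayes classifier.
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)

# Classify a new, unseen e-mail.
print(spam_filter.predict(["free prize meeting"]))
```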

Similarly, teaching a model to distinguish benign-looking medical scan results from dangerous ones is engaging in supervised learning. Many supervised models can also sort data into many more categories (answering questions with more than yes or no), such as models that can name a disease based on a patient’s symptoms.

Unsupervised Learning

The questions that unsupervised learning can help answer are slightly different. Sometimes the categories that data need to be sorted into are not defined in advance; what matters is understanding which general categories exist in the data to begin with, and which cases are more similar to, or dissimilar from, one another. Unsupervised learning therefore does not require a pre-labelled training set: it takes the raw data as they are and simply groups them into piles based on similarity. The only other direction the model needs is how many piles or categories it should end up with.

At first glance this might seem an appealing option. After all, it takes less effort, because there is no need for an initial phase of labelling data. However, the results of these models are correspondingly vaguer, and the resulting piles still have to be labelled after the fact. An unsupervised model can thus work through large amounts of data quickly and provide an initial sorting that supports qualitative interpretation, but it cannot give a concrete answer to a question in the way supervised models can.

Unsupervised learning is not as popular as supervised learning, but it is sometimes the better option. When researchers need to work through copious amounts of data simply to understand their nature, unsupervised learning can be far preferable to toiling over the creation of pre-labelled data sets.

To put this in concrete terms, consider a researcher who wants to understand common themes in newspaper articles. Instead of predefining for the model which themes it is looking for, an unsupervised approach would be to look through possible piles generated by an algorithm. Start with five piles, consider six and four, and readjust that number until a clear and satisfying picture emerges: one pile for the economy, one for foreign affairs, one for sports and another for entertainment.
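A minimal sketch of that workflow, again assuming scikit-learn and a handful of invented headlines, is shown below. The only direction the model receives is the number of piles, and the piles still have to be interpreted and named by a human afterwards.

```python
# Unsupervised grouping of short texts into a chosen number of "piles".
# scikit-learn and the invented headlines are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

articles = [
    "central bank raises interest rates",
    "stock markets rally on growth figures",
    "prime minister meets foreign leaders",
    "new trade deal signed with neighbouring state",
    "home team wins the cup final",
    "star striker transfers for record fee",
]

features = TfidfVectorizer().fit_transform(articles)

# The only direction given: how many piles to sort the articles into.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

for article, pile in zip(articles, kmeans.labels_):
    print(pile, article)   # piles must still be labelled by a human afterwards
```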

Similarly, this method could be applied to look for common themes in speeches by politicians or, when combining multiple unsupervised models, to understand the difference in the coverage of the same news stories across multiple news sources.

Reinforcement Learning

The last prong of machine learning is potentially the most ambitious. Rather than learning patterns in a data set and deducing an answer from them, the aim of reinforcement learning (RL) is to teach a system a set of steps or, more generally, a desired behaviour. RL relies on giving a model rewards for taking the right steps and optimising for the highest reward possible: while supervised-learning models learn by trying to minimise error, reinforcement-learning models try to maximise reward. An agent or model trained through RL combines exploration and exploitation moves. Exploration means taking novel steps without knowing whether they will lead to a reward, while exploitation means taking known steps that have led to rewards in the past.

This may sound a bit like a computer game, where the goal is to collect as many rewards as possible. Not coincidentally, RL is an easy choice when trying to teach a system how to play a game. However, this approach can often have merit in the real world as well, for example when teaching a robot how to walk, climb stairs or get up.
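One common way to balance exploration and exploitation is an epsilon-greedy rule, sketched below with invented reward probabilities: most of the time the agent exploits the action it currently believes is best, but a small fraction of the time it explores at random.

```python
# A minimal epsilon-greedy agent: mostly exploit the best-known action,
# occasionally explore a random one. Reward probabilities are invented.
import random

reward_probs = [0.2, 0.5, 0.8]            # hidden payoff chance of each action
estimates = [0.0, 0.0, 0.0]               # the agent's learned value of each action
counts = [0, 0, 0]
epsilon = 0.1                             # fraction of moves spent exploring

for step in range(10_000):
    if random.random() < epsilon:
        action = random.randrange(len(reward_probs))                     # explore
    else:
        action = max(range(len(estimates)), key=lambda a: estimates[a])  # exploit
    reward = 1.0 if random.random() < reward_probs[action] else 0.0
    counts[action] += 1
    # Update the running average reward for the chosen action.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned values:", [round(v, 2) for v in estimates])
```

Run for long enough, the agent's value estimates approach the hidden reward probabilities, and most of its moves exploit the best action it has found.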

Deep Learning

Because the aim of AI and ML is often to perform tasks associated with human intelligence, it was only natural that scientists began to explore whether our understanding of the brain could help develop artificial reasoning capabilities. (Indeed, the view of the brain as a complex computational system has been a prominent theme in neurobiology and cognitive psychology.)

In fact, the first attempt to build a computational model based on the structure of neurons in the brain, an artificial neural network (ANN), predates the first discussions of AI and ML.

Deep learning (DL) systems are advanced forms of ANNs. The term was first introduced to ML research in 1986, and the word “deep” reflects the scale and complexity of these systems compared with earlier iterations. In an ANN, each artificial neuron takes a piece of data as an input, processes it based on a simple rule and produces a new piece of data as an output. In a deep-learning model, multiple neurons are organised to work in parallel in a layer, and multiple layers work sequentially, using the output of the previous layer as an input (see figure 2).

Figure 2

A Deep-Learning Model


Connected in this way, a collection of individual units, each capable of performing only a small calculation, can accomplish tasks of considerable complexity. For example, an image-recognition model attempting to identify an image of a cat works as follows: in the first layer, each artificial neuron detects the simplest possible shapes; the second layer takes these outputs and processes them in combination to detect possible cat features (fur, whiskers, ears); and subsequent layers process combinations of combinations, with additional layers capturing increasingly complex features, until the model can identify the image as that of a cat.
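The layered structure described above can be sketched in a few lines of NumPy (an illustrative choice; the sizes and random weights below are arbitrary): each layer applies a simple rule, a weighted sum followed by a non-linearity, to the previous layer's output.

```python
# Forward pass through a small feed-forward network: each layer takes the
# previous layer's output as its input. Sizes and weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer: a weighted sum of inputs, then a simple non-linear rule (ReLU)."""
    return np.maximum(0, inputs @ weights + biases)

x = rng.random(64)                     # e.g. a tiny flattened image

# Three layers working sequentially, each feeding the next.
h1 = layer(x,  rng.standard_normal((64, 32)), np.zeros(32))   # simple shapes
h2 = layer(h1, rng.standard_normal((32, 16)), np.zeros(16))   # combinations
out = layer(h2, rng.standard_normal((16, 2)), np.zeros(2))    # "cat" vs "no cat" scores

print(out)
```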

Deep learning has been applied to many aspects of machine learning and has dramatically improved the performance of many ML applications. It has also given rise to more specialised architectures. Convolutional neural networks (CNNs), in which the processing layers are organised in three dimensions (height, width and depth) and each layer is connected not to all of the next layer but only to a small portion of it, are more efficient at handling complex, structured data. They are therefore widely used for image recognition and natural language processing.
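A minimal sketch of such a network, assuming PyTorch as the framework (the text does not name one) and arbitrary layer sizes, is shown below; each convolution connects only to a small window of the previous layer rather than to all of it.

```python
# A tiny convolutional network: each convolution looks only at a small
# window (here 3x3) of the previous layer, not at every unit in it.
# PyTorch and the layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   # input has height, width and depth
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 2),                    # "cat" vs "no cat" scores
)

image = torch.randn(1, 3, 32, 32)                # one 32x32 colour image
print(cnn(image).shape)                          # torch.Size([1, 2])
```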

Generative adversarial networks (GANs) use two neural networks, where one tries to fake examples that look like data from a training set, while a second network acts as a judge and tries to tell apart real training examples from ones faked by the first network. GANs were introduced in 2014 and became incredibly popular, accomplishing tasks like creating photorealistic images and inferring 3D shapes from 2D images. They are also expected to push unsupervised learning forward, by being able to generate data for themselves and automate the process of labelling data for learning models.
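The two-network structure can be sketched as below, again assuming PyTorch; the sizes are arbitrary, the “real” data are random stand-ins, and only the two competing objectives are shown, not a full training loop.

```python
# The two networks of a GAN in miniature: a generator that fakes samples
# from noise and a discriminator that judges real vs fake.
# PyTorch, the sizes and the random "real" data are illustrative assumptions.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 28 * 28))
discriminator = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 1))
loss = nn.BCEWithLogitsLoss()

real = torch.randn(32, 28 * 28)                 # stand-in for real training images
fake = generator(torch.randn(32, 16))           # generator fakes from random noise

# Discriminator objective: tell real (label 1) from fake (label 0).
d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
         loss(discriminator(fake.detach()), torch.zeros(32, 1))

# Generator objective: make the discriminator call its fakes real.
g_loss = loss(discriminator(fake), torch.ones(32, 1))

print(d_loss.item(), g_loss.item())
```

In training, the two objectives are minimised in alternation, so each network improves against the other.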


Chapter 2

Policy Concerns

AI, ML and DL, together and separately, have shown much progress in accomplishing tasks that were once understood to be exclusively in the realm of human capacity. But alongside the promise and excitement, they also give rise to debates and deliberations affecting society as a whole—questions that seep into policy and ethics.

Algorithmic Transparency

The growing complexity of the techniques described above, and of deep learning in particular, can make it very hard to explain exactly how systems get from the data provided as inputs to the conclusions they produce as outputs. Consider, for example, a supervised deep-learning system trained to identify pictures of cats. It is possible to input a new picture and get an output “cat” or “no cat”, but it is hard to ascertain from the model what in that particular picture led to that specific conclusion.

Researchers have begun to develop new tools to make popular algorithms easier to audit and better able to explain their decisions, but these are still far from perfect. For policymakers, then, as these techniques become harder to audit, there is a real question about the extent to which AI can be employed without affecting due process. While misidentified cats are unlikely to be bothered, mortgage applicants who believe they were mistakenly given a higher interest rate probably would be. This has led scholars in the growing field of AI transparency and fairness to propose rules of conduct for the use of such systems, including recommendations against using black-box algorithms (computational models whose reasoning cannot be audited or assessed) for government-led decision-making.

Algorithmic Fairness

Growing concern about the transparency of algorithms is part of a wider debate about the more fundamental fairness and ethics of using machine learning or AI techniques in decision-making that can have grave social implications. This has manifested itself in cases of sentencing algorithms that have come under fire for potential racial bias, and in health insurance schemes targeted at people based on data unknowingly harvested from individuals or groups.

As AI continues to advance, policymakers and researchers will have to contend with some difficult questions, on everything from the form and amount of data collected to train new systems through to how to design systems that avoid amplifying existing bias and discrimination. And if algorithms that help shape the future rely at their heart on past data, entirely new solutions may be needed to escape from historical trends. It is not yet clear how algorithmic ethics will evolve, but many researchers now accept the need to include different disciplines and perspectives in the process of developing AI- and ML-based techniques.

Artificial General Intelligence

One possible goal of AI is to achieve a level of intelligence that is more akin to human intelligence—one that can generalise across tasks and reasoning capacities, rather than being focused on specific tasks and receiving training restricted to narrow domains.

Artificial general intelligence (AGI) is currently being researched by at least 45 bodies, the most prominent of which are DeepMind, The Human Brain Project and OpenAI. Although many people are optimistic about the potential for AGI to improve the world, many others take a more cautious view. Physicist Stephen Hawking and entrepreneur Elon Musk have warned that an AGI that improves on itself in an ever-faster feedback loop could pave the way to human extinction. A similar notion known as the singularity, popularised by author and computer scientist Ray Kurzweil, emphasises discontinuities associated with exponential growth, and is often associated with the idea of rapid acceleration from AGI to artificial superintelligence—for better or worse.

There is little consensus on when to expect AGI: practitioners’ estimates range from less than ten years to more than 100, while some argue it should never be built.

AI and Geopolitics

There have been many reports of an AI arms race, primarily between the US and China but also encompassing a second rung of competitors. It has become almost fashionable for governments to announce a national AI strategy, but as tech entrepreneur Ian Hogarth has written, competition between strategies risks escalating into a new form of geopolitics he calls “AI nationalism”.

Machine learning will transform economies and militaries, risking instability at both the national and the international level. As such, it may catalyse tensions in an already difficult international order. It will therefore be the job of global negotiators and policymakers to assess how to respond to AI nationalism, and how to build a truly cooperative, global movement to treat AI as an omni-use public good.


Chapter 3

Related Topics

Discussions about AI often wander into other, closely related topics, particularly in relation to the future of work and jobs and to the data that are amassed by many digital products and services as the basis for training AI systems. Here are a few terms that crop up frequently but are not always well understood.

Automation

Automation is a general term used to describe the process by which tasks that used to be done by people, or even animals, are increasingly done by machines. This does not necessarily relate to AI (a washing machine or dishwasher also fits this description), but in the current debate it usually refers to the kind of transformation powered by ML and AI. Curating news feeds, recommending content and dispatching taxis are all examples of tasks that no longer require human input.

Automation can be a controversial topic and is often associated with job losses and unemployment. The long-term impact is unclear, however: PwC has noted that AI may boost global GDP by $15 trillion by 2030, yet 30 per cent of jobs are potentially at risk of automation. What is certain is that automation will continue, and the capabilities of AI and ML systems will advance into ever-wider applications; driverless cars are a high-profile example. Many commentators thus talk about AI as an essential factor in the advance of the “fourth industrial revolution”.

Big Data

An explosion in information and data has been under way and acknowledged for at least 50 years, but the term “big data” came to the foreground in the latter half of the 1990s, with the advance of the World Wide Web. Definitions vary, but broadly speaking the term is used to describe data sets so big that traditional desktop tools like spreadsheets are no longer useful for working with them.

Although big data can be used for simple analyses and descriptive statistics, they have also powered some of the most significant advances in AI and ML by massively increasing the data available to train learning algorithms. This is why the word “model” is often used interchangeably with “system” or “algorithm”: many ML and DL techniques rely on mathematical models of real-world objects and questions, built from these training data.

The term “big data” also crops up in relation to the Internet of Things (smart objects, often in the home, that are connected to the Internet and can talk to one another) and cloud computing (the delivery of computing services such as servers, storage, databases, networking and analytics over the Internet), given the scale of data produced through these services.

Personal Data

In many cases, the data accumulated in the past few decades are not only more extensive than they were traditionally; they are also far more personal. More than ever before, large organisations including businesses and governments hold individual profiles, action logs and preferences of users that are personally identifiable and often linked directly to real-world identities.

This can be hugely beneficial for technology companies, many of which know their customers better than most traditional businesses could ever dream of. This poses challenges as well, however, with many people concerned about targeted advertising, privacy and data breaches. (The furore around Cambridge Analytica, a consulting firm that used a Facebook app to harvest the profile data of millions of users, which it then incorporated into aggressive political advertising campaigns, is a good case in point.)

The growing use of personal data has also led to many questions about the ownership of those data (complicated by the distinction between data that individuals provide and data gathered by observing their behaviour) and about who has the final say over how they are used for training AI systems.
