Web Summit is an annual gathering for people from all around the world with an interest in the technological revolution. Mastering this revolution – both mitigating its risks and accessing its benefits – is critical for policymakers, and requires a structured dialogue between the people changing the world with new technologies and those seeking to respond with policy and regulation.
The conversations at Web Summit were many and varied. This post shares some reflections on three topics that we focused on in Lisbon this year: the connection between politics and tech, and specific policy challenges around tech in law enforcement and in healthcare.
A dialogue between politics and tech
A better and more structured dialogue between technology leaders and policymakers is an essential precondition if the world is to make the most of the benefits that new technologies offer. It will also be critical for figuring out the right answers to questions like regulation of tech companies and how government should be reconfigured for the modern era.
These themes and more were discussed by Tony Blair and Congressman Ro Khanna in their session on tech, politics and equality, and explored further in a wide-ranging interview.
Technology and law enforcement
For our first policy roundtable at Web Summit we focused on the challenges and opportunities for technology in law enforcement. The discussion brought together a range of perspectives from different sectors and countries to inform an upcoming report on how law enforcement needs to evolve to realise the potential of technology to help reduce crime.
New technologies present opportunities to reduce crime considerably. Algorithms can help law enforcement better manage and gain insights from their data: from assessing the risk posed by offenders and to victims, to identifying the locations where resources are most needed, to using facial recognition technology to find matches on watchlists. Realising this potential requires more than practical and technical capabilities, so the roundtable explored the key ethical concerns and the framework needed to secure public trust.
The balance between privacy and public safety came to the fore. For law enforcement, access to data may be vital to catching criminals or establishing the truth in a case. However, participants identified some significant trade-offs. For example, using facial recognition technology to scan for matches on watchlists changes the nature of surveillance from targeted intervention to mass, blanket action. Law enforcement access to data held on victims’ personal devices and social media accounts can be highly invasive, and risks turning victims into the ones under investigation and on trial.
Participants also raised concerns that data can reinforce the bias and discrimination already found in society. Law enforcement may find themselves using incomplete, inaccurate or illegitimate data: they only have access to data for crimes that are reported, and some crimes, such as domestic abuse, are regularly underreported. Poor data quality can lead to inaccurate predictions and dead ends. In some cases it can also lead to discrimination against minority groups, reinforcing human bias. For example, participants discussed how some predictive policing tools repeatedly sent officers to neighbourhoods with a high proportion of people from racial minorities, regardless of the true crime rate in those areas. Some participants also noted that access to data does not guarantee further investigation, particularly where resources are limited, and that decisions about how those resources are used can themselves be subjective or susceptible to prejudice.
There was agreement that a robust legal framework and a system of oversight were needed to secure public trust in the use of technology by law enforcement, and that ethical codes of conduct alone are unlikely to be enough. Participants felt that laws should aim to minimise the impact of discrimination and create a common experience of law enforcement investigation nationally. It was argued that before technology is deployed it should be thoroughly and transparently assessed, and should meet minimum thresholds of accuracy to ensure that it works.
Finally, education was viewed as essential. Popular culture skews the public debate and feeds public concern; law enforcement has an obligation not only to be more transparent but also to educate the public about its use of technology. Education matters on both sides, with many participants feeling that those in law enforcement itself would benefit from help to avoid technology pitfalls and to mitigate discrimination and privacy intrusions.
Technology and healthcare
Our second policy roundtable at Web Summit focused on the challenges and opportunities for technology in healthcare, to inform an upcoming report on shifting to a more personalised and preventative model for global health.
Participants discussed how technology can radically transform healthcare around the world. At the heart of this is the rapid progress in AI in recent years, which has produced algorithms that can interpret medical images, from skin lesions to retinal scans. AI has also been applied to predict clinical outcomes from electronic health records, to process massive datasets from genome sequencing and to aid drug discovery. The potential to revolutionise healthcare delivery is clear: the model can shift towards prevention and a far more personalised and precise service for citizens. However, for these benefits to be realised, policymakers need to address questions at every stage of the health data process. Data must be fairly collected, representative of gender and ethnicity, properly structured and labelled, and used appropriately.
Those attending included many working in healthcare, and the key issue raised was the need for policy to facilitate data sharing. There was a view that health records today are often fragmented, whereas well-designed digital records give medical professionals a full view of an individual’s history and improve the efficiency of treatment. Mapped against broader patient data, they can help doctors improve diagnoses, prescribe medicines more effectively and minimise variation in service quality. There are several ways that policy could facilitate these applications, for example by allowing clinicians to review anonymised records of similar cases.
One solution put forward to tackle silos was an API that would allow health data sets to be integrated across countries and institutions. But fundamental questions on data governance and privacy were raised, as was the need for incentives for countries and non-governmental organisations to share data.
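By way of illustration only (this sketch is not from the roundtable itself), one way such an API could reconcile data sharing with the privacy concerns raised is to pseudonymise records before they leave their silo: direct identifiers are dropped and the patient ID is replaced with a salted token, so similar cases can be compared across systems without revealing who the patients are. The field names and salting scheme below are hypothetical.

```python
import hashlib

# Hypothetical direct identifiers that must never leave the local system.
DIRECT_IDENTIFIERS = {"name", "address", "date_of_birth"}

def pseudonymise(record: dict, salt: str) -> dict:
    """Return a copy of a health record safe to expose through a shared API.

    Drops direct identifiers and replaces the patient ID with a salted
    hash, so the same patient maps to the same token within one deployment
    but cannot be trivially re-identified from the shared data alone.
    """
    shared = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256((salt + str(record["patient_id"])).encode()).hexdigest()[:16]
    shared["patient_id"] = token
    return shared

record = {
    "patient_id": "NHS-123",
    "name": "Jane Doe",
    "date_of_birth": "1970-01-01",
    "diagnosis": "type 2 diabetes",
    "hba1c": 7.9,
}
shared = pseudonymise(record, salt="per-deployment-secret")
```

In practice a real scheme would go much further (key management, k-anonymity for quasi-identifiers such as age and postcode), which is exactly where the governance questions participants raised come in.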
The need to secure public consent was another key theme. There was a clear call for dialogue with the public, led in particular by people they would trust on matters of their health. The structures put in place would also need to be secure: a trusted system for data management would be necessary to ensure that data is not abused, sold or misused. Access logs and punishment for improper access were suggested as ways of ensuring transparency and accountability. Finally, there was a view that early efforts should be targeted, building participation by working with those most at risk, or with rare diseases where medical breakthroughs are most needed.
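The access logs participants suggested could take many forms; a minimal sketch of one, assuming a simple hash-chained append-only log in which every read records who accessed which record and why, is below. Chaining each entry to the previous one makes deletion or tampering detectable, supporting the accountability the roundtable called for. All field names here are illustrative.

```python
import hashlib
import json
import time

def append_access(log: list, user: str, patient_id: str, reason: str) -> dict:
    """Append a tamper-evident entry recording one access to a patient record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "user": user,
        "patient": patient_id,
        "reason": reason,
        "time": time.time(),
        "prev": prev_hash,
    }
    # Hash the previous entry's hash together with this entry's core fields,
    # so removing or editing any earlier entry breaks the chain.
    payload = json.dumps({k: entry[k] for k in ("user", "patient", "reason")})
    entry["hash"] = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append(entry)
    return entry

log = []
append_access(log, "dr_smith", "pt-001", "treatment review")
append_access(log, "analyst_7", "pt-001", "research query")
```

An auditor can walk the chain and recompute each hash; any break points to the entry that was altered or removed.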