
Politics & Governance

Inferential Biometrics: Towards a Governance Framework


Paper | 25th January 2022

In January 2021, police in the Indian city of Lucknow announced a radical new plan to curb the harassment of women in the city. They would install five new facial analysis cameras that monitor women on the street for signs of distress; if detected, these cameras would send a notification to the local police force, who could intervene accordingly. Perhaps unsurprisingly, this plan was widely condemned as it proposed introducing a highly invasive technology that targeted the victims of harassment rather than directly focusing on perpetrators or addressing structural problems. This initiative is just one recent example of a growing experimentation with inferential biometric technologies:[1] systems that analyse an individual's physiological data to deduce their state (e.g. emotion or intent) or characteristics (e.g. age or propensity for disease).[2]

Inferential biometric technologies are not a new phenomenon. Polygraph machines (commonly referred to as lie detectors) have been around since the early part of the 20th century and systems that make inferences from genomic data have been developed over the past few decades. However, rapid improvements in artificial intelligence (AI) and a democratisation of its use have led to a new generation of inferential biometric technologies, which seek to make inferences about individuals in everyday contexts. These systems are being used to assess job applications, check the age of customers at supermarkets, target advertising and improve patient care. This new form of inferential biometric technologies, which will be the focus of this report, creates opportunities and risks for society, and raises distinct governance challenges.

Because of cases like Lucknow, existing literature on this new form of inferential biometrics has generally been negative and has focused on the potential harms or infringements to fundamental rights by these systems. Indeed, few reports have meaningfully engaged with how to best govern inferential biometrics to minimise potential harms and promote benefits, beyond calling for an outright ban on emotion recognition as a solution. While such an approach might be successful in helping to protect fundamental rights, it would likely also curtail innovation that could be beneficial for both individuals and society at large.

This report moves beyond narratives calling for a blanket ban on the technologies and offers practical, policy-focused recommendations that can be adopted by those seeking to effectively govern inferential biometric technologies. It highlights the potential benefits that these technologies could bring about, while also acknowledging and outlining the well-documented associated risks. The analysis and recommendations in this report are jurisdiction-agnostic and speak to policymakers globally. It is hoped that through providing clear policy recommendations, this report will help guide policymakers in enacting governance mechanisms that protect fundamental rights without unnecessarily curbing beneficial innovation.

To do this, the remainder of this report will be structured as follows: Section 1 will discuss the present-day landscape of inferential biometric technologies, outlining what they are and how they are currently being used. Sections 2 and 3 will consider the opportunities and the risks posed by biometric technologies. Section 4 will outline some of the governance measures that already cover inferential biometrics in different jurisdictions globally. Section 5 will conclude the report by offering recommendations on governance measures for these technologies.


Chapter 1

1. Present-Day Landscape

Before considering the opportunities and risks brought about by "inferential biometrics", it is important to fully define what is meant by the term. Biometrics is an ambiguous term, with traditional definitions focusing on statistical and mathematical analyses of biological features; more recent legal understandings adopt a narrower view. The European Union (EU)'s General Data Protection Regulation (GDPR), for instance, defines biometric data as "personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person" (emphasis added on the requirement to "allow or confirm the unique identification").

The narrow, legal definition of biometrics has been adopted in many jurisdictions and has to date been largely adequate, as governance has mainly concerned biometric identification (e.g. facial recognition technology in surveillance cameras) and verification (e.g. fingerprint recognition for smartphones). However, this definition is inadequate for governing inferential biometric technologies that do not necessarily require the unique identification of a person. Take polygraph machines: these systems analyse an individual's physiological features, such as blood pressure, to make inferences about whether they are telling the truth; however, this could not be used to identify an individual. Accordingly, for this report, the term "biometrics" is understood in the broader sense, which focuses on the analysis of physiological data. Thus, behavioural data that is not directly associated with the body is not within scope,[3] nor is there a requirement for the unique identification of an individual.

Figure 1 – Biometric Typology

Biometric identification
Definition: "One-to-many" identification systems compare an individual's biometric profile against an existing database to identify whether there is a match among the stored profiles.
Example: Retrospective or live facial recognition used to identify an individual from a watchlist.

Biometric verification
Definition: "One-to-one" verification systems authenticate an individual's biometric data by checking whether it matches their information that is already stored on a system.
Example: Voice verification used for telephone banking ID authentication.

Biometric inferences
Definition: Inferential and classificatory systems analyse an individual's physiological data to deduce their state (e.g. emotion, intent) or characteristics (e.g. age, propensity for disease).
Example: Facial analysis software used to determine a customer's age when buying age-restricted products.

Source: Huw Roberts

Biometric identification, verification and inferences are not necessarily mutually exclusive categories. In some cases, an inference might be made about an individual without identifying them, such as when determining an individual's age at a supermarket checkout using a facial classification system. In others, it may be beneficial to identify an individual and then make inferences about them, such as when detecting a specific pupil's level of attention.

Biometric identification and verification, without the use of inference, have become commonplace. Fingerprint and facial recognition systems are regularly used to unlock phones and other electronic devices. Likewise, live facial recognition cameras used for surveillance are regularly making headlines around the world. Inferential biometrics, used with or without identification, are less mature. Nonetheless, the number of applications being researched, developed and deployed has been increasing in recent years. Alongside the examples provided above, facial analysis systems have been developed to help assess job applicants in video interviews, to determine levels of consumer engagement with different products at supermarkets, and to assess an individual’s level of suspicion for crime prevention and counterterrorism purposes.

HireVue recruitment

HireVue is a digital recruitment company that provides services for many large multinational firms. One of its products analyses the facial expressions and speech of candidates during a video interview to discern certain characteristics. Based on individuals’ behaviour, speech and intonation, they are assigned qualities and traits. These assigned features are then used in the decision-making process for either hiring a candidate or progressing them to the next round of recruitment. In January 2021, HireVue halted the use of facial analysis as part of this product due to concerns about reputational damage.

This is not to say that inferential biometrics are solely being developed based on facial movements. Systems that analyse heart rate, skin conductivity and other modalities are being developed to infer an individual's emotional state while driving. Emotion recognition using voice is also being employed in call centres to identify the moods of customers on the phone and to provide information to staff on adjusting how they handle the conversation in real time. Finally, such systems have been developed to detect pain in patients who are unable to express it (e.g. because of sedation) or who are not being frequently monitored. This is not an exhaustive list; many new technologies are being developed based on single or multimodal approaches.


Chapter 2

2. Opportunities

Inferential biometric technologies, such as those systems outlined above, have frequently been described as scientifically flawed or as a techno-solutionist approach to problems that require a more nuanced response. These criticisms are valid and raise the question of whether any benefits can be derived from inferential biometric technologies of this kind. Indeed, the most frequent response to these systems has been to call for an outright ban. However, if governed properly, some inferential biometric technologies could be applied in beneficial ways for individuals and society. These benefits currently fall into four broad categories.

Convenience and Efficiency

A potential benefit of inferential biometrics is the increased convenience they could bring about to everyday life for individuals, and improved efficiency for businesses. Verification biometrics on smartphones and laptops have allowed people to access their devices more quickly and securely; facial recognition systems at airports have reduced the time it takes to wait at border control; and voice recognition has been developed for call centres to authenticate callers quickly by cross-referencing their voice with the number they are calling from. Inferential biometrics hold the same promise.

Take the use of age classification systems at shop checkouts. Some of the UK's largest supermarket chains are currently trialling these technologies for the purchase of restricted goods, such as alcohol. Yoti, one of the companies offering this technology in the UK, claims that its age estimates are accurate to within an average of 2.19 years across genders and skin tones. Allowing individuals to have their age checked by an inferential system at a checkout would allow for an improved consumer experience and for staff time to be better utilised. It would also act as an equaliser for the 24 per cent of UK adults who are over the age of 18 but do not have photographic ID. To ensure this convenience is achieved without associated risks (e.g. the underage purchase of restricted goods, and excessive data collection), complementary measures would have to be introduced to ensure adequate protections are in place (e.g. human checks for borderline cases, as sketched below).
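To make the borderline-case safeguard concrete, the sketch below illustrates one way a "challenge threshold" could be applied to an automated age estimate. The margin, labels and decision logic are illustrative assumptions rather than any vendor's actual implementation.

```python
# Minimal sketch of a "challenge threshold" policy for automated age checks at
# a checkout. The margin and labels are illustrative assumptions, not a vendor
# specification; a real deployment would also need consent capture, data
# minimisation and audit logging.

LEGAL_AGE = 18
ERROR_MARGIN_YEARS = 2.19  # illustrative buffer, based on the average error cited above

def checkout_decision(estimated_age: float) -> str:
    """Decide whether an estimated age is sufficient to approve a restricted sale."""
    if estimated_age >= LEGAL_AGE + ERROR_MARGIN_YEARS:
        return "approve"        # comfortably above the legal age, even allowing for error
    return "refer_to_staff"     # borderline or below: fall back to a human ID check

if __name__ == "__main__":
    for age in (34.2, 19.5, 16.0):
        print(f"estimated {age:.1f} -> {checkout_decision(age)}")
```

The design choice in this sketch is that the system only ever approves; it never refuses a sale outright, so any uncertainty defaults back to the existing human process.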

Augmented Support

Inferential biometrics could also be used to augment humans and provide a support role in a number of different sectors. The health and social care sector seems particularly noteworthy in this regard. Many health systems around the world are strained in terms of their doctor-to-patient ratios, which can lead to poorer care and health outcomes. As mentioned above, inferential biometric systems have been developed to try to identify pain experienced by patients who may otherwise struggle to communicate their needs to doctors (e.g. those who are sedated, or babies). Facial analysis systems that seek to identify signs of pain, or ECG machines that detect signs of stress, may prove to be fruitful avenues for augmenting care and providing improved outcomes. This is not to say that all uses will necessarily prove fruitful. Some systems are being developed in an attempt to understand more complex patient needs, such as whether an individual has mental health issues, from their facial expressions. As will be discussed at greater length in section 3, the ability of inferential biometrics to support complex, high-risk decisions is questionable.

Another area often discussed for augmentation is learning and education. Existing literature shows that positive and negative emotions are linked with higher achievement levels and decreased motivation respectively. Emotion recognition has thus been proposed as a way to improve educational outcomes. In particular, it has been suggested that inferential biometrics hold the potential for improving e-learning platforms to ensure that the educational experience is more engaging and effective. In light of the growth of e-learning, which has been particularly rapid since the outbreak of Covid-19, these systems could offer notable benefits. However, great care would need to be taken to ensure informed consent is present to avoid harmful privacy invasions, such as those seen from the recent uptake of remote proctoring software that surveils test takers.

Safety and Security

Improved safety and security is another area of potential benefit, with driver safety particularly noteworthy in this regard. The EU adopted the revised General Safety Regulation in 2019, which specifies that new vehicles should include drowsiness and distraction detection. This measure, which will come into effect in 2022, is expected to save more than 25,000 lives and help prevent at least 140,000 severe injuries by 2038. This feature of new vehicles will almost certainly be reliant on inferential biometric technology.

Age classification systems could also offer safety and security benefits, particularly for ensuring that children are protected online. Many spaces on the internet are age-restricted, such as adult content, gambling, dating sites and social media. Currently, few meaningful checks are in place to ensure that children cannot access these services. Anonymous age classification systems may provide a workable solution to this issue by estimating users' ages without identifying them. However, this would also raise serious ethical questions over whether individuals should have to show their faces to access these services online, particularly when it comes to sensitive content.

Perhaps the most contentious safety “benefit” could be achieved through enhancing surveillance capabilities in public and private spaces. Lucknow police are not alone in considering how inferential biometric technologies can enhance current surveillance capabilities; a UK-based company has developed an emotion-recognition system that seeks to infer risk levels within a crowd, and such systems have already been deployed in Xinjiang, China. Those developing and deploying these systems are doing so in the hope of achieving enhanced security, yet it is difficult to see how the benefits of such systems could outweigh the significant ethical risks, something that will be returned to in section 3.

Empathetic AI

The opportunities outlined above are all fairly immediate and tangible. When considering the impact of inferential biometrics in the longer term, it is important to consider the idea of interactive empathetic AI. A significant ethical question going forward is whether society wants AI systems that are designed to interact with humans on a sophisticated emotional level. What is being referred to here is not today's adaptive customer-service chatbots or virtual assistants like Siri; rather, highly developed carebots, for instance, would be in scope. While it is beyond the scope of this report to consider in detail whether such a future is desirable,[4] inferential biometric systems would likely be necessary for achieving this goal.

Another potential benefit from developing empathetic AI is that it could mitigate potential AI alignment risks. If AI were trained to have empathetic tendencies, this may help to ensure alignment with societal norms and values. Creating systems that can more accurately model human emotions and align with values may stop them from taking logical shortcuts and finding unintentionally harmful solutions. In the near term, this could help AI perform more effectively in narrow tasks where issues have previously emerged (e.g. language models harmfully stereotyping) and even alleviate some concerns about AI alignment posing an existential risk. Inferential biometrics alone will be incapable of achieving this lofty aim, but could form a key building block.


Chapter 3

3. Risks

While these systems may offer benefits, they also pose significant risks if governed incorrectly. The lack of specific governance measures addressing inferential biometrics means that many of these risks are currently materialising in jurisdictions around the world.

Bias and Discrimination

One risk of inferential biometric systems is that decision-making could be harmfully biased or discriminatory if systems are designed poorly. Taking the example of emotion recognition, academic work has shown that emotions differ by culture, suggesting that systems may be more likely to misidentify the emotions of individuals from cultures other than the one in which the technology was developed. Likewise, potential ethnic and gender biases have been shown to emerge in the implementation of inferential biometric systems if they are not trained on sufficiently diverse data. This could lead to unfair outcomes for multiple groups in society when inferential biometrics are used to make decisions about them (e.g. whether surveillance systems flag an individual as looking threatening). In fact, the risk of cultural, ethnic and gender-based biases is what has led many "Big Tech" companies to reconsider developing these systems in the short term.

The above examples highlight the risk of discrimination based on protected characteristics, but there is the potential for unfair decisions to be made based on other physiological features that inferential biometric systems analyse. Inferences about an individual will often be made by placing them into ad hoc groups based on specific physiological features that the system is analysing (e.g. face movements). These might not be traditionally protected characteristics, such as race and gender; they may not even legally be defined as personal data. However, decisions could still be considered unfair if this algorithmically defined group receives a disproportionately worse outcome than another algorithmic group. The lack of transparency that characterises many advanced AI systems means these biases risk not being scrutinised or going unnoticed.
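As an illustration of how such group-level disparities could at least be measured, the sketch below compares adverse-outcome rates across algorithmically defined clusters rather than protected characteristics. The cluster labels and data are invented for the example; this is a minimal disparity check, not a complete fairness audit.

```python
# Minimal sketch of checking outcome disparity across ad hoc, algorithmically
# defined groups (e.g. clusters of facial-movement patterns) rather than
# protected characteristics. The group labels and records are illustrative.
from collections import defaultdict

def adverse_outcome_rates(records):
    """records: iterable of (group_id, adverse_outcome: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [adverse count, total count]
    for group, adverse in records:
        counts[group][0] += int(adverse)
        counts[group][1] += 1
    return {group: adverse / total for group, (adverse, total) in counts.items()}

def disparity_ratio(rates):
    """Ratio of the worst-treated group's adverse-outcome rate to the best-treated group's."""
    worst, best = max(rates.values()), min(rates.values())
    return worst / best if best > 0 else float("inf")

# Illustrative usage: "cluster_b" is flagged four times as often as "cluster_a".
sample = ([("cluster_a", False)] * 90 + [("cluster_a", True)] * 10
          + [("cluster_b", False)] * 60 + [("cluster_b", True)] * 40)
rates = adverse_outcome_rates(sample)
print(rates)             # {'cluster_a': 0.1, 'cluster_b': 0.4}
print(disparity_ratio(rates))  # 4.0
```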

Privacy

Inferential biometrics could also pose a significant risk to privacy. Aside from the common privacy risks associated with biometric data collection, such as excessive collection without consent and data insecurity, the type of inference these systems make could also pose a threat to privacy. Many inferential biometrics attempt to understand the inner state of an individual (e.g. their emotion or intent), which could be considered innately private to a person. Surveilling involuntary physiological reflexes and making inferences from this may lead individuals to monitor their own behaviour and consciously adapt their facial movements or tone in order to game these systems. This self-censorship could lead to the erosion of many other fundamental rights that are tied to privacy, such as freedom of thought and expression. Given the privacy risks posed by inferential biometrics, it is perhaps unsurprising that a survey of UK citizens found that 50 per cent of the sample stated that they were “not OK” with the use of emotion recognition systems.

Arbitrary and Unscientific Decision-Making

Currently, many inferential biometric systems are based on simplistic or unscientific foundations. Emotion recognition systems, for instance, are most commonly based on “basic emotion theory”, which considers there to be a small number of basic human emotions. This theory has been thoroughly discredited by academics who emphasise that the reality of human emotion is far more complex; however, because of the relative ease of coding systems that recognise facial actions associated with these emotions, it has become popular among emotion recognition providers. Another high-profile example of inferential biometric technologies being grounded in questionable science is a Stanford research paper that claimed to be able to infer an individual’s sexual orientation from their facial data, with an 81 per cent accuracy rate for men and 74 per cent for women. Leaving ethical questions about developing such a system aside, the claim that differences are based on characteristics of the facial structure is flawed. Instead, it is likely that the system based its predictions on stereotypical looks of heterosexual and homosexual individuals.

Developing and deploying systems based on unscientific foundations is problematic as it leads to decisions being made about individuals on a largely arbitrary basis or based on spurious correlations. This is clearly ethically problematic when systems are used in high-risk situations, such as for law enforcement or in recruitment. The use of sexual orientation recognition systems provides a clear example of how problematic these systems could be, with the potential for them to lead to an individual being (incorrectly) outed.

Techno-Solutionism

A final risk is that inferential biometrics are developed and deployed as a silver bullet. Investing in inferential biometrics rather than trying to address the underlying causes of problems can lead to funding being diverted from where it is most useful. In fact, if the appropriateness of deploying inferential biometrics is not adequately scrutinised, then these systems could end up doing more harm than good, given the risks outlined above. Consider the Lucknow example: rather than the perpetrators of harassment being targeted, women find themselves subjected to increased surveillance. This is ethically unacceptable. As will be discussed in section 5 of this paper, this risk, as well as those outlined above, can be mitigated by the introduction of an effective governance framework.


Chapter 4

4. Current Governance Measures

To date, efforts to explicitly govern inferential biometrics have been limited, although the EU and UK are currently consulting on legislative changes that address these systems. The European Commission’s draft Artificial Intelligence Act, published in April 2021, is the only primary legislative effort to directly address emotion recognition (Art 3, 34) and what it terms “biometric categorisation”: “an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data” (Art 3, 35). If the current proposal is passed, companies deploying emotion recognition and biometric categorisation systems would be obliged to inform the affected individuals when they are exposed to these systems, with the exception of law enforcement uses for the detection, prevention and investigation of criminal offences (Art 52, 2).

The EU’s Draft AI Act

In April 2021, the EU Commission released a draft Artificial Intelligence Act that sought to develop harmonised rules for governing AI across the EU. This document proposed a risk-based approach for governing AI, with those systems deemed to be of “unacceptable risk” banned, restrictions placed on those deemed “high risk” and codes of conduct encouraged for minimal-risk systems. Significant fines are proposed for those who do not follow this legislation. Before being passed into law, the Act will need to be approved by the EU Parliament and the Council of the EU.

The UK’s recent Data Consultation publication, which seeks to solicit opinions on reforming the UK’s data protection regime, also addresses the issue of inferential biometrics. Specifically, the consultation considers the issue of group profiling and how sensitive information can be inferred about individuals without necessarily identifying them, including through their biometric data (para 102). The consultation is sceptical about introducing a specific protected category of “inferred data”, but does point to the potential benefits of enhanced explainability and accountability requirements.

This is not to say that individuals in the EU and UK are afforded no protection from inferential biometric technologies, as a number of regulatory provisions indirectly govern the use of these technologies. The GDPR (and UK GDPR) specifies that biometric data are "special category data", meaning extra safeguards need to be met for the collection and processing of this information. This provides foundational protections to individuals in terms of ensuring they have control over how their data are collected and used. In addition, many provisions within EU and UK law prohibit discriminatory practices, including the European Convention on Human Rights and the UK's Human Rights Act (1998) and Equality Act (2010).

The same is true in many other jurisdictions, where inferential biometrics are indirectly governed by legislative measures. In particular, there has been a flurry of data protection measures being introduced globally over the past five years, including in China, some US states, much of South America, and at least 27 African countries. As in the EU and UK, these measures provide a foundation for ensuring that protections are in place for the collection and processing of personal data. Some degree of equalities law is also commonplace globally. This indicates that in many jurisdictions, there are baseline governance measures that guide the development and use of inferential biometrics.

Limitations

While these measures provide a good starting point for governing inferential biometric technologies, with the explicit consideration by the EU and UK particularly noteworthy, they are currently insufficient for ensuring that the potential benefits outlined above are achieved and the harms alleviated. Some commentators have questioned whether the transparency requirements outlined in the EU's draft AI Act would sufficiently mitigate the harms associated with inferential biometrics. Consider the Lucknow police force's distress recognition system: telling women that they are being surveilled by these systems would be difficult in practice and would also fail to curtail their invasiveness.[5] Indeed, it is questionable whether the transparency requirements would even apply to all inferential biometric systems. The definition of biometrics followed in the GDPR, and by extension the EU's draft AI Act, relates to those physiological or behavioural traits that can uniquely identify an individual.[6] Accordingly, "soft biometric data", which can be used to make inferences about an individual but does not identify them, may not qualify as biometric data.[7] These perceived flaws in the AI Act led the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) to state that the measures proposed in the AI Act are insufficient and that a ban on biometric categorisation and emotion recognition would be desirable.

Considering the broader governance landscape reveals that, while data protection and equalities law help to alleviate some of the potential harms of inferential biometrics, gaps are still present. The GDPR generally[8] requires individual consent when personal or special category data are being processed; however, as mentioned above, "soft biometric data" may not be afforded the same protections. More pressingly, the opacity of inferential biometric systems, in both their design and use, means that applying equalities law may be difficult. There is a notable lack of specific transparency or auditing requirements for the underlying algorithms, which limits the scope for detecting algorithmic bias in inferential biometric systems. This means that many of the biases that could emerge in algorithmic systems may go unnoticed or unchallenged, undermining the effective enforcement of existing regulatory measures.

A final limitation of current regulatory regimes, touched on above, is that there are seemingly few protections for algorithmic groups that are not defined by pre-existing protected characteristics. Algorithmic systems often make inferences based on ad hoc group-level characteristics (e.g. based on groupings of certain types of facial movements) and there is a risk of unfair decisions being made against individuals based on their membership of these groupings. This is a particularly contentious point, given the difficulty of defining what is fair among ephemerally created groups. Nonetheless, various academics and commentators have argued that additional safeguards are required to protect against such group-level discrimination that does not align directly with protected characteristics and equality laws.

 


Chapter 5

5. Recommendations

The development and use of new inferential biometric technologies is a growing phenomenon, which will likely continue to increase in the coming years. These technologies have the potential to benefit society through improving efficiencies and safety, augmenting human capabilities and, in the longer term, assisting in the development of more empathetic AI. At present, these benefits are not being fully realised. Indeed, inadequate governance mechanisms globally have led to the development and use of a number of unscientific systems that infringe individual privacy and produce harmful biases. The following jurisdiction-agnostic governance recommendations seek to provide guidance to policymakers who are interested in curtailing these harms and ensuring that inferential biometric systems are deployed in a manner that is beneficial for individuals and society more widely.

Define a Risk-Based Framework for Inferential Biometrics

Not all inferential biometric systems carry the same level of risk. With the right data collection and processing limitations, developments such as affect-aware gaming can enhance users' gaming experience, while posing little risk to users even if the systems are not completely accurate. Proposals to ban emotion recognition systems appear disproportionate when applied to developments such as these. In other cases, stronger governance measures are needed. Remote biometric surveillance that seeks to infer intent through facial analysis software would fall into this category, due to the threat it poses to fundamental rights. This indicates that a nuanced approach to governance is needed.

Introducing a risk-based framework would provide a strong foundation for governing inferential biometrics. The EU’s draft AI Act provides a good example of the type of framework that could be introduced. Systems deemed to be unacceptable due to the threat they pose to fundamental rights could be banned (e.g. those inferring sexual orientation); those considered to be high risk but still desirable could have comprehensive restrictions placed on their development and use; and low-risk systems could be subject to guiding codes of conduct. In developing this framework, public engagement would be necessary to ensure that governance measures in place reflect the interests of society.

Develop Robust, Scientific and Auditable Models

For all inferential biometric systems, and those used in high-risk cases in particular, ensuring that scientifically sound models are used is essential. By their very nature, inferences are not entirely factual, but more complex and robust models would be likely to improve accuracy. For emotion recognition technology, this includes going beyond basic emotion theory to ensure that models align with the latest scientific thinking and are culturally aware. For inferential biometrics more generally, the use of multiple modalities wherever possible could help improve the models.[9]
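As one hedged illustration of what a multimodal approach can look like in practice, the sketch below performs a simple weighted "late fusion" of per-modality probability estimates. The modalities, labels and weights are assumptions for illustration, not a proposed standard or any specific vendor's method.

```python
# Illustrative sketch of "late fusion" across modalities: each modality model
# produces its own probability distribution over states, and the combined
# estimate is a weighted average. Modalities, labels and weights are assumed.

def fuse_predictions(per_modality: dict[str, dict[str, float]],
                     weights: dict[str, float]) -> dict[str, float]:
    """Weighted average of per-modality probability distributions over the same labels."""
    total_weight = sum(weights[m] for m in per_modality)
    labels = next(iter(per_modality.values())).keys()
    return {
        label: sum(weights[m] * per_modality[m][label] for m in per_modality) / total_weight
        for label in labels
    }

# Usage: the face and voice models disagree; fusion tempers the single-modality claim.
fused = fuse_predictions(
    {"face":  {"calm": 0.2, "distressed": 0.8},
     "voice": {"calm": 0.7, "distressed": 0.3}},
    weights={"face": 0.5, "voice": 0.5},
)
print(fused)  # {'calm': 0.45, 'distressed': 0.55}
```

Even this simple averaging shows how a second modality can temper an overconfident single-modality inference, which is part of the rationale for requiring multiple modalities in higher-risk uses.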

Practically, these aims could be achieved through mandating minimum scientific standards for models, including guidance on the types of data that should be used. Guidance should include requirements such as using a sufficient number of modalities when making higher-risk inferences; however, requiring increased amounts of data collection could prove a double-edged sword, due to the potential privacy harms that this would entail. Accordingly, another important element of any guidance is an emphasis on privacy-enhancing technology to ensure that sufficient protections are in place for individuals.

Alongside these standards for models, it is also important to ensure that inferential biometrics can be properly scrutinised. A failure to do so would mean that regulatory requirements could be avoided or poorly implemented by organisations. Therefore, algorithmic auditing requirements are needed to check that standards are being met and that flawed models or harmful biases are not going unchecked. Given regulator capacity and the number of organisations that are considering developing or using these systems, it is unlikely that this auditing could be done by government alone. At the same time, the credibility of internal company audits could be called into question. Accordingly, developing regulatory markets for AI that rely on an industry of third-party auditors could be a good way of ensuring that these systems are properly scrutinised. While the market for auditing inferential biometric systems alone is probably too small, algorithmic audits are a promising direction for the wider AI ecosystem.

Opt-Out Options for Individuals

Enhanced transparency measures for inferential biometric technologies have been considered by the EU and the UK. This is a good starting point, yet on its own it is inadequate. Telling an individual that they are being subjected to an inferential biometric system (e.g. having their attention analysed when shopping or having their mood monitored at work) does not provide them with a meaningful choice over whether they wish to use the service. Likewise, depending on the level of explanation provided to an individual, this may do little to alleviate concerns or empower a response. Indeed, it may be that only technically adept individuals, who can understand these explanations and in turn game the systems, stand to benefit.

Providing individuals with simple and engaging information on the systems they may be subjected to and, importantly, offering them some recourse is a crucial step for ensuring that the deployment of inferential biometrics is ethical. More often than not, this could be achieved through a chance to opt out of being subjected to inferential biometrics; however, in certain cases this may not be appropriate. For instance, when biometric inferences are in the public interest, exclusions may apply.

Given the speed at which inferential biometrics are being introduced across sectors, and the potential promise and peril these systems pose, policymakers should act with urgency to update current governance measures. The recommendations offered in this report would act as a strong starting point for ensuring that inferential biometrics technologies are properly governed in the future.


Chapter 6

Acknowledgements

I would like to thank Martin Carkett and Benedict Macon-Cooney for their mentorship on this piece, as well as Sacha Babuta and Chris Thomas for their extremely helpful comments on an earlier draft.


Footnotes

  1. The terms "inferential biometric technologies" and "inferential biometrics" will be used synonymously in this report. Other reports and draft legislation have referred to this collection of technologies as Physiognomic Artificial Intelligence and Categorical Biometric Technologies respectively.

  2. This definition may prove contentious for two reasons: firstly, it does not include the requirement of identification through these features, which many legal and traditional definitions do. Secondly, it excludes behavioural biometrics, which are common in many definitions. The reasons for this will be discussed later in this paper.

  3. For instance, gait analysis would be considered physiological biometric data and thus within scope, while internet browsing would not.

  4. These questions have been discussed extensively elsewhere.

  5. It should be stressed that the norm in most jurisdictions allows for individuals to be informed through labels such as "CCTV in progress".

  6. The EU Council's response to the draft AI Act proposes dropping the identification requirement in the definition. While this would be beneficial for encompassing inferential biometrics, it would also raise challenges given that this definition contrasts with that adopted in the GDPR.

  7. Whether these data are afforded protections as personal data under the GDPR will depend on the extent to which these soft biometric data can, when combined with other data, lead an individual to be identifiable. Further guidance would appear necessary to clarify the protections provided to soft biometric data in circumstances where such data cannot be used to identify a natural person (either directly or indirectly).

  8. In some cases, consent may not be the most appropriate lawful basis for processing. For instance, legitimate interest may be relied on for the use of systems which analyse biometric data remotely, such as live facial/emotion recognition.

  9. Most applications currently rely on a single modality (e.g. the face).
