5.4 Use of biometrics
TABLE OF CONTENTS
- 5.4.1 What are we talking about?
- 5.4.2 What is at stake?
- 5.4.3 How NGOs can tackle the topic
- 5.4.4 Key resources
Keep in mind
Biometric data is sensitive personal data that requires real consent, and exceptional security.
Biometrics are generally “imposed” on NGOs for reasons of efficiency, integrity and fraud reduction. However, these systems raise major ethical questions, which NGOs must confront to ensure that their programmes respect the “do no harm” principle. They cannot therefore turn a blind eye to the subject, despite its technological complexity.
Seeking to understand and dissect the associated “black box” is necessary in order to engage in discussions that will collectively improve these systems.
When using biometrics in cash transfers, it is crucial that NGOs be extremely vigilant in setting up and respecting data sharing agreements with service providers.
The use of biometrics has become increasingly prevalent in the humanitarian sector over the past decade, notably to identify people, give them access to aid and services, and for cash transfers.
Driven by donors, UN agencies and governments, as indicated by The Engine Room in a landmark study commissioned by Oxfam in 2018, its benefits are mainly “to identify the people targeted by assistance (identifiability and traceability), reduce fraud and duplication (accuracy and integrity), and simplify registration and identification (simplicity and efficacy)”. However, many questions about the use of biometrics remain to be addressed by NGOs.
Biometric data are “any automatically measurable, robust and distinctive physical characteristic or personal trait that can be used to identify an individual or verify the claimed identity of an individual, such as (but not limited to) iris scanning, facial recognition and fingerprint scanning” (source: Oxfam’s biometric policy); such data “have, for the most part, the particularity of being unique and permanent (DNA, fingerprints, etc.)” (source: CNIL). This is why they belong to the category of sensitive personal data and therefore require stronger protection.
Their use therefore presents challenges and carries risks with regard to the rights of the populations from which they are collected. This is all the more true given the contexts where they are most used: large-scale crises, where the vulnerability of data subjects is often most pronounced, and where the sector must therefore be even more vigilant.
The most prominent example of large-scale implementation is the context of Syrian and Palestinian refugees in Jordan, as presented in the ODI and Humanitarian Policy Group report, where iris scanning has been required for about a decade by the Jordanian Ministry of Interior as well as by UNHCR and WFP for access to their humanitarian services.
The use of biometric data may strain the rights of, and pose risks to, affected populations. These risks are detailed in the Oxfam and The Engine Room report on the use of biometric data in the humanitarian sector, which also provides examples of concrete cases that have occurred. The document is somewhat dated but remains relevant on many topics; it was drafted following an Oxfam moratorium on participation in biometric projects, pending better knowledge of the potential consequences.
The top five risks identified are:
- Reliability: the possibility of false matches, i.e. failing to match a person to their own data or, conversely, matching them to someone else’s. Some facial recognition technologies have even played a role in creating and amplifying discrimination (see the 2018 Gender Shades study by researcher Joy Buolamwini on the extent of bias in AI technology),
- Reuse: once shared or reallocated, this data becomes accessible to others, giving the private sector and States the option of reusing it for other purposes.
Some very telling examples:
- The biometric data of the 800,000 stateless Rohingya refugees transferred by UNHCR to the government of Bangladesh, which in turn transferred them to the government of Myanmar (See this Human Rights Watch study).
- The retrieval of biometric data of thousands of Afghan people (from US military systems) by the Taliban regime in August 2021, putting the people at risk of being targeted and subjected to various forms of repression (See this Human Rights Watch article).
- Numerous abuses by cash service providers have been documented over many years and crises (for example, cross-referencing databases to identify former clients with outstanding debts and “repaying” themselves from the clients’ humanitarian financial aid outside of any legal framework).
- the centralization of data on populations in vulnerable situations, which makes any data leak all the more serious for them,
- the use of technologies that process biometric data requires a high level of maintenance and therefore represents a financial cost.
For example, the provision of Wi-Fi networks for Syrian refugees in a Greek camp resulted in more than 80,000 cyber-attacks per day.
- Reputation: in the event of a biometric data leak, trust in the organisation that retained and used the data would suffer, jeopardising its activities in the field; reputational risk also includes a risk of perception, that is to say one based on misinformation (rumours, for example).
- some people cannot or do not want to submit their biometric data, which excludes them from aid when providing such data is a condition of access,
- the use of biometric data may reproduce or aggravate disparities through this exclusion (for example, a refusal linked to a cultural norm, or an impossibility linked to a disability),
- the risk of “dehumanising” people by reducing their identity to their biometric data.
In general, a person’s image is personal data. A facial image is biometric data because it allows the “unique identification” of a person. So, beyond all the registration systems set up by United Nations agencies or in partnership with private companies as part of cash transfers, NGOs should also bear in mind that a simple photograph used to inform or testify about a project’s activities can constitute biometric data. The person then takes part in the organisation’s communications, and the organisation has a responsibility of transparency and accountability towards them regarding the use made of the photo. This is all the more true with the development of new technologies such as artificial intelligence, still poorly mastered today, which allow and will increasingly allow the large-scale processing of photographic content.
As with other types of personal data collected, data subjects have rights over their image. The right to one’s image is a prerogative that allows a person to authorise or refuse the reproduction and public dissemination of their image.
Given that it is sensitive data, it is advisable to obtain the person’s informed consent. Indeed, the challenge of image collection lies not only in the unique identification of a person that it enables, but also in the sharing, that is to say the dissemination, of the image. An image can carry strong risks and impacts for a person’s privacy, especially in relation to their community.
- To be precise: a person’s consent is necessary when they are recognisable (i.e. identifiable) and alone in the image. If the image depicts a crowd of unidentifiable individuals, consent is not required,
- Regarding children: when an image of a child is taken, both the child’s consent and that of their legal guardian(s) are required.
If doubt persists about whether consent has been obtained, it is preferable not to collect the image, for the sake of transparency and respect. This principle is emphasised in the Bond Institute’s Ethical guidelines for the collection and use of content (images and stories) in the humanitarian sector, which provide insights into how to communicate with people in this situation.
Biometrics is not a simple topic, and NGOs have limited leeway over these systems compared with the collection and management systems they set up themselves.
Indeed, the technologies associated with biometrics are often imposed by donors or UN agencies within NGOs’ humanitarian programmes; the latter thus do not always have a real choice about whether to use them. It is nevertheless important to be aware of the associated issues, in order to question the organisations promoting or implementing them when necessary (for instance, testifying if consent does not seem to be respected), or to refuse a project or the use of the associated biometric systems if the conditions of “do no harm” do not seem to be met, as Oxfam did a few years ago.
Nevertheless, as discussed in the previously mentioned ODI and Humanitarian Policy Group report, it would be illusory for NGOs to refuse their use totally and irrevocably. Biometrics are clearly expected to be used increasingly, well beyond the humanitarian sector alone. It is, rather, up to NGOs to mobilise to better understand how these systems work, in order to make them more respectful of the rights of affected populations.
- The ICRC uses biometric data in its projects and has developed an internal biometric policy that provides recommendations for respecting humanitarian principles,
- To go further, check out the Engine Room website for research and case studies dedicated to the subject. The Engine Room also published a landmark study commissioned by Oxfam in 2018 on the challenges and ways in which biometric data are used in the sector, as well as the risks and opportunities that such use represents,
- The Bond Institute is a UK network of organisations working in international development. It has developed Ethical guidelines for the collection and use of content (images and stories) in the humanitarian sector, which provide advice on how to collect testimonies from the people featured in their communications,
- And finally, you can read this article from The New Humanitarian, which illustrates a shift by humanitarian actors towards the use of biometric data as part of their interventions in Ukraine.