Responsible data management toolbox

8.3 Artificial intelligence




Keep in mind

Artificial intelligence is a technology with very interesting potential, and even current, practical applications.

Nevertheless, the “do no harm” principle must prevail above all: NGOs have a responsibility to know the subject well enough to be able to verify, for each potential use, that it will not go against this principle.

In a context where technologies evolve very quickly, it is not always easy to understand the ins and outs of each new tool.

NGOs must therefore reflect on the technology more broadly and implement an associated strategy that guides its use, to avoid anarchic and irresponsible adoption.

The development of artificial intelligence in the sector reinforces NGOs’ need to know this technology better, as we will see in this subsection, and in particular the opportunities and threats it represents, in order to reduce the risks of its different uses.

8.3.1 What are we talking about?

  • A definition of AI from the Montreal Declaration for a Responsible Development of Artificial Intelligence: “Artificial Intelligence refers to the series of techniques which allow a machine to simulate human learning, namely to learn, predict, make decisions and perceive its surroundings. In the case of a computing system, artificial intelligence is applied to digital data.”,
  • AI is linked to machine learning, which is “the branch of artificial intelligence that consists of programming an algorithm so that it can learn by itself.” (Montreal Declaration for a Responsible Development of Artificial Intelligence).
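
To make this definition concrete, here is a minimal sketch of what “learning by itself” means in practice: a toy linear model that estimates the link between two variables from example data alone, rather than being given the rule in advance. The figures and the learning rate are invented for illustration.

```python
# Minimal illustration of "machine learning": instead of being given the
# rule linking inputs to outputs, the program estimates it from examples.
# Toy data (invented): rainfall in mm and crop yield in tonnes/hectare.
rainfall_mm = [50, 80, 120, 160, 200]
yield_t = [1.1, 1.6, 2.4, 3.1, 3.9]

# Rescale inputs so gradient descent converges with a simple fixed
# learning rate (a standard preprocessing step).
x_scaled = [x / 100 for x in rainfall_mm]

# Fit y = a*x + b by gradient descent: repeatedly nudge a and b in the
# direction that reduces the mean squared prediction error.
a = b = 0.0
lr = 0.1
n = len(yield_t)
for _ in range(5000):
    grad_a = sum(2 * ((a * x + b) - y) * x for x, y in zip(x_scaled, yield_t)) / n
    grad_b = sum(2 * ((a * x + b) - y) for x, y in zip(x_scaled, yield_t)) / n
    a -= lr * grad_a
    b -= lr * grad_b

print(f"learned rule: yield = {a:.3f} * (rainfall/100) + {b:.3f}")
print(f"prediction for 140 mm of rainfall: {a * 1.4 + b:.2f} t/ha")
```

The important point for the definition above is that the relationship between rainfall and yield is never written into the program: it is extracted from the data, with all the quality and bias issues that implies.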

Although artificial intelligence is only in its relative infancy compared with what it will represent in the international solidarity sector (as in all other sectors) in a few years’ time, there are already many current and planned relevant uses.

As in all other sectors, NGO members already use mainstream tools, such as ChatGPT created by OpenAI, for needs such as translation or the production and synthesis of documents and meetings, in order to work more efficiently. The specific counterpart for data management, for instance to create collection or visualisation products or to classify data, is only just starting.

Yet, as in other sectors, a number of relevant uses related to the activities implemented with populations are under study (see some examples in ICTworks blog posts on the subject), such as medical diagnosis, predictive crisis analysis, context evolution analysis, or agricultural productivity, to name a few.

At this stage, the uses and positions diverge between the different international solidarity actors:

  • The UN agencies, which widely use AI in their interventions: the latest report of the International Telecommunication Union, which compiles the AI activities of the various UN agencies, presents 281 AI projects from 40 UN agencies in support of the Sustainable Development Goals. All SDG themes are concerned, particularly institutional strengthening, infrastructure and industry, economic development, reduction of inequalities, and health. Their work can be followed through the AI for Good initiative.
  • The NGOs, which are more cautious, or rather still at the stage of experimentation.

A good example of this last point is the WhatsApp chatbot (called Solis Bot) set up by Solidarités International in refugee camps in Lebanon to provide refugees with information about the services they can benefit from. This application, highly appreciated by affected populations, has made it possible to meet the needs of a highly digitised population, and thus to refocus the teams’ “human” time on more personalised follow-up of the most vulnerable populations and of those with less access to technology.
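
To make the pattern concrete, here is a minimal sketch of the kind of FAQ logic such a chatbot relies on. This is not Solis Bot’s actual implementation: the endpoint, the message format and the service directory are all invented for illustration, and a real deployment would receive and send messages through the WhatsApp Business API rather than this simplified JSON.

```python
# Minimal FAQ-style chatbot sketch (illustrative only; not Solis Bot's
# actual code). The /webhook endpoint and JSON shape are invented.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Invented service directory for illustration.
FAQ = {
    "health": "Health clinic: open Mon-Fri, 9:00-16:00, sector B.",
    "water": "Water distribution: daily at 10:00, points A1 and A3.",
    "legal": "Legal aid desk: Tuesdays, at the registration office.",
}

@app.post("/webhook")
def webhook():
    text = request.get_json().get("message", "").lower()
    # Simple keyword matching; real chatbots typically add language
    # detection and an escalation path to a human operator.
    for keyword, answer in FAQ.items():
        if keyword in text:
            return jsonify({"reply": answer})
    return jsonify({"reply": "Sorry, I did not understand. "
                             "Reply MENU to see the available topics."})

if __name__ == "__main__":
    app.run(port=8000)
```

Keyword matching of this kind is deliberately predictable and auditable: every possible answer is written and validated by the teams beforehand, which avoids the risk of a generative model inventing information for affected populations.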

8.3.2 What is at stake?

As we have seen, the increasingly widespread and inevitable use of AI is contributing to the digital transformation of the world, and it is also impacting the humanitarian sector. Its actors clearly will not be able to curb its long-term use and will have to adapt. Properly framed and well used, AI has much to offer the sector in terms of new data analysis techniques, efficiency, data quality, and the scaling up of promising initiatives, in an increasingly complex context with fewer and fewer resources.

Nevertheless, it seems necessary for NGOs to take part in the ethical and practical reflections that define the uses of AI, and to help orient the direction AI takes in the sector, rather than having it imposed on them by the very rapid technological advances of the private sector and by what is being tested and implemented by the United Nations.

According to NetHope, the following needs to be taken into consideration in any ethical reflection on the use of AI:

  • Data is not neutral,
  • Biases are omnipresent,
  • Impartiality is complex,
  • Intention is essential,
  • Responsible innovation is a responsibility shared by all.

Indeed, there are many risks associated with its use that may run contrary to humanitarian principles, whether in terms of respect for the rights of individuals (mass surveillance, social scoring, etc.), data protection, or increased inequalities and discrimination, not to mention its carbon footprint.

Here is a summary (drawn from this Humanitarian Practice Network article, an ICRC blog post by Christopher Chen, and Sarah Spencer’s paper on humanitarian AI):

  • Biases: AI is designed by humans who are guided by their own social and cultural norms, and it is fed mainly by Western datasets involving interpretation of data, in other words biases. These biases are reproduced in the operation of the different AI techniques, and even amplified by the reliability granted to machines and by the belief that automation excludes fraud more effectively. They can lead to discrimination.

For example, an AI used by the Louisiana State Police in April 2023 (Source: Next INpact article, April 2023, only available in French) misidentified a person, confusing him with a theft suspect, which led to his incarceration for six days before the facial recognition error was acknowledged. The article also states that it is common knowledge that “the darker a subject’s skin, the less facial recognition algorithms work.”

  • The lack of oversight: oversight of the uses of AI technology remains unclear, insufficient and fragmented, and this can weaken the protection of populations’ personal data, for instance when an AI harvests whatever data it can find without any ethical questions being asked. There is no international framework regulating the use of AI and no safeguards prohibiting misuse. Risks are not sufficiently identified, and using digital technology does not exonerate organisations from their duty of responsible data management.
  • Accountability: when AI technology makes a mistake that impacts one or more people, such as wrongly excluding them from humanitarian assistance services, how do we determine accountability? It is often the human team that gets blamed rather than the people who created the tool. AI tools should be designed to support human decision-making processes, not to make the decision automatically on their own (a minimal human-in-the-loop sketch follows this list).
  • The production of false information: AI technologies are trained on data that can sometimes be incomplete, erroneous or lacking context. In such cases, the tool will reproduce the errors and disseminate false information.

For instance, the company NewsGuard, which fights disinformation, found that “in 80% of the cases submitted by the company […], ChatGPT produced very convincing lies and violent speeches” (Source: Next INpact article, April 2023, only available in French).
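
To illustrate the accountability point above, here is a minimal human-in-the-loop sketch in which the model only recommends, and a named human reviewer makes and logs the final decision. The names, the confidence threshold and the workflow are invented for illustration and are not taken from any particular tool.

```python
# Minimal human-in-the-loop sketch (illustrative; names and thresholds
# are invented). The model only *recommends*; a human makes and logs
# the final decision, which keeps accountability with the organisation.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    case_id: str
    suggestion: str    # e.g. "eligible" / "ineligible"
    confidence: float  # model's own confidence estimate, 0..1

def review(rec: Recommendation, reviewer: str) -> dict:
    # Low-confidence outputs are flagged so reviewers know to dig deeper.
    flag = "NEEDS EXTRA REVIEW" if rec.confidence < 0.8 else "routine"
    print(f"[{flag}] case {rec.case_id}: model suggests "
          f"'{rec.suggestion}' (confidence {rec.confidence:.0%})")
    decision = input(f"{reviewer}, accept (a) / override (o)? ")
    final = rec.suggestion if decision == "a" else "overridden by reviewer"
    # Audit trail: who decided what, when, and what the model suggested.
    return {
        "case_id": rec.case_id,
        "model_suggestion": rec.suggestion,
        "final_decision": final,
        "reviewer": reviewer,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

audit_log = [review(Recommendation("C-0042", "eligible", 0.63), "field officer")]
print(audit_log[-1])
```

The design choice that matters here is the audit trail: if a person is wrongly excluded, the log shows what the model suggested, who validated it and when, so responsibility can actually be traced.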

8.3.3 How can NGOs tackle the topic?

Since artificial intelligence remains a very new and still little-known topic, especially among NGOs, it is their responsibility to stay informed on the subject so that they can make decisions about whether or not to use it. Beyond that, they need to carry out a very thorough risk assessment before implementing this type of technology, to ensure that the tools will deliver the expected benefits without putting people at risk, and to put in place measures to counter any potential negative effects. Similarly, when a technology is “imposed” on them by other stakeholders (donors, states, etc.), they are responsible for enquiring about the risk analysis that was carried out and for questioning it if necessary.

No general, binding framework on the use of AI yet exists anywhere in the world; there are only policies and recommendations from certain organisations. For instance, the UN has developed 10 Principles for the Ethical Use of Artificial Intelligence in the United Nations System.

To manage the risks discussed above, there are five key elements that need to be implemented at organisational level, according to the National Institute of Standards and Technology (NIST); a sketch of a risk register built on these elements follows the figure below:

  • Have an AI risk management policy,
  • Assess risks at all levels (organisational as well as individual),
  • Assess all risks related to the AI system and prioritise the highest,
  • Document roles and responsibilities,
  • Link AI reliability and risks.

[Figure: NIST AI Risk Management Framework]

Source: Risk Management Framework NIST
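
As an illustration of how these elements can translate into practice, here is a minimal sketch of an AI risk register that documents roles, assesses risks at both the organisational and individual levels, and prioritises the highest. The fields, scales and example entries are invented and are not a format prescribed by NIST.

```python
# Minimal sketch of an AI risk register reflecting the NIST elements
# above (illustrative only; fields, scales and entries are invented).
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str
    level: str        # where the risk sits: "organisational" or "individual"
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    owner: str        # documented role responsible for the risk
    mitigation: str = ""

    @property
    def score(self) -> int:
        # Simple likelihood x impact score used to prioritise risks.
        return self.likelihood * self.impact

register = [
    AIRisk("Chatbot gives wrong eligibility information",
           "individual", 3, 4, "programme manager",
           "human review of flagged answers"),
    AIRisk("Training data under-represents some groups",
           "organisational", 4, 4, "data protection officer",
           "bias audit before deployment"),
]

# Address the highest risks first, as the elements above require.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"score {risk.score:2d} | {risk.level:<14} | "
          f"{risk.owner:<23} | {risk.description}")
```

Even a register this simple covers the policy’s core needs: every risk has a documented owner, a level, a priority score and a planned mitigation that can be reviewed over time.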

8.3.4 Key Resources

  • Other resources are also useful on the subject. You can read the International Review of the Red Cross article on the potential of AI in humanitarian action and the opportunities and risks it represents; it also sheds light on how AI can be used safely and responsibly in the field,
  • We recommend listening to the “Humanitarian AI” podcasts, a community of exchange on AI applications in the humanitarian sector, and particularly the episode with ACAPS, where the operational uses of AI are discussed,
  • You can also explore ICTworks resources including this article on the development of ethical principles guiding the use of AI, contextualising and presenting the GSMA Artificial Intelligence Ethics Playbook,
  • DataKind is an NGO that supports civil society in the field of data science and AI. In this context, it has developed the DataKind Playbook for its volunteers, which also gives recommendations for assessing and weighing the risks related to a project’s data during the design phase.