A short guide on how AI can be ethically applied in the development sector

Artificial intelligence (AI) opens up a whole new set of digital solutions for the development sector. However, as we pointed out in a previous article, the massive potential of AI also brings equally significant concerns about its possible misuse and unintended consequences. In this article, DevelopmentAid presents a set of guidelines to help practitioners use digital solutions to solve developmental problems while avoiding potential ethical risks. The advice draws on existing research on the ethics of AI use published by USAID and the Journal of Public and International Affairs of Princeton University.

To see open and forecast tenders in the development sector connected to artificial intelligence, visit the DevelopmentAid website.

Development practitioners should be directly involved in the creation of AI tools. Even without formal technical training, development practitioners can play a key role in contextualizing the use of these tools. In general, experts possess different types of experience and information than developers and should therefore be able to point out the genuine priorities of an AI tool and any potential gaps. This diversity of perspectives can be both enriching and challenging, as “translating” concepts and needs across disciplines can make development less straightforward. However, such an interdisciplinary effort can help to create effective, inclusive and fair AI tools.

Consider whether an AI solution is really needed. Implementers should first determine whether the problem can be solved using simpler technology or even no technology at all. Sometimes the simplest solution is preferable on a cost-benefit basis. To decide whether an AI solution is warranted, implementers can follow certain steps:

1) Determine whether an AI intervention is applicable;
2) Ensure the intervention is feasible;
3) Assess whether the system could produce biased outcomes and identify the potential consequences of those outcomes;
4) Consider any unintended consequences;
5) Conduct a cost-benefit analysis;
6) Conduct a risk assessment.

Depending on the final results of this analysis, implementers should be prepared to walk away from AI if it does not serve both the development problem and the ethical requirements involved.

Involve stakeholders throughout the process and build relationships. Building effective AI tools requires taking a vast range of voices and perspectives into account. This is especially true for the people who will be directly affected by the outcomes and decisions made by a digital system. The users and the members of the target community should be involved at every step of the process, providing input and voicing concerns. This regular interaction is key to a well-informed contextual analysis and can help to flag up any potential issues early in the process. When possible, implementers should include local tech talent in the development of an AI system. Furthermore, collaborating with local organizations can open doors to essential local, accurate and timely data. Beyond the practical aspects of these interactions, investing in relationships can build trust and improve productivity over time.

Incorporate privacy and security aspects into the system by design. Using a risk assessment framework should enable implementers to recognize all the potential dangers AI tools could bring. This requires considering the system as a whole, not simply the data that is being used. Implementers should factor in political and security aspects and should consider the local legal context relating to privacy and data laws. For example, an investigation should be undertaken to establish which data protection laws need to be adhered to and whether there are any laws that prohibit the use of encryption or that would enable the government or other actors to access sensitive data. Implementers should define ownership and access before collecting and analyzing data. Informing users about privacy and security is crucial: they should provide their informed consent, be told how their data will be used and shared, and know how they can access and change it. The best interests of users and their data should be the guiding principle.

Model errors and potential bias, and test for them. AI can be biased at the system level and at the data level. System-level bias means that developers have, intentionally or unintentionally, built their own personal biases into the parameters they consider or the labels they define. Data-level bias means that the data itself is biased. Implementers should discuss potential model errors and bias beforehand and make sure they understand how these have been assessed. For example, the team could identify potential subsets of the population (urban/rural, female/male, low income/middle income/high income) and test for the errors and biases that may exist across them. What would the real-life consequences of these errors and biases be? It is crucial to ensure that error testing and performance monitoring continue after the deployment of the technology.
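To illustrate what such subgroup testing could look like in practice, here is a minimal sketch assuming a Python/pandas workflow and hypothetical column names ("region", "sex", "income_band", a ground-truth "actual" label and a model "predicted" output). It is an illustration of the idea rather than a prescribed method.

import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    # Share of incorrect predictions within each subgroup of group_col.
    errors = df["predicted"] != df["actual"]
    return errors.groupby(df[group_col]).mean()

def audit_subgroups(df: pd.DataFrame, group_cols: list) -> None:
    # Print per-subgroup error rates and flag large gaps between groups.
    for col in group_cols:
        rates = error_rates_by_group(df, col)
        gap = rates.max() - rates.min()
        print(f"Error rates by {col}:")
        print(rates.to_string())
        if gap > 0.05:  # illustrative threshold, not a standard
            print(f"Warning: {gap:.1%} gap between subgroups of '{col}'")

# Example usage with hypothetical evaluation data:
# df = pd.read_csv("evaluation_results.csv")
# audit_subgroups(df, ["region", "sex", "income_band"])

The same idea extends to other performance measures (for example, false-positive rates or coverage) and to whatever subgroups are locally relevant, and it can be re-run periodically after deployment as part of ongoing monitoring.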

Clearly establish the roles of each stakeholder and create a protocol for the transfer of responsibility. When creating an AI system, clear roles should be defined and shared across the stakeholders – who is responsible for which component of the AI tool? Also, a detailed protocol should be discussed and created for transferring the technology from the developing entity to the implementers. The implementer should be able to maintain the privacy and the security of the tool and retain control over the system. This might include a training process that covers how to use the AI tool, the potential ethical issues that could arise during implementation, and how to monitor progress.

The development sector can anticipate the issues that already surround the use of AI technology in development and address them before deployment. Strengthening local technical capacity could be one further step towards bringing the local perspective into technology development, while reinforcing relevant local governance structures and advising on responsible data practices could help partner countries to become more self-reliant in terms of AI governance.

DevelopmentAid publishes all the latest news regarding how AI technology is transforming the development sector. To stay informed, become a member and sign up for our newsletter.