Using AI ethically in a pandemic

Taking a principled approach is crucial to the successful use of AI in pandemic management, say Stephen Cave and colleagues

CREDIT: This is an edited version of an article that originally appeared on The BMJ

In a crisis such as the COVID-19 pandemic, governments and health services must act quickly and decisively to stop the spread of the disease. Artificial intelligence (AI), which in this context largely means increasingly powerful data-driven algorithms, can be an important part of that action – for example, by helping to track the progress of a virus, or to prioritise scarce resources.

In order to save lives it might be tempting to deploy these technologies at speed and scale; however, the deployment of AI can affect a wide range of fundamental values, such as autonomy, privacy and fairness. AI is much more likely to be beneficial, even in urgent situations, if those commissioning, designing and deploying it take a systematically ethical approach from the start.

Ethics is about considering the potential harms and benefits of an action in a principled way; for a widely deployed technology, this lays a foundation of trustworthiness on which to build. Ethical deployment requires consulting widely and openly, thinking deeply and broadly about potential impacts, and being transparent about the goals being pursued, the trade-offs being made, and the values guiding these decisions. In a pandemic, such processes should be accelerated, but not abandoned; otherwise two main dangers arise: first, the benefits of the technology could be outweighed by harmful side-effects and, second, public trust could be lost.

Ethical decision-making is, of course, already an integral part of healthcare practice, where it is often structured according to the four pillars of biomedical ethics: beneficence, non-maleficence, autonomy and justice. When considering the use of AI in a public health setting, such as a pandemic, it might, therefore, be useful to consider how the distinctive challenges posed by AI pertain to these four well-established principles.


Beneficence

It might seem obvious that the use of AI in managing a pandemic is beneficent; it is intended to save lives. A risk exists, however, that the vague promise that a new technology will ‘save lives’ can be used as a blanket justification for interventions we might not otherwise consider appropriate, such as widespread deployment of facial recognition software. Those developing or deploying such a system must be clear about who their intervention will benefit, and how. Only by making this explicit can one ensure that the intervention is proportionate to its benefit.


Non-maleficence

In order to avoid unintended harms from the use of AI in the management of a pandemic, it is important to carefully consider the potential consequences of proposed interventions. Some interventions – for example, imposing self-isolation – may cause mental health problems for those who are already vulnerable, or carry high economic costs for individuals. We must remember that AI systems seek to optimise a particular objective function – that is, a mathematical function representing the goals the system has been designed to achieve; any potential harms not represented by this function will not be considered in the system’s predictions.
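This point can be made concrete with a minimal sketch. The interventions, names, and numbers below are invented for illustration only; the sketch shows that an optimiser ranks options purely by its objective function, so a harm such as isolation burden influences the choice only if it is written into that function.

```python
# Hypothetical sketch: a policy chooser that scores interventions by a single
# objective. All names and numbers are illustrative assumptions, not data
# from the article.

interventions = {
    "mass self-isolation": {"infections_averted": 900, "isolation_burden": 800},
    "targeted isolation":  {"infections_averted": 700, "isolation_burden": 200},
}

def objective(effects, burden_weight=0.0):
    # With burden_weight=0 the optimiser "sees" only infections averted;
    # the mental-health and economic costs are simply not in its world.
    return effects["infections_averted"] - burden_weight * effects["isolation_burden"]

best_naive = max(interventions, key=lambda k: objective(interventions[k]))
best_weighted = max(interventions,
                    key=lambda k: objective(interventions[k], burden_weight=0.5))

print(best_naive)     # "mass self-isolation": raw case reduction wins
print(best_weighted)  # "targeted isolation": the ranking flips once the
                      # side-effect enters the objective
```

The design point is not the particular weight chosen, but that someone must decide, explicitly, which harms appear in the objective at all.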


Autonomy

The benefits of new technologies almost always depend on how they affect people’s behaviour and decision-making, from the precautions an individual chooses to take, to treatment decisions by healthcare professionals, and politicians’ prioritisation of different policy responses. Respecting people’s autonomy is, therefore, crucial. Designers can help users to understand, and trust, AI systems so that they feel able to use them with autonomy. For example, diagnostic support systems used by healthcare professionals in a pandemic should provide sufficient information about the assumptions behind, and uncertainty surrounding, a recommendation, so that it can be incorporated into their professional judgment.
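One way a system could surface this information is to return its uncertainty and caveats alongside the prediction, rather than a bare label. The structure, field names, and figures below are a hypothetical sketch, not any real diagnostic system's API:

```python
# Hypothetical sketch: a diagnostic-support result that carries its own
# uncertainty and assumptions, so a clinician can weigh it rather than
# take it on faith. All values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    condition: str
    probability: float              # model's point estimate
    interval: tuple                 # assumed 95% interval around the estimate
    assumptions: list = field(default_factory=list)  # caveats shown to the user

rec = Recommendation(
    condition="COVID-19",
    probability=0.72,
    interval=(0.55, 0.85),
    assumptions=["trained on hospitalised patients only",
                 "symptom onset < 7 days"],
)

# The report makes the uncertainty explicit instead of a bare positive/negative.
report = (f"{rec.condition}: {rec.probability:.0%} "
          f"(95% interval {rec.interval[0]:.0%}-{rec.interval[1]:.0%})")
print(report)
```

Presenting the interval and the caveats leaves the final judgment with the professional, which is the autonomy the passage describes.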


Justice

Data-driven AI systems can differentially affect different groups, as is well documented. When data of sufficient quality for some groups are lacking, AI systems can become biased – often in ways which discriminate against already disadvantaged groups, such as racial and ethnic minorities. For example, smartphone apps are increasingly heralded as tools for monitoring and diagnosis – such as the MIT-Harvard model for diagnosing COVID-19 through the sound of coughs – but access to smartphones is unevenly distributed between countries and demographics, with global smartphone penetration estimated in 2019 to be 41.5%. This limits both whose data are used to develop such apps and who has access to the service. If care is not taken to detect and counteract any biases, using AI for pandemic management could worsen health inequalities.
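Detecting such biases can start with a simple audit of how a model performs for each group. The records and group labels below are invented for illustration; the sketch only shows the shape of the check, not a real fairness methodology:

```python
# Hypothetical sketch: auditing a diagnostic model's error rate per group.
# Group labels, predictions, and outcomes are invented for illustration.
from collections import defaultdict

records = [
    # (group, predicted_positive, actually_positive)
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, True), ("group_a", False, False),
    ("group_b", False, True), ("group_b", True, False),
    ("group_b", True, True), ("group_b", False, False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, predicted, actual in records:
    counts[group][0] += predicted != actual
    counts[group][1] += 1

error_rates = {g: errs / total for g, (errs, total) in counts.items()}
disparity = max(error_rates.values()) - min(error_rates.values())

print(error_rates)  # per-group error rates
print(disparity)    # a large gap is a signal to investigate data coverage
```

A disparity like this does not by itself say why the model underperforms for one group, but it flags where missing or lower-quality data may be doing harm.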

AI has the potential to help us solve increasingly important global problems, but deploying powerful new technologies for the first time in times of crisis always comes with risks. The better placed we are to deal with ethical challenges in advance, the easier it will be to secure public trust, and quickly roll out technology in support of the public good.
