11 May 2023

Vets

'Ethics of using artificial intelligence (AI) in veterinary medicine' by Simon Coghlan and Thomas Quinn in (2023) AI and Society comments 

This paper provides the first comprehensive analysis of ethical issues raised by artificial intelligence (AI) in veterinary medicine for companion animals. Veterinary medicine is a socially valued service, which, like human medicine, will likely be significantly affected by AI. Veterinary AI raises some unique ethical issues because of the nature of the client–patient–practitioner relationship, society’s relatively minimal valuation and protection of nonhuman animals and differences in opinion about responsibilities to animal patients and human clients. The paper examines how these distinctive features influence the ethics of AI systems that might benefit clients, veterinarians and animal patients—but also harm them. It offers practical ethical guidance that should interest ethicists, veterinarians, clinic owners, veterinary bodies and regulators, clients, technology developers and AI researchers. ... 

AI—i.e. digital systems that perform tasks normally requiring human intelligence (Russell and Norvig 2021)—is poised to transform human medicine (Topol 2019; Wilson et al. 2021) and may prove equally transformative of veterinary medicine (Basran and Appleby 2022; WIRED Brand Lab 2022). Like human medical AI (Astromskė et al. 2021; Dalton-Brown 2020; Keskinbora 2019), veterinary AI raises important ethical issues. Although several papers touch on ethical aspects of veterinary AI (Appleby and Basran 2022; Ezanno et al. 2021; Steagall et al. 2021), including its implications for ‘livestock’ (Neethirajan 2021), a more detailed ethical evaluation of companion animal AI is wanting. Our analysis of AI’s ethical implications for companion animal medicine should interest ethicists, veterinarians, clinic owners, veterinary bodies and regulators, clients, technology developers and AI researchers. 

Veterinary practice raises unique ethical issues that stem from the client–patient–practitioner relationship. Companion animals are potentially more exposed to harms from AI than are humans because they lack the same strong social, moral and legal status. For example, the law does not effectively protect animals from wrongful injury or from clients who seek unwarranted or unjustified ‘euthanasia’ (Favre 2016). These conditions are relevant to the ethics of veterinary AI. At the same time, medical AI raises its own distinctive ethical issues—issues like trust, data security and algorithmic transparency—which we also discuss in the veterinary context. 

AI in veterinary medicine might be used for business purposes and hospital logistics like booking appointments. Technology that affects practitioner workflow could have ethical implications, as could other AI, such as language translation apps that enable communication with linguistically diverse clients. However, AI for triage, diagnosis, prognosis and treatment raises the most distinctive, complex and consequential ethical questions. We concentrate on AI for such medical decision-making. 

Currently, AI enjoys massive public and private investment, propelled by stories like algorithms defeating Jeopardy! champions and Go masters (Mitchell 2019). Another indication of AI’s rapid ascent is recent large language models like ChatGPT and text-to-image generators that demonstrate remarkable, though sometimes strange and biased, outputs (see Fig. 1). Yet most people are bewildered by the technical jargon of artificial neural networks, deep learning, computer vision, random forests and natural language processing (Waljee and Higgins 2010). Veterinary practitioners too may not always understand, for instance, the ways in which AI learns from data and autonomously updates its algorithms to draw inferences about previously unencountered data (e.g. from patient radiographs or medical records)—and this may create uncertainty about its use in healthcare. 

This issue of trust in technology is important. To some degree, medical AI remains just as much an art as a science (Quinn et al. 2021b), and AI developers are only now exploring how to apply modern machine learning (ML) methods successfully in medicine. This involves experimenting with how data are collected and pre-processed, how AI models are applied and optimised and how model performance is evaluated. Each step contains many nuances that could affect model operation in clinic settings and unintentionally harm patients and clients. While busy practitioners cannot be expected to understand all these nuances, they will increasingly need at least a basic understanding of the ethical risks and benefits of AI. This paper identifies and examines these ethical issues. 

The paper runs as follows. Section 2 outlines medical AI in veterinary practice. Section 3 introduces ethical principles of AI, human medicine and veterinary medicine. Section 4 identifies and examines nine ethical issues raised by veterinary AI. Section 5 discusses important ethical norms in veterinary medicine and AI’s distinctive implications in that realm, as well as providing some practical guidance for AI’s use.