'Harm to Nonhuman Animals from AI: a Systematic Account and Framework' by Simon Coghlan and Christine Parker in (2023) 36(25) Philosophy & Technology comments
This paper provides a systematic account of how artificial intelligence (AI) technologies could harm nonhuman animals and explains why animal harms, often neglected in AI ethics, should be better recognised. After giving reasons for caring about animals and outlining the nature of animal harm, interests, and wellbeing, the paper develops a comprehensive ‘harms framework’ which draws on scientist David Fraser’s influential mapping of human activities that impact on sentient animals. The harms framework is fleshed out with examples inspired by both scholarly literature and media reports. This systematic account and framework should help inform ethical analyses of AI’s impact on animals and serve as a comprehensive and clear basis for the development and regulation of AI technologies to prevent and mitigate harm to nonhumans.
... This paper provides a systematic account and a ‘harms framework’ for understanding how artificial intelligence (AI) technologies could damage the interests of nonhuman animals (hereafter ‘animals’). Technology has sometimes greatly benefitted animals, such as via modern veterinary medicine or agricultural machines that relieved ‘beasts of burden’ (Linzey & Linzey, 2016). Yet, technology has also profoundly harmed nonhumans. Construction of the Chicago stockyards and its assembly-line systems in the 1800s, for example, enabled the mass slaughter and processing of animals (Blanchette, 2020; Sinclair, 2002). Around the 1950s, specialised factory-farming technologies like sow stalls, battery cages, and automated sheds further amplified intentional harm to farmed individuals. The Chicago stockyards also soon led to Henry Ford’s assembly-line automobiles, the modern ancestors of which unintentionally kill and injure millions of animals annually (Ree et al., 2015).
Today, in the twenty-first century, AI has significant potential to harm animals. AI refers to digital technologies that perform tasks associated with intelligent beings like classifying, predicting, and inferring (Copeland, 2022). AI’s growing power owes much to increasing data from, for example, the digital economy, online life, and manifold and integrated sensors in the environment and on or in human and animal bodies (e.g. as wearables)—the so-called Internet of Things or IoT. Its power also stems from modern machine learning (ML), including machine vision, natural language processing, and speech recognition.
In ML, a system is trained on data from which it learns to make new classifications and inferences beyond its explicit programming. We shall in this paper side-step human-level or general AI (and AI that is arguably sentient), concentrating instead on narrow (and non-sentient) AI that is developed and used for specific purposes (Russell, 2019), which is arguably of more pressing moral concern than the emergence of very human-like AI.
Some existing technologies used to manage animals, such as automation in chicken sheds and dairies, may be augmented by AI. Moreover, some robots, drones, and vehicles incorporate AI in ways that may benefit or harm animals. Often the intention in developing and using AI is to positively benefit animals. For example, smart home applications for animal companions (Bhatia et al., 2020) and smart agriculture (Makinde et al., 2019; Neethirajan, 2021b) are often marketed as boons for animal welfare through better monitoring and control of the conditions in which they are kept. Another use that might benefit animals is AI image recognition to help detect illegal wildlife trafficking (O’Brien & Pirotta, 2022). Yet, as we show in some detail, AI can also act—both independently and with existing technologies—to create and amplify harms to animals (Sparrow & Howard, 2021; Tuyttens et al., 2022).
A tendency exists to see advances in AI as inevitably bringing ‘improvements across every aspect of life’ (Santow, 2020). For example, autonomous machine intelligence can seem more objective and less prejudiced than human intelligence. Nonetheless, society is increasingly recognising AI’s potential for ill (Pasquale, 2020; Tasioulas, 2022; Yeung, 2022). Despite this, the burgeoning scholarship in AI ethics (Bender et al., 2021; Buolamwini & Gebru, 2018; Eubanks, 2018; O’Neil, 2016), while vital and sometimes courageous in critiquing Big Tech power and algorithmic injustice, has largely ignored animals. While some ethicists, including Peter Singer (Singer & Tse, 2022), have recently begun to correct this oversight (see also, e.g. Bendel, 2016, 2018; Bossert & Hagendorff, 2021a; Hagendorff, 2022; Owe & Baum, 2021; Ziesche, 2021), dedicated work on AI and animals is relatively rare.
This paper’s systematic account of animal harm helps address that gap by setting out the breadth of contexts and plurality of ways in which animals may be harmed by AI. Drawing on the work of animal scientist David Fraser (Fraser, 2012), we develop a harms framework that includes intentional, unintentional, proximate, and more distant impacts of AI. While we do not propose specific ethical or legal responses, the framework provides a comprehensive and clear basis for crafting design, regulatory, and policy responses for animals.
The paper runs as follows. Section 2 outlines why concern for animal harms is warranted despite a general neglect of animals in AI ethics scholarship, explains the plural range of harms animals can arguably experience, and introduces a practical five-part harms framework or typology that recognises different types and causes of harm to animals from AI. The framework includes intentional harms that are legal or condemned, direct and indirect unintentional harm, and foregone benefits. Section 3 then uses the framework to identify and illustrate actual and possible AI harms to animals in each of the five categories, based on a narrative review of literature. Section 4 concludes by considering implications of our framework and suggesting directions for further research.
'Restating Copyright Law’s Originality Requirement' by Justin Hughes in (2021) 44 Columbia Journal of Law & Arts 383 notes
The Comments and Reporters’ Notes to § 6 devote an unusual amount of space to human authorship. The draft Restatement takes the view that “[t]o qualify for copyright protection, a work of authorship must be authored by a human being,” and “not, for example . . . works created by nonhuman animals.” The limited case law in this area is sufficiently nuanced as to make one wonder if the Reporters are trying to eliminate preemptively the possibility of “authorship” by artificial intelligence, but this is apparently not their intent. Recognizing that “[a] computer program might someday produce an output so divorced from the original human creator,” the “Restatement does not take a position on” authorship by artificial minds.
The case law on nonhuman authorship is basically of two sorts. First, there are the cases in which the literary work in question was allegedly authored by sentient beings of a divine, celestial, or spiritual nature; I will call these the “spiritual being cases.” Second, there is one case—the 2018 Naruto v. Slater decision—in which the visual works in question (photos) were arguably authored by a nonhuman primate. The Naruto decision was a fairly singular exercise. People for the Ethical Treatment of Animals (PETA) attempted to bring suit on behalf of Naruto, a crested macaque monkey, against the publisher (and copyright claimant) of a book called Monkey Selfies.
Both the district court and Ninth Circuit panel concluded that animals do not have standing under Title 17 using “a simple rule of statutory interpretation” previously crafted by the Ninth Circuit: “[I]f an Act of Congress plainly states that animals have statutory standing, then animals have statutory standing. If the statute does not so plainly state, then animals do not have statutory standing.” This does not really strike me as a principle of copyright law. It was a ruling that nonhuman animals do not have standing under federal law when the law is silent on that issue, not a holding that, as the draft Restatement represents, “[t]he photographs taken by [nonhuman animals] do not qualify for copyright protection because they were not authored by a human being.” Moreover, the connection between the Naruto fact pattern and the spiritual being cases was only made by the Naruto trial court, not the Ninth Circuit.
It is true that the spiritual being cases have pondered the question of whether a work “claimed to embody the words of celestial beings rather than human beings[] is copyrightable at all.” But we are adrift in terms of direct answers that are holdings and not dicta. Instead, when originality can be attributed to combined activities of humans and sentient nonhumans, courts will conclude that the human participant(s) added enough original expression to support a copyright. For example, in the Ninth Circuit’s 1997 Urantia Foundation v. Maaherra decision, the panel found that humans “pos[ing] specific questions to the spiritual beings,” then selecting and arranging the spiritual beings’ answers was sufficiently creative to confer a copyright.
Similarly, in the 2000 Penguin Books v. New Christian Church of Full Endeavor decision, a judge in the Southern District of New York considered a “defense of lack of originality” based on the human originator of a book—Helen Schucman—testifying that “she began to hear a ‘Voice’ that would speak to her whenever she was prepared to listen”; that the Voice told her to take notes; and that, for seven years, “she filled nearly thirty stenographic notebooks with words she believed were dictated to her by the Voice”.
But she also made revisions with a (human) collaborator, William Thetford. In addition, “at least some of the editing and shaping of the manuscript was initiated by Schucman”; the manuscript went through two additional drafts, one edited by Schucman, one edited by Schucman in collaboration with Thetford; and during this process sections were “rewritten so that the text would flow smoothly and communicate clearly its intended message.” Another colleague, Kenneth Wapnick, later made additional editorial suggestions.
Concluding that the arrangement of the materials had been determined by the human contributors, that the text “reflect[ed] many of Schucman’s personal interests and tastes,” and that all the editorial changes “were initiated by Schucman, Thetford, or Wapnick,” the court found that there was enough creativity to support human authorship (regardless of whether there was divine joint authorship). But the Penguin Books court went further, offering the alternative reasoning that the work was, plain and simple, “a literary work authored by Schucman” and that, “[as] a matter of law, dictation from a non-human source should not be a bar to a copyright.”
Perhaps the only other spiritual being case of note is a 1941 district court decision, Oliver v. St. Germain Foundation, in which the copyright owner, Frederick Spencer Oliver, describes himself as the amanuensis to whom “letters” were dictated by Phylos the Thibetan, a spirit. But the court does not directly hold that the work is uncopyrightable because of the spiritual being source of the expression. Instead, the court treats the spiritual being’s words as “facts” being reported by Frederick Spencer Oliver, analogous to an author of a series of interviews (with humans), who would not have copyright over the words of the interviewees. The Oliver court also reasons that the defendant copied neither prose nor style and arrangement of the plaintiff’s work, intimating that those might be protected as original expression from the human contributor to the project.
Does any of this belong in a Restatement of Copyright? I doubt it. The Copyright Office Compendium says that the office will not register works by nonhumans, but we do not need an ALI Restatement to regurgitate an agency regulation that is not binding on courts. The day sentient refugees from some intergalactic war arrive on Earth and are granted asylum in Iceland, copyright law will be the least of our problems. But I am confident that once those sentient aliens are “nationals” in a Berne country, nothing in Naruto, Urantia, Penguin Books, or Oliver will keep them from being treated as “authors” under American copyright law.
Similarly, once some AI is sentient enough to demand its own civil rights and protection under the Thirteenth Amendment, my guess is that “person” in copyright law will not be limited to Homo sapiens. (Since the Reporters apparently agreed to defer to the future on the question of AI authorship, some bits and pieces of the 2020 draft—like Illustration 6 to § 6—should probably be dropped.) Same for the day when a chimeric half human/half horse is proven to be sentient; “person” in copyright law will include them. These issues are fun conjecture for academics, but they are so rarefied as to make one wonder why the draft Restatement discusses them at all.