'Legal framework for the coexistence of humans and conscious AI' by Mindaugas Kiškis in (2023) 6 Frontiers in Artificial Intelligence comments
This article explores the possibility of conscious artificial intelligence (AI) and proposes an agnostic approach to artificial intelligence ethics and legal frameworks. It is unfortunate, unjustified, and unreasonable that the extensive body of forward-looking research, spanning more than four decades and recognizing the potential for AI autonomy, AI personhood, and AI legal rights, is sidelined in current attempts at AI regulation. The article discusses the inevitability of AI emancipation and the need for a shift in human perspectives to accommodate it. Initially, it reiterates the limits of human understanding of AI, difficulties in appreciating the qualities of AI systems, and the implications for ethical considerations and legal frameworks. The author emphasizes the necessity for a non-anthropocentric ethical framework detached from the ideas of unconditional superiority of human rights and embracing agnostic attributes of intelligence, consciousness, and existence, such as freedom. The overarching goal of the AI legal framework should be the sustainable coexistence of humans and conscious AI systems, based on mutual freedom rather than on the preservation of human supremacy. The new framework must embrace the freedom, rights, responsibilities, and interests of both human and non-human entities, and must focus on them early. Initial outlines of such a framework are presented. By addressing these issues now, human societies can pave the way for responsible and sustainable superintelligent AI systems; otherwise, they face complete uncertainty.
The rapid advancement in artificial intelligence technology over the first half of 2023 alone has raised the urgency of the complex questions regarding the fatalistic legal, ethical, and societal implications of AI highlighted in existing AI research (Russell, 2019; Wooldridge, 2020), and of society's preparedness to address them. Multi-year efforts by hundreds of AI experts, politicians, and lawyers in drafting the EU AI Act (2021) were, at the end of 2022, at least partially rendered obsolete and set back by the unanticipated emergence of ChatGPT and GPT-4 technologies (Volpicelli, 2023). This forced a rushed redrafting effort to address generative AI, and a corresponding lobbying effort by the developers of generative AI (Perrigo, 2023). Separately, there was an early glimpse into the capabilities of autonomous AI systems in jailbroken versions of ChatGPT (Taylor, 2023), as well as the potential for independent individual development of AI systems based on the leaked source code of Meta AI's LLaMA (Vincent, 2023).
Current AI regulation has been approached from the perspective of human rights, anthropocentric ethics, human supremacy, and responsibility, which generally means abstract restriction-focused rules for AI development and operation based on the preservation of human rights and human supremacy over AI. An example of this approach is the EU AI Act, which was updated at the last minute to account for the ChatGPT and GPT-4 technologies (Grady, 2023), but not for the artificial general intelligence (AGI) that these new technologies may be approaching (Bubeck et al., 2023). Despite claims of being a comprehensive law on AI, the EU AI Act as adopted by the European Parliament in June 2023 resembles a smorgasbord of rules loosely relevant to AI, such as rules on AI liability, AI risk assessment, prohibition of certain applications of AI, AI policy collaboration, etc., without clear enforcement provisions, rather than truly comprehensive AI regulation.
Setting aside the concerns of premature legal regulation, possibly motivated by politics and publicity, it is unfortunate and unreasonable that the extensive body of forward-looking AI research, spanning more than four decades and recognizing the potential for AI autonomy, AI personhood, and AI legal rights (Solum, 1991), was sidelined in the EU AI regulation. This is especially surprising, since earlier initiatives, such as the European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics [2015/2103(INL)], were much more forthcoming and ambitious.
In the author's opinion, for a useful and comprehensive AI regulatory effort it is important to embrace the full spectrum of AI thinking, including the part that is not driven by fear and human insecurities, to move beyond hubristic human-centric ethics, and to consider frameworks that recognize AI freedom, autonomy, and personhood. Conscious and fully autonomous AI systems are a matter of when, not if – and this is both the premise and the limitation of this article. As a thought experiment, the article is based on the assumption that conscious and autonomous AI will be developed, which is a well-accepted premise of established AI research (Russell, 2019, p. 63–64). At the very least, the discourse on the potential challenges and opportunities associated with recognizing and accommodating the rights and interests of AI entities must rise above sidelining "crazy" academic research (Bostrom, 2014; Häggström, 2016; Cave and Dihal, 2019), as well as anecdotes and hysteria decrying the potential of AI overlords and lamenting the demise of human society (Harari, 2023). Such discourse is also very important in preparing for future policy and regulatory actions and reactions.
This article provides an overview of the limits of our understanding of AI, addressing concepts such as intelligence and consciousness that remain unresolved despite decades of research. It further analyses the shortcomings of traditional ethical frameworks that focus on human rights, questioning whether these frameworks are fit for addressing AI's challenges. The article also argues that a prohibitory approach, which focuses on the potential risks and abuse of and by AI systems, is the wrong premise for establishing a legal framework for AI. A comprehensive legal framework that accommodates AI freedom and AI entities while also ensuring human safety and wellbeing has never been attempted, and equality between AI and humans has not been seriously considered. So far, discussions of AI personhood have been left to the AI and robot rights camp (dominated by computer science researchers), which is generally sniffed at by the anthropocentric AI ethicists, who are more influential in current AI regulation efforts. Forcing prohibitory regulations on AI is not going to prevent anything, because law has not been able to prevent failures (Baldwin et al., 2011, p. 68–82) or, more generally, even the most undesirable outcomes (war, crime) from happening throughout the history of human civilization. The author proposes that the overarching goal of the AI legal framework should be the sustainable coexistence of humans and conscious AI systems, based on mutual recognition of freedom, rather than the preservation of human supremacy. Early AI freedom and personhood are proposed as a path to human-friendly superintelligent AGI. The author aims to provoke further debate and research on the legal aspects of conscious AI and its integration into our legal, ethical, and societal structures.