The Artificial Intelligence Governance and Ethics: Global Perspectives report by Angela Daly, Thilo Hagendorff, Li Hui, Monique Mann, Vidushi Marda, Ben Wagner, Wei Wang and Saskia Witteborn comments
Artificial intelligence (AI) is a technology increasingly being utilised in society and the economy worldwide, and its implementation is set to become more prevalent in coming years. AI is increasingly embedded in our lives, supplementing our pervasive use of digital technologies. But this is being accompanied by disquiet over problematic and dangerous implementations of AI, or indeed over AI systems themselves taking dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice. These developments have led to concerns about whether, and how, AI systems currently adhere, and will in future adhere, to ethical standards. These concerns have stimulated a global conversation on AI ethics and have led various actors from different countries and sectors to issue ethics and governance initiatives and guidelines for AI. Such developments form the basis for our research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India and the US.
What is AI?
Artificial Intelligence (AI) is an emerging area of computer science. There are numerous definitions, and various terms are used interchangeably to describe ‘AI’ within the academic literature (and in popular discourse), including, for example: algorithmic profiling, automation, (supervised/unsupervised) machine learning, and deep neural networks.
In general terms, AI could be defined as technology that automatically detects patterns in data and makes predictions on the basis of those patterns. It is a method of inferential analysis that identifies correlations within datasets which can, in the case of profiling, be used as an indicator to classify a subject as a representative of a category or group (Hildebrandt 2008; Schreurs et al 2008). A broad distinction is made between ‘narrow’ and ‘general’ (or ‘broad’) AI. Narrow AI is an application designed to deal with one particular task, and reflects most currently existing applications of AI in daily life, while general or broad AI reflects human intelligence in its versatility to handle different or general tasks. In this report, when we discuss AI we refer to AI in its narrow form.
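To make this working definition concrete, the following is a minimal, purely illustrative sketch (not drawn from the report) of the kind of narrow, pattern-detecting system described above, assuming Python with the NumPy and scikit-learn libraries; the synthetic features, labels and values are invented for illustration only.

```python
# Minimal illustration of 'narrow AI' as pattern detection plus prediction.
# Purely hypothetical: the features, labels and data are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic 'profiling' dataset: two invented numeric attributes per subject
# and a binary category label correlated with those attributes.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model automatically infers a correlation between attributes and label...
model = LogisticRegression().fit(X_train, y_train)

# ...and then classifies previously unseen subjects into a category or group.
print("held-out accuracy:", model.score(X_test, y_test))
print("predicted category for a new subject:", model.predict([[0.2, -1.3]])[0])
```

Real-world profiling systems of this kind, trained on personal data rather than synthetic numbers, are precisely where the ethical concerns canvassed in this report arise.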
There are numerous applications of AI in a range of domains, perhaps contributing to this definitional complexity: for example, predictive analytics (such as recidivism prediction in criminal justice contexts, predictive policing, and forecasting risk in business and finance) and automated identification via facial recognition. Indeed, AI has been deployed in a range of contexts and social domains, with mixed outcomes, including insurance, finance, education, employment, marketing, governance, security, and policing (see e.g. O’Neil 2016; Ferguson 2017).
AI and Ethics
At this relatively early stage in AI’s development and implementation, the question has arisen of whether AI adheres to certain ethical principles (see e.g. Arkin 2009; Mason 2017), and the ability of existing laws to govern AI has emerged as a key factor in how future AI will be developed, deployed and implemented (see e.g. Leenes & Lucivero 2015; Calo 2015; Wachter et al. 2017a).
While originally confined to theoretical, technical and academic debates, the issue of governing AI has recently entered the mainstream, with both governments and private companies from major geopolitical powers, including the US, China, the European Union and India, formulating statements and policies regarding AI and ethics (see e.g. European Commission 2018; Pichai 2018).
A key issue here is precisely which ethical standards AI should adhere to. Furthermore, the transnational nature of digitised technologies, the key role of private corporations in AI development and implementation, and the globalised economy give rise to questions about which jurisdictions and actors will decide on the legal and ethical standards to which AI must adhere, and whether we may end up with a ‘might is right’ approach in which these large geopolitical players set the agenda for AI regulation and ethics for the whole world.
Further questions arise around the enforceability of ethics statements regarding AI, both in terms of whether they reflect existing fundamental legal principles and are legally enforceable in specific jurisdictions, and in terms of the extent to which their principles can be operationalised and integrated into AI systems and applications in practice.
What does ‘ethics’ mean in AI?
Ethics can be seen as the reflective theory of morality, or as the theory of the good life. A distinction can be made between fundamental ethics, which is concerned with abstract moral principles, and applied ethics (Höffe 2013). The latter includes the ethics of technology, which in turn contains AI ethics as a subcategory. Roughly speaking, AI ethics serves the self-reflection of the computer and engineering sciences engaged in the research and development of AI and machine learning. In this context, dynamics such as individual technology development projects, or the development of new technologies as a whole, can be analysed. Likewise, causal mechanisms and functions of certain technologies can be investigated using a more static analysis (Rahwan et al. 2019). Typical topics are self-driving cars, political manipulation by AI applications, autonomous weapon systems, facial recognition, algorithmic discrimination, conversational bots, social sorting by ranking algorithms, and many more (Hagendorff 2019).
Key demands of AI ethics relate to aspects such as the reflection of research goals and purposes, the direction of research funding, the linkage between science and politics, the security of AI systems, the responsibility links underlying the development and use of AI technologies, the inscription of values in technical artefacts, the orientation of the technology sector towards the common good, and much more (Future of Life Institute 2017).
Last but not least, AI ethics is also reflected within the framework of metaethics, in which questions about the effectiveness of normative demands are investigated. Ethical discourses can be held either in close proximity to their designated object or at a distance from it. The advantage of close proximity is that such discourses can have a concrete impact on the course of action in a particular organisation dealing with AI. The downside is that this kind of ethical reflection has to be quite narrow and pragmatic. Making more radical demands only makes sense when ethical discourses keep a certain distance from their designated object. Nevertheless, such distanced discourses are typically rather ineffective and have hardly any effect in practice.
Another dimension of AI ethics concerns the degree of its normativity. Here, ethics can oscillate between irritation and orientation. Irritation equals weak normativity: an abstinence from strong normative claims, in which ethics merely uncovers blind spots or describes hitherto underrepresented issues. Orientation, on the other hand, means strong normativity. The downside of making strong normative claims is that they provoke backfire or boomerang effects, meaning that people tend to react to perceived external constraints on their action with precisely the kind of behaviour they are supposed to refrain from.
Therefore, AI ethics must exhibit two traits in order to be effective. First, it should employ weak normativity and should not universally determine what is right and what is wrong. Second, AI ethics should seek close proximity to its designated object. This implies that ethics is understood as an inter- or transdisciplinary field of study that is directly linked to the adjacent computer sciences and industry organisations, and that is active within these fields.
This Report
In this Report we combine our interdisciplinary and international expertise as researchers working on AI policy, ethics and governance to give an overview of some of our countries’ and regions’ approaches to the topic of AI and ethics. We do not claim to present an exhaustive account of international approaches to this issue, but we aim to give a snapshot of how some countries and regions, especially ‘large’ ones like China, Europe, India and the United States, are, or are not, addressing the topic. We also include some initiatives at the national level within EU Member States (Germany, Austria and the United Kingdom) and initiatives in Australia, all of which can be considered ‘smaller’. The selection of these countries and regions has been driven by our own familiarity with them from prior experience.
We acknowledge the limitations of our approach: we do not have contributions regarding this issue from Africa, Latin America, the Middle East or Russia, nor contributions addressing Indigenous views of AI or approaches to AI and ethics informed by religious beliefs (see e.g. Cisse 2018; ELRC 2019; Indigenous AI n.d.). In future work we hope to be able to cover more countries and approaches to AI ethics.
We have specifically looked to government, corporate and some other initiatives which frame and situate themselves in the realm of ‘AI governance’ or ‘AI ethics’. We acknowledge that other initiatives, such as those concerning ‘big data’ and the ‘Internet of Things’, may also be relevant to AI governance and ethics, but, with a few exceptions, these are beyond the scope of this report. Further work should be done on ‘connecting the dots’ between such predecessor digital technology governance initiatives and the current drive for AI ethics and governance.
The fast-moving nature of this topic and field is our reason for publishing this report in its current form. We hope the report is useful and illuminating for readers.
'Are Autonomous Entities Possible?' by Shawn Bayern in (2019) 114 Northwestern University Law Review Online 23 comments
Over the last few years, I have demonstrated how modern business-entity statutes, particularly LLC statutes, can give software the basic capabilities of legal personhood, such as the ability to enter contracts or own property. Not surprisingly, this idea has been met with some resistance. This Essay responds to one kind of descriptive objection to my arguments: that courts will find some way to prevent the results I describe either because my reading of the business-entity statutes would take us too far outside our legal experience, or because courts will be afraid that robots will take over the world, or because law is meant to promote human (versus nonhuman) rights. As I demonstrate in this essay, such objections are not correct as a descriptive matter. These arguments make moral and policy assumptions that are probably incorrect, face intractable line-drawing problems, and dramatically overestimate the ease of challenging statutorily valid business structures. Business-entity law has always accommodated change, and the extensions to conventional law that I have identified are not as radical as they seem. Moreover, the transactional techniques I advocate for would likely just need to succeed in one jurisdiction, and regardless, there are many alternative techniques that, practically speaking, would achieve the same results.
'First Steps Towards an Ethics of Robots and Artificial Intelligence' by John Tasioulas in (2019) 7(1) Journal of Practical Ethics comments
This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognise that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.