06 May 2020

Operationalising AI ethics

'Decision Points in AI Governance: Three Case Studies Explore Efforts to Operationalize AI Principles' by Jessica Cussins Newman at the UC Berkeley Center for Long-Term Cybersecurity comments:
Since 2016, dozens of groups from industry, government, and civil society have published “artificial intelligence (AI) principles,” frameworks designed to establish goals for safety, accountability, and other priorities in support of the responsible advancement of AI. Yet while many AI stakeholders have a strong sense of “what” is needed, less attention has focused on “how” institutions can translate strong AI principles into practice. 
This paper provides an overview of efforts already under way to resolve the translational gap between principles and practice, ranging from tools and frameworks to standards and initiatives that can be applied at different stages of the AI development pipeline. The paper presents a typology and catalog of 35 recent efforts to implement AI principles, and explores three case studies in depth. Selected for their scope, scale, and novelty, these case studies can serve as a guide for other AI stakeholders — whether companies, communities, or national governments — facing decisions about how to operationalize AI principles. These decisions are critical because the actions AI stakeholders take now will determine whether AI is safely and responsibly developed and deployed around the world. 
Microsoft’s AI, Ethics and Effects in Engineering and Research (AETHER) Committee: 
This case study explores the development and function of Microsoft’s AETHER Committee, which has helped inform the company’s leaders on key decisions about facial recognition and other AI applications. Established to help align AI efforts with the company’s core values and principles, the AETHER Committee convenes employees from across the company into seven working groups tasked with addressing emerging questions related to the development and use of AI by Microsoft and its customers. 
The case study provides lessons about:
• How a major technology company is integrating its AI principles into company practices and policies, while providing a home to tackle questions related to bias and fairness, reliability and safety, and potential threats to human rights and other harms.
• Key drivers of success in developing an AI ethics committee, including buy-in and participation from executives and employees, integration into a broader company culture of responsible AI, and the creation of interdisciplinary working groups.
OpenAI’s Staged Release of GPT-2: 
Over the course of nine months in 2019, OpenAI, a San Francisco-based AI research laboratory, released a powerful AI language model in stages — rather than all at once, the industry norm — in part to identify and address potential societal and policy implications. The company’s researchers chose this “staged release” model because they were concerned that GPT-2 — an AI model capable of generating long-form text from any prompt — could be used maliciously to generate misleading news articles, impersonate others online, or automate the production of abusive and phishing content. 
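To make the staged-release timeline concrete: the four GPT-2 checkpoints OpenAI published over 2019 are all publicly available today, so the release sequence can be reproduced programmatically. The sketch below is a minimal illustration using the Hugging Face transformers library (a tooling assumption on our part; the report does not prescribe any library), with the public checkpoint names and approximate parameter counts as comments.

```python
# A minimal sketch (not from the report) of loading the GPT-2 checkpoints
# that OpenAI released in stages over 2019, via the Hugging Face
# `transformers` library. Checkpoint names are the public Hub identifiers;
# the library choice is an assumption made for illustration.
from transformers import pipeline

# The four checkpoints, in the order OpenAI released them during 2019.
STAGED_CHECKPOINTS = [
    "gpt2",         # ~124M parameters (February 2019)
    "gpt2-medium",  # ~355M parameters (May 2019)
    "gpt2-large",   # ~774M parameters (August 2019)
    "gpt2-xl",      # ~1.5B parameters (November 2019)
]

# GPT-2 generates long-form text from an arbitrary prompt; here we use the
# smallest, first-released checkpoint.
generator = pipeline("text-generation", model=STAGED_CHECKPOINTS[0])
result = generator(
    "AI principles can be put into practice by",
    max_length=80,            # cap on the total token length of the output
    num_return_sequences=1,   # ask for a single completion
)
print(result[0]["generated_text"])
```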
The case study provides lessons about:
• Debates around responsible publication norms for advanced AI technologies.
• How institutions can use threat modeling and documentation schemes to promote transparency about potential risks associated with their AI systems (see the sketch below).
• How AI research teams can establish and maintain open communication with users to identify and mitigate harms.
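One widely used documentation scheme of the kind mentioned above is the model card; OpenAI published one alongside GPT-2. The sketch below is a hypothetical, minimal model-card structure for recording this sort of risk information. All field names and values are illustrative assumptions, not OpenAI’s or the report’s actual schema.

```python
# A hypothetical, minimal "model card" data structure for documenting the
# potential risks of an AI system. Field names and example values are
# illustrative only; they are not OpenAI's or the report's schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_uses: list[str] = field(default_factory=list)
    misuse_risks: list[str] = field(default_factory=list)      # outputs of threat modeling
    known_limitations: list[str] = field(default_factory=list)
    feedback_contact: str = ""  # open channel for users to report observed harms

card = ModelCard(
    name="gpt2",
    intended_uses=["writing assistance", "research on generative language models"],
    misuse_risks=[
        "generating misleading news articles",
        "impersonating others online",
        "automating abusive or phishing content",
    ],
    known_limitations=["may produce biased or factually incorrect text"],
    feedback_contact="ai-feedback@example.org",  # placeholder address
)
print(card)
```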
The OECD AI Policy Observatory: 
In May 2019, 42 countries adopted the Organisation for Economic Co-operation and Development (OECD) AI Principles, a legal instrument comprising five values-based principles and five recommendations for the responsible use of AI. To ensure the successful implementation of the Principles, the OECD launched the AI Policy Observatory in February 2020. The Observatory publishes practical guidance about how to implement the AI Principles, supports a live database of AI policies and initiatives globally, compiles metrics and measurements of global AI development, and uses its convening power to bring together the private sector, governments, academia, and civil society. 
The case study provides lessons about:
• How an intergovernmental initiative can facilitate international coordination in implementing AI principles, providing a potential counterpoint to “AI nationalism.”
• The importance of having several governments willing to champion the initiative over numerous years; convening multistakeholder expert groups to shape and drive the agenda; and investing in significant outreach efforts to global partners and allies.
The question of how to operationalize AI principles marks a critical juncture for AI stakeholders across sectors. Getting this right at an early stage is important because technological, organizational, and regulatory lock-in effects are likely to make initial efforts especially influential. The case studies detailed in this report provide analysis of recent, consequential initiatives intended to translate AI principles into practice. Each case provides a meaningful example with lessons for other stakeholders hoping to develop and deploy trustworthy AI technologies.
The authors state:
 Research and development in artificial intelligence (AI) have led to significant advances in natural language processing, image classification and generation, machine translation, and other domains. Interest in the AI field has increased substantially, with 300% growth in the volume of peer-reviewed AI papers published worldwide between 1998 and 2018, and over 48% average annual growth in global investment for AI startups. These advances have led to remarkable scientific achievements and applications, including greater accuracy in cancer screening and more effective disaster relief efforts. At the same time, growing awareness of the significant safety, ethical, and societal challenges stemming from the advancement of AI has generated enthusiasm and urgency for establishing new frameworks for responsible governance. 
The emerging “field” of AI governance — interconnected with such fields as privacy and data governance — has moved through several stages over the past four years. The first stage, which began most notably in 2016, has been characterized by the emergence of AI principles and strategies enumerated in documents published by governments, firms, and civil-society organizations to clarify specific intentions, desires, and values for the safe and beneficial development of AI. Much of the AI governance landscape thus far has taken the form of these principles and strategy documents, at least 84 of which were in existence as of September 2019. 
The second stage, which initially gained traction in 2018, was characterized by the emergence of efforts to map this proliferation of AI principles and national strategies to identify divergences and commonalities, and to highlight opportunities for international and multistakeholder collaboration. These efforts have revealed growing consensus around a number of central themes, including privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. 
The third stage, which largely began in 2019, has been characterized by the development of tools and initiatives to transform AI principles into practice. While the first two stages helped shape an international AI “normative core,” there has been less consensus about how to achieve the goals defined in the principles. Much of the debate about AI governance has focused on “what” is needed, as laid out in the principles and guidelines, but there has been less focus on the “how”: the practices and policies needed to implement established goals. This paper argues that the question of how to operationalize AI principles and strategies is one of the key decision points AI stakeholders face today, and offers examples that may help them navigate the challenging decisions ahead.