11 October 2018

AI and Human Rights

Governing Artificial Intelligence: Upholding Human Rights and Dignity, by Mark Latonero
Can international human rights help guide and govern artificial intelligence (AI)? Currently, much of society is uncertain about the real human impacts of AI systems. Amid hopes that AI can bring forth “global good,” there is evidence that some AI systems are already violating fundamental rights and freedoms. As stakeholders look for a North Star to guide AI development, we can rely on human rights to help chart the course ahead. International human rights are a powerful tool for identifying, preventing, and redressing an important class of risks and harms. A human rights-based frame could provide those developing AI with the aspirational, normative, and legal guidance to uphold human dignity and the inherent worth of every individual, regardless of country or jurisdiction. Simply put:
In order for AI to benefit the common good, at the very least its design and deployment should avoid harms to fundamental human values. International human rights provide a robust and global formulation of those values. 
This report is intended as a resource for anyone working in the field of AI and governance. It is also intended for those in the human rights field, outlining why they should be concerned about the present-day impacts of AI. What follows translates between these fields by reframing the societal impact of AI systems through the lens of human rights. As a starting point, we focus on five initial examples of human rights areas – nondiscrimination, equality, political participation, privacy, and freedom of expression – and demonstrate how each one is implicated in a number of recent controversies generated by AI systems. Despite these well-publicized examples of rights harms, some progress is already underway. Anticipating negative impacts on persons with disabilities, for example, can lead designers to build AI systems that protect and promote their rights.
This primer provides a snapshot of stakeholder engagement at the intersection of AI and human rights. While some companies in the private sector have scrambled to react in the face of criticism, others are proactively assessing the human rights impact of their AI products. In addition, the sectors of government, intergovernmental organizations, civil society, and academia have had their own nascent developments. There may be some momentum for adopting a human rights approach to AI among large tech companies and civil society organizations. To date, there are only a few, albeit significant, examples at the United Nations (UN), in government, and in academia that bring human rights to the center of AI governance debates.
Human rights cannot address all the present and unforeseen concerns pertaining to AI. Near-term work in this area should focus on how a human rights approach could be practically implemented through policy, practice, and organizational change. Further to this goal, this report offers some initial recommendations:
• Technology companies should find effective channels of communication with local civil society groups and researchers, particularly in geographic areas where human rights concerns are high, in order to identify and respond to risks related to AI deployments. 
• Technology companies and researchers should conduct Human Rights Impact Assessments (HRIAs) throughout the life cycle of their AI systems. Researchers should reevaluate HRIA methodology for AI, particularly in light of new developments in algorithmic impact assessments. Toolkits should be developed to assess specific industry needs.
• Governments should acknowledge their human rights obligations and incorporate a duty to protect fundamental rights in national AI policies, guidelines, and possible regulations. Governments can play a more active role in multilateral institutions, such as the UN, to advocate for AI development that respects human rights.
• Since human rights principles were not written as technical specifications, human rights lawyers, policy makers, social scientists, computer scientists, and engineers should work together to operationalize human rights into business models, workflows, and product design. 
• Academics should further examine the value and limitations of human rights law, and its interactions with human dignity approaches, humanitarian law, and ethics, in relation to emerging AI technologies. Human rights and legal scholars should work with other stakeholders on the tradeoffs between rights when faced with specific AI risks and harms. Social science researchers should empirically investigate the on-the-ground impact of AI on human rights.
• UN human rights investigators and special rapporteurs should continue researching and publicizing the human rights impacts resulting from AI systems. UN officials and participating governments should evaluate whether existing UN mechanisms for international rights monitoring, accountability, and redress are adequate to respond to AI and other rapidly emerging technologies. UN leadership should also assume a central role in international technology debates by promoting shared global values based on fundamental rights and human dignity.