Artificial agents – from autonomous cars and weapon systems to social bots, from profiling and tracking programmes to risk assessment software predicting criminal recidivism or voting behaviour – challenge general principles of national and international law. This article addresses three of these principles: responsibility, explainability, and autonomy. Responsibility requires that actors be held accountable for their actions, including damages and breaches of law. Responsibility for actions and decisions taken by artificial agents can be secured by resorting to strict or objective liability schemes, which do not require human fault or other human factors, or by relocating human fault, i.e. by holding programmers, supervisors, or standard setters accountable. “Explainability” refers to the requirement that, even if artificial agents produce useful and reliable results, it must be possible to explain how these results are generated. Lawyers have to define those areas of law that require an explanation of artificial agents’ activities, ranging from human rights interferences to, possibly, any form of automated decision-making that affects an individual. Finally, the many uses of artificial agents also raise questions regarding several aspects of autonomy, including privacy and data protection, individuality, and freedom from manipulation. Yet artificial agents do not only challenge existing principles of law; they can also strengthen responsibility, explainability, and autonomy.
10 February 2018
AI Rights?
'Artificial Agents and General Principles of Law' by Antje von Ungern-Sternberg, German Yearbook of International Law (Forthcoming)