'Weapons of Mass Disruption: Artificial Intelligence and International Law' by Simon Chesterman in (2021) 10 Cambridge International Law Journal 181–203 comments
The answers each political community finds to the law reform questions posed by artificial intelligence (AI) may differ, but a near-term threat is that AI systems capable of causing harm will not be confined to one jurisdiction — indeed, it may be impossible to link them to a specific jurisdiction at all. This is not a new problem in cybersecurity, though differing national approaches will pose barriers to effective regulation, exacerbated by the speed, autonomy, and opacity of AI systems. For that reason, some measure of collective action is needed. Lessons may be learned from efforts to regulate the global commons, as well as moves to outlaw certain products (weapons and drugs, for example) and activities (such as slavery and child sex tourism). The argument advanced here is that regulation, in the sense of public control, requires the active involvement of states. To coordinate those activities and enforce global ‘red lines’, this paper posits a hypothetical International Artificial Intelligence Agency (IAIA), modelled on the agency created after the Second World War to promote peaceful uses of nuclear energy while deterring or containing its weaponization and other harmful effects.