'Licensing high-risk artificial intelligence: Toward ex ante justification for a disruptive technology' by Gianclaudio Malgieri and Frank Pasquale in (2024) Computer Law and Security Review comments
The regulation of artificial intelligence (AI) has heavily relied on ex post, reactive tools. This approach has proven inadequate, as numerous foreseeable problems arising out of commercial development and applications of AI have harmed vulnerable persons and communities, with few (and sometimes no) opportunities for recourse. Worse problems are highly likely in the future. By requiring quality control measures before AI is deployed, an ex ante approach would often mitigate and sometimes entirely prevent injuries that AI causes or contributes to. Licensing is an important tool of ex ante regulation, and should be applied in many high-risk domains of AI. Indeed, policymakers and even some leading AI developers and vendors are calling for licensure in the area.
To substantiate licensing proposals, this article specifies optimal terms of licensure for AI necessary to justify its use. Given both documented and potential harms arising out of high-risk AI systems, licensing agencies should require firms to demonstrate that their AI meets clear requirements for security, non-discrimination, accuracy, appropriateness, and correctability before being deployed. Under this ex ante model of regulation, AI developers would bear the burden of proof to demonstrate that their technology is not discriminatory, not manipulative, not unfair, not inaccurate, and not illegitimate in its lawful bases and purposes. While the European Union's General Data Protection Regulation (GDPR) can provide key benchmarks here for ex post regulation, the proposed AI Act (AIA) offers a first regulatory attempt towards an ex ante licensure regime in high-risk areas, but it should be strengthened through an expansion of its scope and substantive content and through greater transparency of the ex ante justification process.
Regulating AI is difficult. Complex technology, under-resourced regulators, substantial economic consequences, and high risks for fundamental rights all contribute to this difficulty. Thanks to the well-recognized “black box” problem, identifiable AI abuses are only the tip of an iceberg of problems.1 AI systems can be opaque, nonlinear, and unpredictable, and they evolve rapidly. This makes it difficult to keep ex post, reactive regulations up to date with the latest technological advances. Years-long litigation will also often fail to set relevant precedents and standards before major damage occurs. Meanwhile, many AI developers either lack legal expertise or ignore potential legal problems, and they often have vastly more resources than the authorities supposedly monitoring and regulating them.
These asymmetries cause many problems, pressuring governments to prioritize innovation (however destructive its effects) at the cost of fundamental societal values. Since jobs and growth are often far easier to quantify than, say, the negative effects of discrimination or disinformation (amongst the many harms unregulated AI can cause), inadequate regulations and enforcement are endemic to the field. In addition, AI regulatory frameworks cannot ensure meaningful accountability of AI providers if they provide only for small fines in cases of AI misuse. A small dent in profits is not enough to deter bad behaviour; rather, it is treated as a cost of doing business. This can incentivize companies to take risks with their AI systems and prioritize profits over safety and ethical considerations. This would be understandable if AI were only a concern of a small number of scientists and laboratories. But it is now evident that the use of AI in business, policing, administration, and beyond poses high risks to fundamental rights, such as privacy and equality, and can perpetuate and even amplify biases and discrimination, with significant impacts on individuals in situations of vulnerability.
This paper will criticise policymakers’ over-reliance on ex post legal measures, including fines and penalties, and will advocate for AI licensure, taking inspiration jointly from the European Union's General Data Protection Regulation and the proposed AI Act, but going well beyond these approaches. Ex post measures alone might not prevent harm from occurring in the first place. A more proactive, ex ante approach, which would require companies to meet certain safety and ethical standards before deploying AI systems, would be more effective in preventing harm and ensuring accountability. While the GDPR contains essential principles for AI justification (including fairness and purpose limitation), it largely reflects an ex post approach, since it does not require prior administrative authorisation for high-risk data processing. The proposed AI Act, on the other hand, is based on an ex ante model (conformity assessment before commercialisation), but that model might prove limited in its scope (the rigid list of high-risk AI systems might not be adequate), substance (the proposed draft does not refer to, e.g., a fairness principle), and transparency (there is no duty to disclose the ex ante justification statement to the public).
A key regulatory tool for an ex ante regime is licensure. Under a licensing system, products, services, and activities are unlawful until the entity seeking to develop, sell, or use them has proven otherwise. High-risk AI's documented and potential harms make a strong case for a licensure regime here. Under our proposal, to obtain a license, a high-risk AI provider must certify that its AI system meets clear requirements for security, non-discrimination, accuracy, appropriateness, and correctability before it is commercialized. Such a standard may not seem administrable now, given the widespread and rapid use of AI at companies of all sizes. But such requirements could be applied, at first, to the largest companies’ most troubling practices, and then gradually to other applications of AI. Under such a regime, AI providers might, for example, be required to demonstrate basic practices of fairness, accuracy, and validity once an AI system they provide is in use by, or affects, over 1 million people. Since governments often charge fees for licenses, this system may also prove effective at providing much-needed resources to regulatory bodies now struggling to keep up with the AI revolution.
Our proposal builds on existing scholarship and regulatory proposals and practices. Scholars have argued that certain data practices should not be permitted; licensure would help ensure that they are indeed prohibited. Rather than expecting underfunded, understaffed regulators to overcome monumental black box problems after harm has been done, responsibility could be built into the structure of data-driven industries via licensure schemes that require certain standards to be met before large-scale data practices expand even further. Licensure should spur fundamental quality improvements in the realm of product-based and services-based AI, including automobiles, aircraft, logistics, smart infrastructures, financial and employment recommendations, and scoring. There is increasing concern about the validity of the data used in AI and the algorithms it is based on. Rather than addressing all these concerns in an ex post way via tort-based judicial actions or audits and litigation by regulators, the ex ante approach of licensure must be part of the regulatory armamentarium. There are some wrongs that can arise out of AI that are too serious to be recompensed ex post.
In addition, a solely ex post approach can create unnecessary risks for the fundamental rights of consumers and end-users. Suppose that, after a period of intensive use of an AI system (e.g., an app) by a massive number of consumers, regulators find that the AI-driven app violates the law. A possible sanction might be to block the app and prevent those people from continuing to use it. However, after such widespread use, people may have come to depend on the app psychologically, economically, or functionally. Such harms occurred after the Italian Data Protection Authority's ex post prohibitions of Replika and ChatGPT, when many users experienced emotional distress and other significant adverse effects. To be sure, the Italian moves here were warranted. Nevertheless, regulators’ ex post approach created a paradox: both keeping the AI system in use and prohibiting it risked either harming individuals or depriving them of its utility. By contrast, placing the burden of proof on AI providers to justify the fairness, safety, non-discrimination, and integrity of their systems ex ante would prevent such troubling double binds, and many other problems.
Beyond its value in preventing avoidable harms and double binds for regulators, a licensure regime for AI would also enable citizens to democratically shape technology's scope and proper use, rather than resigning themselves to forces beyond their control. To ground the case for more ex ante regulation, Part 2 catalogues the limitations of ex post approaches to the regulation of AI, while Part 3 examines the substantive foundation of licensure models by elaborating a jurisprudential conception of justification. Part 4 addresses the institutional dimensions of our licensure proposal and responds to objections. Part 5 concludes with reflections on the opportunities created by AI licensure frameworks and potential limitations upon them. This paper focuses mostly on EU law. When formulating its proposal, however, it draws a necessary comparison with other legal systems where models of ex ante prohibition and licensure are already a reality, or where the legal discussion can already offer important food for thought.