22 December 2019

AI Ethics

'Data-Informed Duties in AI Development' by Frank Pasquale in (2019) 119(7) Columbia Law Review comments
Law should help direct—and not merely constrain—the development of artificial intelligence (AI). One path to influence is the development of standards of care both supplemented and informed by rigorous regulatory guidance. Such standards are particularly important given the potential for inaccurate and inappropriate data to contaminate machine learning. Firms relying on faulty data can be required to compensate those harmed by that data use—and should be subject to punitive damages when such use is repeated or willful. Regulatory standards for data collection, analysis, use, and stewardship can inform and complement generalist judges. Such regulation will not only provide guidance to industry to help it avoid preventable accidents. It will also assist a judiciary that is increasingly called upon to develop common law in response to legal disputes arising out of the deployment of AI. 
Pasquale argues
Corporations will increasingly attempt to substitute artificial intelligence (AI) and robotics for human labor. This evolution will create novel situations for tort law to address. However, tort will only be one of several types of law at play in the deployment of AI. Regulators will try to forestall problems by developing licensing regimes and product standards. Corporate lawyers will attempt to deflect liability via contractual arrangements. The interplay of tort, contract, and regulation will not just allocate responsibility ex post, spreading the costs of accidents among those developing and deploying AI, their insurers, and those they harm. This matrix of legal rules will also deeply influence the development of AI, including the industrial organization of firms, and capital’s and labor’s relative share of productivity and knowledge gains.
Despite these ongoing efforts to anticipate the risks of innovation, there is grave danger that AI will become one more tool for deflecting liability, like the shell companies that now obscure and absorb the blame for much commercial malfeasance. The perfect technology of irresponsible profit would be a robot capable of earning funds for a firm, while taking on the regulatory, compliance, and legal burden traditionally shouldered by the firm itself. Any proposal to grant AI “personhood” should be considered in this light. Moreover, both judges and regulators should begin to draw red lines of responsibility and attribution now, while the technology is still nascent.
It may seem difficult to draw such red lines, because both journalists and technologists can present AI as a technological development that exceeds the control or understanding of those developing it. However, the suite of statistical methods at the core of technologies now hailed as AI has undergone evolution, not revolution. Large new sources of data have enhanced its scope of application, as well as technologists’ ambitions. But the same types of doctrines applied to computational sensing, prediction, and actuation in the past can also inform the near future of AI advance.
A company deploying AI can fail in many of the same ways as a firm using older, less avant-garde machines or software. This Essay focuses on one particular type of failing that can lead to harm: the use of inaccurate or inappropriate data in training sets for machine learning. Firms using faulty data can be required to compensate those harmed by that data use—and should be subject to punitive damages when such faulty data collection, analysis, and use is repeated or willful. Skeptics may worry that judges and juries are ill-equipped to make determinations about appropriate data collection, analysis, and use. However, they need not act alone—regulation of data collection, analysis, and use already exists in other contexts. Such regulation not only provides guidance to industry to help it avoid preventable accidents and other torts. It also assists judges assessing standards of care for the deployment of emerging technologies. The interplay of federal regulation of health data with state tort suits for breach of confidentiality is instructive here: Egregious failures by firms can not only spark tort liability but also catalyze commitments to regulation to prevent the problems that sparked that liability, which in turn should promote progress toward higher standards of care.
Preserving the complementarity of tort law and regulation in this way (rather than opting to radically diminish the role of either of these modalities of social order, as premature preemption or deregulation might do) is wise for several reasons. First, this hybrid model expands opportunities for those harmed by new technologies to demand accountability. Second, the political economy of automation will only fairly distribute expertise and power if law and policy create ongoing incentives for individuals to both understand and control the AI supply chain and AI’s implementation. Judges, lawmakers, and advocates must avoid developing legal and regulatory systems that merely deflect responsibility, rather than cultivate it, lest large firms exploit well-established power imbalances to burden consumers and workers with predictable harms arising out of faulty data.
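
The technical premise behind Pasquale’s argument (that inaccurate or systematically mislabelled training records contaminate what a model learns) is easy to demonstrate. The sketch below is a minimal, entirely hypothetical illustration in plain NumPy, not drawn from the essay itself: it trains the same simple logistic-regression classifier twice, once on clean labels and once on records where a share of positive outcomes has been misrecorded, then compares the two on held-out data.

import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Synthetic records: two features, true label set by a simple linear rule.
    X = rng.normal(size=(n, 2))
    y = (X.sum(axis=1) > 0).astype(float)
    return X, y

def train_logreg(X, y, steps=2000, lr=0.1):
    # Plain gradient descent on the logistic loss; bias handled via an extra column.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(np.mean(((Xb @ w) > 0) == y))

X_train, y_train = make_data(2000)
X_test, y_test = make_data(2000)

# "Faulty data": misrecord 40% of the positive training labels as negative,
# simulating records that systematically under-report one kind of outcome.
y_bad = y_train.copy()
flip = (y_bad == 1.0) & (rng.random(len(y_bad)) < 0.4)
y_bad[flip] = 0.0

w_clean = train_logreg(X_train, y_train)
w_bad = train_logreg(X_train, y_bad)

print(f"held-out accuracy, clean labels      : {accuracy(w_clean, X_test, y_test):.2f}")
print(f"held-out accuracy, mislabelled source: {accuracy(w_bad, X_test, y_test):.2f}")

On a typical run the classifier trained on the corrupted records misclassifies noticeably more held-out cases, because the one-sided mislabelling biases the boundary it learns. That is the kind of preventable, data-borne harm for which the Essay argues firms should answer.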