Courts have started to recognize standing to sue for those on the government’s No Fly List, which bars listed individuals from flying. This salutary step, however, leaves untouched the complex watch list infrastructure on which the No Fly List is built and whose flaws it inherits. Lower-profile watch lists have less determinate consequences for listed individuals than the No Fly List does. But, this article argues, they exact substantial costs.
This article first explains why the incentive structure of terrorist watch lists encourages government agencies to list more people than necessary and not to check their work. It then demonstrates how a misguided understanding of the relationship between false positives and false negatives obscures the effects of these perverse incentives. Those effects, the article shows, extend beyond individuals listed on a watch list to include government agents and agencies, public policy, and society at large. Yet, as I explain, neither the current statutory regime nor judicial doctrine can address these broad negative effects; even scholarship largely misses the point. To remedy this situation, this article proposes ways to build self-assessment and improvement — in the form of Bayesian updating — into the watch list process. More broadly, the article contributes to attempts to analyze and constrain the government’s use of big data.
The No Fly List, which is used to block suspected terrorists from flying, has been in use for years. But the government still appears “stymied” by the “relatively straightforward question” of what people who “believe they have been wrongly included on” that list should do. In recent months, courts have haltingly started to provide their own answer, giving some individuals standing to sue to remove their names or receive additional process. This step is particularly important as the No Fly List continues its breathtaking growth. It is unclear, however, how a court will evaluate that additional process when the listing criteria are both secret and untested. This doctrinal development poses a challenge not only to the No Fly List, but also to the complex watch list infrastructure on which it is built.
The No Fly List draws on a consolidated terrorist watch list that compiles numerous other lists maintained by a number of federal agencies. Agencies compiling their lists receive information not only from their own agents but from state governments, foreign nations, and private individuals. The No Fly List is well known because it has visible effects like impinging on rights to travel. Indeed, it is precisely such effects that have led courts to recognize standing to challenge them. But the No Fly List’s flaws are inherited from the lists it uses. They, in turn, remain largely unregulated, unappealable, and obscured from public attention.
Commentators have argued that such watch lists raise problems for privacy and due process rights. This Article broadens the frame, moving beyond individual rights to the broader effects that watch lists have on the agents and agencies who run them, the government that commissions them, and the society that houses them. It also explains why agencies currently lack the incentives to address these problems themselves. Because current law fails to rein watch lists in, they require external constraint. Focusing on watch lists’ peculiar epistemological and social structure, this Article identifies the key aspects of watch list creation that require regulation. And it draws on developments in regulatory theory to ground its proposals for reform.
This Article starts with the question of why watch lists require more constraint to begin with. Legal constraints, after all, usually exist to make people do things they would not otherwise do. And at first glance, there seems to be every reason to think that government agencies want to make their watch lists work. If that is the case, we can assume that agencies will try their hardest to create the best and most useful watch lists possible. We would not need to tell them how, or to force them to take some particular route to getting there.
As I contend in Part I, however, the incentive structures surrounding terrorist watch lists push agents and agencies to exaggerate dangers, putting names on watch lists that do not belong there. These false positives might be more acceptable if they made watch lists more comprehensive, reducing the likelihood that a watch list would miss someone who ought to be listed — a false negative. But, as Part I also shows, watch lists’ perverse incentives lead agents and agencies to misconstrue the relationship between false positives and false negatives. These perverse effects endanger the very national security that watch lists are meant to safeguard by discouraging the kind of self-correction that would make watch lists more effective.
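The relationship between false positives and false negatives can be made concrete with base-rate arithmetic. The following sketch uses purely hypothetical numbers (the population size, prevalence of actual threats, sensitivity, and false positive rate are all invented for illustration, not drawn from any real watch list) to show why, when genuine threats are rare, even an apparently small false positive rate means that nearly everyone listed is a false positive:

```python
# Hypothetical illustration of the base-rate problem behind watch list
# false positives. All numbers below are invented for illustration only.

population = 300_000_000        # people potentially screened
prevalence = 1e-6               # assumed fraction who are actual threats
sensitivity = 0.99              # assumed P(listed | actual threat)
false_positive_rate = 0.001     # assumed P(listed | not a threat)

threats = population * prevalence
non_threats = population - threats

true_positives = threats * sensitivity
false_positives = non_threats * false_positive_rate

# By Bayes' rule, the share of listed individuals who are actual threats:
precision = true_positives / (true_positives + false_positives)

print(f"Listed individuals: {true_positives + false_positives:,.0f}")
print(f"Share of listed individuals who are actual threats: {precision:.2%}")
```

Under these assumed figures, roughly 300,000 people end up listed, yet well under one percent of them are actual threats. The point is not the particular numbers but the structure: adding names does not trade off one-for-one against missed threats, because the two error rates operate on populations of vastly different sizes.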
Part II explains the structure of contemporary terrorist watch lists, showing how information and knowledge are produced in the watch list context. Contemporary watch lists use the techniques of “big data” to collect information and distribute the work of evaluation and prediction over many participants. However, they largely eschew the self-assessment techniques that make the use of big data reliably useful. Their distributed knowledge production can help watch lists smooth over the peculiarities of individual agents. But it can also exacerbate judgment problems by stacking peculiarity upon peculiarity and giving the result a veneer of objective truth. Explaining how judgment is incorporated in watch lists elucidates the errors they are prone to and helps clarify why a conflicted incentive structure leads to a high false positive rate.
A high rate of false positives might still be acceptable if there were no cost associated with them. And because of their objective veneer, watch lists can seem like a costless, neutral backdrop of impartial information about the world. It seems as though they have no effects on the world themselves. Part III argues that this neutral view is wrong. As scholars concerned with individual rights have recognized, unregulated, error-prone watch lists affect the people listed on them in powerful ways. But watch lists also affect the agents and agencies that maintain them, lowering their efficacy and acumen by failing to provide reality checks for their judgments. Further, watch lists skew public policy by making terrorism appear to be a more imminent and severe threat than it is, which leads resources to be diverted from other programs into terrorism-related ones. And to the broader public, watch lists present a world populated by terrorist threats that can often be recognized with blunt categories like ethnicity and religion. That affects how people act in their society and what they see as its most urgent problems. Watch lists, in other words, are far from costless. They go beyond affecting individual rights to affect government functioning and social structure.

Yet, as Part IV claims, the legal strictures that currently regulate database use miss the point. They focus on informational accuracy, not predictive efficacy. I suggest that this lacuna rests on an outdated understanding of contemporary databases as mere repositories for independently existing information, not the sites of judgment production and prediction they actually are.
Traditionally, government judgment has been subject to legal constraint that can be reviewed in court. The watch list context, as I show, complicates this approach by introducing secret algorithms of prediction that result in little that is cognizable in court. This limitation, I contend, should not dissuade us from analyzing and constraining watch lists. The absence of judicial review cannot obviate the need for scrutiny and constraint of government action in a democratic society. Rather, as recent scholarship has suggested, we must look to institutional design and internal self-regulation to solve those problems that cannot reach the courts.
Part V proposes regulating watch lists by focusing on the increased efficacy that comes with increased constraint. My suggestions build on a growing call for government to assess, and not only project, the effects of its actions. And they stake a claim for Bayesian updating at the center of administrative self-regulation — the kind of regulation that increasingly looks to be the main way of controlling the administrative state.
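The kind of Bayesian updating Part V has in mind can be sketched with a minimal model. The following example is my own illustrative construction, not a proposal drawn from any agency's actual practice: it treats an agency's belief about a listing criterion's hit rate as a Beta distribution and updates that belief as audits confirm which listings were genuine threats and which were not. All audit figures are hypothetical:

```python
# Minimal sketch of Bayesian updating for watch list self-assessment,
# using a Beta-Binomial model. All numbers are hypothetical.

def update_beta(alpha: float, beta: float, hits: int, misses: int):
    """Update a Beta(alpha, beta) belief about a listing criterion's
    hit rate after an audit confirms `hits` genuine threats and
    `misses` wrongly listed individuals."""
    return alpha + hits, beta + misses

# Prior belief: Beta(1, 1), i.e., no information about the criterion.
alpha, beta = 1.0, 1.0

# Hypothetical audit: of 100 listings produced by one criterion,
# 2 are confirmed threats and 98 are confirmed false positives.
alpha, beta = update_beta(alpha, beta, hits=2, misses=98)

posterior_mean = alpha / (alpha + beta)   # estimated hit rate
print(f"Estimated hit rate after audit: {posterior_mean:.3f}")
```

The substantive point is institutional rather than mathematical: updating of this kind is only possible if agencies record outcomes and audit their own listings, which is precisely the self-assessment that current incentives discourage.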
Finally, the Conclusion examines the limitations of my proposals and explains why any solution to the watch list problem will always be partial. It further discusses how similar concerns, and a similar approach, will apply to other government databases used to make predictions about future human conduct, when their incentive structures are similarly conflicted.