'Biometric Cyberintelligence and the Posse Comitatus Act' (Washington and Lee Legal Studies Paper No. 2016-14) by Margaret Hu is described as addressing
the rapid growth of what the military and intelligence community refer to as “biometric-enabled intelligence.” This newly emerging intelligence system is reliant upon biometric databases — for example, digitalized collections of scanned fingerprints and irises, digital photographs for facial recognition technology, and DNA. This Article introduces the term “biometric cyberintelligence” to describe more accurately the manner in which this new tool is dependent upon cybersurveillance and big data’s mass-integrative systems.
To better understand the legal implications of biometric cyberintelligence, this Article advances three primary claims. First, it argues that the technological and programmatic architecture of biometric cyberintelligence can be embedded within the data collection and data analysis protocols of civilian governance and domestic law enforcement activities. Next, to demonstrate the potential lethality of this emerging technological and policy development, this Article illustrates how biometric data may be increasingly integrated into drone weaponry, including targeted killing and drone strike technologies. Finally, this Article argues that the Posse Comitatus Act of 1878, designed to limit the deployment of federal military resources in the service of domestic policies, may be impotent in light of the growth of cybersurveillance.
Maintaining strict separation of data between military and intelligence operations on the one hand, and civilian, homeland security, and domestic law enforcement agencies on the other hand, is increasingly difficult as cooperative data sharing increases. The Posse Comitatus Act and constitutional protections such as the Fourth Amendment’s privacy jurisprudence, therefore, must be reinforced in the digital age in order to appropriately protect citizens from militarized cyberpolicing, i.e., the blending of military/foreign intelligence tools and operations and homeland security/domestic law enforcement tools and operations. The Article concludes that, as of yet, neither statutory nor constitutional protections have evolved sufficiently to cover the unprecedented surveillance harms posed by the migration of biometric cyberintelligence from foreign to domestic use.
The Perpetual Line-Up: Unregulated Police Face Recognition in America comments
There is a knock on your door. It’s the police. There was a robbery in your neighborhood. They have a suspect in custody and an eyewitness. But they need your help: Will you come down to the station to stand in the line-up?
Most people would probably answer “no.” This summer, the Government Accountability Office revealed that close to 64 million Americans do not have a say in the matter: 16 states let the FBI use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm.
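Mechanically, the "virtual line-up" the report describes is a one-to-many similarity search: the probe face is compared against every enrolled photo and the closest candidates are returned. Below is a minimal sketch in Python, assuming faces have already been reduced to fixed-length embedding vectors; the actual FBI and state systems are proprietary, and every name here is illustrative only.

```python
import numpy as np

def virtual_lineup(probe, gallery, ids, k=5):
    """Rank enrolled face embeddings against a probe embedding.

    probe   -- (d,) embedding of the unknown face
    gallery -- (n, d) embeddings of enrolled photos (e.g., license photos)
    ids     -- n identifiers, one per gallery row
    Returns the k most similar identities with cosine-similarity scores.
    """
    probe_n = probe / np.linalg.norm(probe)
    gallery_n = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery_n @ probe_n            # cosine similarity per gallery row
    top = np.argsort(scores)[::-1][:k]      # highest-scoring candidates first
    # Note: the top candidates are returned even when the true match is not
    # enrolled at all, which is why candidate lists need human review.
    return [(ids[i], float(scores[i])) for i in top]
```

The essential point the sketch makes concrete: the system always "points to" somebody, whether or not the person who committed the crime is in the database.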
But the FBI is only part of the story. Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems. And we don’t know how any of these systems—local, state, or federal—affect racial and ethnic minorities.
This report closes these gaps. The result of a year-long investigation and over 100 records requests to police departments around the country, it is the most comprehensive survey to date of law enforcement face recognition and the risks that it poses to privacy, civil liberties, and civil rights. Combining FBI data with new information we obtained about state and local systems, we find that law enforcement face recognition affects over 117 million American adults. It is also unregulated. A few agencies have instituted meaningful protections to prevent the misuse of the technology. In many more cases, it is out of control.
One in two American adults is in a law enforcement face recognition network.
The benefits of face recognition are real. It has been used to catch violent criminals and fugitives. The law enforcement officers who use the technology are men and women of good faith. They do not want to invade our privacy or create a police state. They are simply using every tool available to protect the people that they are sworn to serve. Police use of face recognition is inevitable. This report does not aim to stop it.
Rather, this report offers a framework to reason through the very real risks that face recognition creates. It urges Congress and state legislatures to address these risks through commonsense regulation comparable to the Wiretap Act. These reforms must be accompanied by key actions by law enforcement, the National Institute of Standards and Technology (NIST), face recognition companies, and community leaders.
The Report's key findings are as follows, with specific findings for 25 local and state law enforcement agencies in City and State Backgrounders (p. 121). A Face Recognition Scorecard (p. 24) evaluates these agencies’ impact on privacy, civil liberties, civil rights, transparency and accountability. The records underlying all of our conclusions are available online.
1. Law enforcement face recognition networks include over 117 million American adults—and may soon include many more. Face recognition is neither new nor rare. FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos. Roughly one in two American adults has their photos searched this way.
2. Different uses of face recognition create different risks. This report offers a framework to tell them apart. A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible. While some agencies, like the San Diego Association of Governments, limit themselves to more targeted use of the technology, others are embracing high and very high risk deployments.
3. By tapping into driver’s license databases, the FBI is using biometrics in a way it’s never done before. Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic.
4. Major police departments are exploring real-time face recognition on live surveillance camera video. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. It may seem like science fiction. It is real. Contract documents and agency statements show that at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras, bought technology that can do so, or expressed an interest in buying it. Nearly all major face recognition companies offer real-time software.
5. Law enforcement face recognition is unregulated and in many instances out of control. No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences. The Maricopa County Sheriff’s Office enrolled all of Honduras’ driver’s licenses and mug shots into its database. The Pinellas County Sheriff’s Office system runs 8,000 monthly searches on the faces of seven million Florida drivers—without requiring that officers have even a reasonable suspicion before running a search. The county public defender reports that the Sheriff’s Office has never disclosed the use of face recognition in Brady evidence.
6. Most law enforcement agencies are not taking adequate steps to protect free speech. There is a real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech.
7. Most law enforcement agencies do little to ensure that their systems are accurate. Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing. One major face recognition company, FaceFirst, publicly advertises a 95% accuracy rate but disclaims liability for failing to meet that threshold in contracts with the San Diego Association of Governments. Unfortunately, independent accuracy tests are voluntary and infrequent.
8. The human backstop to accuracy is non-standardized and overstated. Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time. We found only eight face recognition systems where specialized personnel reviewed and narrowed down potential matches. The training regime for examiners remains a work in progress.
9. Police face recognition will disproportionately affect African Americans. Many police departments do not realize that. In a Frequently Asked Questions document, the Seattle Police Department says that its face recognition system “does not see race.” Yet an FBI co-authored study suggests that face recognition may be less accurate on black people. Also, due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests internally, either. (A minimal sketch of what such a disaggregated error-rate test involves follows these findings.)
Face recognition may be least accurate for those it is most likely to affect: African Americans
10. Agencies are keeping critical information from the public. Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy.
11. Major face recognition systems are not audited for misuse. Maryland’s system, which includes the license photos of over two million residents, was launched in 2011. It has never been audited. The Pinellas County Sheriff’s Office system is almost 15 years old and may be the most frequently used system in the country. When asked if his office audits searches for misuse, Sheriff Bob Gualtieri replied, “No, not really.” Despite assurances to Congress, the FBI has not audited use of its face recognition system, either. Only nine of 52 agencies (17%) indicated that they log and audit their officers’ face recognition searches for improper use. Of those, only one agency, the Michigan State Police, provided documentation showing that their audit regime was actually functional.
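Findings 7 and 9 both turn on measurement: accuracy claims mean little unless error rates are computed, and computed separately for each demographic group. Here is a minimal sketch of such a disaggregated test, assuming a labeled set of comparison scores is available; the function and its inputs are illustrative only and reflect no vendor's or agency's actual test harness.

```python
from collections import defaultdict

def error_rates_by_group(trials, threshold):
    """Disaggregated accuracy test for a face comparison system.

    trials    -- iterable of (score, same_person, group):
                 score is the system's similarity score for a photo pair,
                 same_person says whether the pair truly matches,
                 group is a demographic label for the pair.
    threshold -- score at or above which the system declares a match.
    Returns {group: (false_match_rate, false_non_match_rate)}.
    """
    tallies = defaultdict(lambda: [0, 0, 0, 0])  # [FM, impostors, FNM, genuines]
    for score, same_person, group in trials:
        t = tallies[group]
        if same_person:
            t[3] += 1
            if score < threshold:
                t[2] += 1        # genuine pair missed by the system
        else:
            t[1] += 1
            if score >= threshold:
                t[0] += 1        # wrong person declared a match
    return {g: (fm / imp if imp else 0.0, fnm / gen if gen else 0.0)
            for g, (fm, imp, fnm, gen) in tallies.items()}
```

A system can post an impressive overall accuracy number while one group's false match rate is several times another's; only a per-group breakdown like this can surface that gap.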
The Recommendations are as follows.
1. Congress and state legislatures should pass commonsense laws to regulate law enforcement face recognition. Such laws should require the FBI or the police to have a reasonable suspicion of criminal conduct prior to a face recognition search. After-the-fact investigative searches—which are invisible to the public—should be limited to felonies.
Mug shots, not driver’s license and ID photos, should be the default photo databases for face recognition, and they should be periodically scrubbed to eliminate the innocent. Except for identity theft and fraud cases, searches of license and ID photos should require a court order issued upon a showing of probable cause, and should be restricted to serious crimes. If these searches are allowed, the public should be notified at their department of motor vehicles.
If deployed pervasively on surveillance video or police-worn body cameras, real-time face recognition will redefine the nature of public spaces. At the moment, it is also inaccurate. Communities should carefully weigh whether to allow real-time face recognition. If they do, it should be used as a last resort to intervene in only life-threatening emergencies. Orders allowing it should require probable cause, specify where continuous scanning will occur, and cap the length of time it may be used.
Real-time face recognition will redefine the nature of public spaces. It should be strictly limited.
Use of face recognition to track people on the basis of their political or religious beliefs or their race or ethnicity should be banned. All face recognition use should be subject to public reporting and internal audits.
To lay the groundwork for future improvements in face recognition, Congress should provide funding to NIST to increase the frequency of accuracy tests, create standardized, independent testing for racially biased error rates, and create photo databases that facilitate such tests.
State and federal financial assistance for police face recognition systems should be contingent on public reporting, accuracy and bias tests, legislative approval—and public posting—of a face recognition use policy, and other standards in line with these recommendations.
A Model Face Recognition Act (p. 102), for Congress or a state legislature, is included at the end of the report.
2. The FBI and Department of Justice (DOJ) should make significant reforms to the FBI’s face recognition system. The FBI should refrain from searching driver’s license and ID photos in the absence of express approval for those searches from a state legislature. If it proceeds with those searches, the FBI should restrict them to investigations of serious crimes where FBI officials have probable cause to implicate the search subject. The FBI should periodically scrub its mug shot database to eliminate the innocent, require reasonable suspicion for state searches of that database, and restrict those searches to investigations of felonies. Overall access to the database should be contingent on legislative approval of an agency’s use policy. The FBI should audit all searches for misuse, and test its own face recognition system, and the state systems that the FBI accesses, for accuracy and racially biased error rates.
The DOJ Civil Rights Division should evaluate the disparate impact of police face recognition, first in jurisdictions where it has open investigations and then in state and local law enforcement more broadly. DOJ should also develop procurement guidance for state and local agencies purchasing face recognition programs with federal funding.
The FBI should be transparent about its use of face recognition. It should reverse its current proposal to exempt its face recognition system from key Privacy Act requirements. It should also publicly and annually identify the photo databases it searches and release statistics on the number and nature of searches, arrests, and convictions stemming from those searches, and the crimes that those searches were used to investigate.
3. Police should not run face recognition searches of license photos without clear legislative approval. Many police departments have run searches of driver’s license and ID photos without express legislative approval. Police should observe a moratorium on those searches until legislatures vote on whether or not to allow them.
Police should develop use policies for face recognition, publicly post those policies, and seek approval for them from city councils or other local legislative bodies. City councils should involve their communities in deliberations regarding support for this technology, and consult with privacy, civil rights, and civil liberties organizations in reviewing proposed use policies.
When buying software and hardware, police departments should condition purchase on accuracy and bias tests and periodic tests of the systems in operational conditions over the contract period. They should avoid sole source contracts and contracts that disclaim vendor responsibility for accuracy.
All agencies should implement audits to prevent and identify misuse, along with a system of trained face examiners to maximize accuracy. Regardless of their approach to contracting, all agencies should regularly test their systems for accuracy and bias. (A sketch of what such an audit trail might record follows these recommendations.)
A Model Police Face Recognition Use Policy (p. 116) is included at the end of this report.
4. The National Institute of Standards and Technology (NIST) should expand the scope and frequency of accuracy tests. NIST should create regular tests for algorithmic bias on the basis of race, gender, and age, increase the frequency of existing accuracy tests, develop tests that mirror law enforcement workflows, and deepen its focus on tests for real-time face recognition. To help empower others to conduct testing, NIST should develop a set of best practices for accuracy tests and develop and distribute new photo datasets to train and evaluate algorithms. To help efforts to diminish racially biased error rates, NIST should ensure that these datasets reflect the diversity of the American population.
5. Face recognition companies should test their systems for algorithmic bias on the basis of race, gender, and age. Companies should also voluntarily publish performance results for modern, publicly available benchmarks—giving police departments and city councils more bases upon which to draw comparisons.
6. Community leaders should press for policies and legislation that protect privacy, civil liberties, and civil rights. Citizens are paying for police and FBI face recognition systems. They have a right to know how those systems are being used. If those agencies refuse to disclose that information, advocates should take them to court. Citizens should also press legislators and law enforcement agencies for laws and use policies that protect privacy, civil liberties, and civil rights, and prevent misuse and abuse. Law enforcement and legislatures will not act without concerted community action.
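Finding 11 above and several of these recommendations presuppose that every search leaves a record an auditor can later review. The sketch below shows what an append-only search log might capture; all field names are hypothetical and do not reflect any agency's actual schema.

```python
import json
import time

def log_search(logfile, officer_id, case_number, legal_basis, probe_source, hits):
    """Append one face recognition search to an append-only audit log.

    Misuse audits are only possible if every search is recorded with
    who ran it, why, and what came back. Every field is hypothetical.
    """
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "officer_id": officer_id,        # who ran the search
        "case_number": case_number,      # ties the search to an investigation
        "legal_basis": legal_basis,      # e.g., "reasonable suspicion"
        "probe_source": probe_source,    # e.g., "ATM camera still"
        "candidates_returned": hits,     # how many candidates came back
    }
    with open(logfile, "a") as f:        # append-only: one JSON line per search
        f.write(json.dumps(record) + "\n")
```

With records like these, an auditor can ask simple questions after the fact: which searches cite no case number, which officers search far more often than their peers, and which probes came from sources no investigation explains.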
This report provides the resources that citizens will need to effect this change. In addition to the City and State Backgrounders and the Face Recognition Scorecard, a list of questions that citizens can ask their elected representative or law enforcement agency is in the Recommendations (p. 70).