'The many lives of border automation: Turbulence, coordination and care' by Debbie Lisle and Mike Bourne, published in Social Studies of Science (2019), comments:
Automated borders promise instantaneous, objective and accurate decisions that efficiently filter the growing mass of mobile people and goods into safe and dangerous categories. We critically interrogate that promise by looking closely at how UK and European border agents reconfigure automated borders through their sense-making activities and everyday working practices. We are not interested in rehearsing a pro- vs. anti-automation debate, but instead illustrate how both positions reproduce a powerful anthropocentrism that effaces the entanglements and coordinations between humans and nonhumans in border spaces. Drawing from fieldwork with customs officers, immigration officers and airport managers at a UK and a European airport, we illustrate how border agents navigate a turbulent ‘cycle’ of automation that continually overturns assumed hierarchies between humans and technology. The coordinated practices engendered by institutional culture, material infrastructures, drug loos and sniffer dogs cannot be captured by a reductive account of automated borders as simply confirming or denying a predetermined, data-driven in/out decision.
The authors argue:
Since the first e-gates were deployed at Faro and Manchester airports in 2008 (Foreign & Commonwealth Office, Home Office and Border Force, 2017; Frontex, 2014), air, land and sea borders in Europe and the UK have been shaped by an intense drive towards automation. As part of the European Union (EU)’s 2013 Smart Borders package, millions of Euro have been invested in technology projects such as ‘ABC4EU’ and ‘FASTPASS’, which use e-gates to bring together e-passports, ‘live’ biometrics (e.g. photographs) and pre-existing databases (e.g. the Registered Travellers Programme) (ABC4EU, 2019; FASTPASS, 2019; see also iBorderCtrl, 2019). Similar technologies have been rolled out in the UK: By the end of 2017 there were 239 e-gates in operation in all major UK airports (Foreign & Commonwealth Office, Home Office and Border Force, 2017). Globally, the market for Automated Border Control e-gates and kiosks alone is expected to grow to $1.58 billion by 2023 (MarketsandMarkets.com, 2017). This drive towards automation is constituted by two interrelated modes of filtering: (i) the databases and sophisticated algorithms capable of gathering, analysing and comparing massive amounts of data on the mobility of people and goods, and (ii) the technologies used in border spaces that translate pre-emptively generated data to make an instantaneous in/out border decision. The widespread embrace of border automation in the UK and EU is underscored by a powerful fantasy that integrates these two modes of filtering: A perfect in/out decision is produced when the algorithms pre-emptively construct a data double that is ‘safe’ or ‘dangerous’, and the automated technology at the border (e.g. the e-gate or the handheld scanner) either confirms or denies that identity. Amidst growing volumes of passengers and freight, the allure of automation emerges as the perfect resolution of tensions between mobility and security. This fantasy of border automation rests on three major claims. 
First, automated border decisions are instantaneous: Unlike human border guards who struggle to decide within an average of 12 seconds (Frontex and Ferguson, 2014), automated borders draw on the pre-emptive data collection and analytics to produce in/out decisions in a fraction of a second, thereby increasing the convenience of border crossing for predesignated travellers, baggage and freight. Second, border automation enhances the accuracy of decisions because they attach specific pregiven information harvested from large databases to specific bodies in specific sites. In other words, automated borders are the final confirmation that the bodies, bags and boxes in front of them align with the information in the databases. The accuracy provided by automated borders is guaranteed by the certainty and reproducibility of the data driving the decision. Data is stored and can therefore be accessed, rechecked and consulted to identify novel patterns that can aid prediction and ‘future proof’ the border. And finally, automated border decisions are objective and neutral: Because they draw on the algorithmic processing of huge amounts of data, they avoid the biases, prejudices and irritations of human border guards. In this sense, automated borders respect the rights of ‘safe’ persons (because they are not falsely identified), ‘safe’ goods (because they can proceed uninterrupted to their destination), and even suspect persons (because immigration forces have time to plan a humane arrest) (European Commission, 2016a).
Drawing from a multisited and multinational ethnographic study that ran from 2014 to 2016, this article explores the extent to which this powerful fantasy of automation shapes (or indeed, doesn’t shape) the everyday practices and sense-making of our informants: customs officers, immigration officers and airport managers. We critically reflect on how these border agents at a major UK airport and a medium-sized European airport make sense of and interact with the automated technologies put in place to supposedly make their jobs of ‘bordering’ more efficient, accurate and objective. We know from critical border studies and critical security studies that the prevailing fantasy of automation reproduces a problematic anthropocentric landscape in which human operators are separated from the inert technologies they use for bordering (Glouftsios, 2017; Leese, 2015; Schouten, 2014; Sohn, 2016). We are interested in how that anthropocentrism is articulated and troubled in the sense-making and working practices of those using automated borders. In this article, we develop two related questions. First, we explore the extent to which the anthropocentrism underscoring this powerful fantasy of automation operates as a regulative ideal, how it governs the behaviours, practices, relations and imaginaries of those managing automated borders. Here, we build on Allen and Vollmer’s (2018) study of how UK border managers carefully traffic between believing in the promises of border technologies and being deeply suspicious of the machine’s ability to ‘read’ humans. We are particularly interested in the extent to which border agents feel trapped inside a ‘pro-automation’ vs. ‘anti-automation’ debate that forces them to staunchly defend either technology or humans. 
Certainly, we pay attention to how border agents often unthinkingly reproduce these polarised positions, though we are more interested in how they carefully recognise and acknowledge the limitations of such pregiven positions as they make sense of automation. Indeed, our interviews and observations revealed a great deal of anxiety over who or what is actually making the in/out border decision, and who or what is the best agent to do so. These moments of doubt and uncertainty, often expressed through frustration, loss and lament, lead to our second question, which engages the new working practices emerging as border agents work with, around and in proximity to automated borders. We are particularly interested in the coordinated actions, unexpected improvisations and creative work-arounds that are developing between humans, machines, and other nonhumans. To get a meaningful picture of these new practices, we telescope out from the specific automated technologies of the border to focus on the wider entanglements that are shaping supposedly ‘clean’ in/out border decisions. Through our interviews and observations, we uncovered a complex and expansive understanding of automation, which exceeds the simple and unidirectional flow from pre-emptive data-based filtering to the automated border technology that simply confirms or denies a pregiven decision. Here, we draw from critical work detailing the deterritorialised nature of borders, such as de Goede’s (2018) analysis of the ‘chains of translation’ that constitute the governing of suspicious financial transactions, and Jeandesboz’s (2016) account of the ‘chains of association’ that constitute border policy-making (see also Parker and Adler-Nissen, 2012; Popescu, 2015b). Thinking about automated borders through this radically deterritorialised landscape is important because it creates more space to consider questions of agency. 
Not only are airport managers and customs and immigration officers repositioned as active agents using technologies in creative, surprising and inventive ways, but the supposedly ‘inert’ technologies of bordering are understood as entities acting, exerting force and directly shaping in/out border decisions in ways that exceed a simple confirmation or denial of a pregiven decision. As our interviews and observations reveal, the multiple relations and attachments between these agents are producing new coordinated practices around automated borders that often confound the deep anthropocentrism underscoring the fantasy of automation.
They conclude:
The prevailing pro/anti debate over border automation would have us believe that in/out border decisions are the result of either superior technologies capable of translating pregiven data with more speed, accuracy and objectivity, or superior human capabilities such as intuition, experience and tradecraft offering more relevant translations of pregiven data in specific situations. But these two narratives share a crucial assumption: that proper, robust and reliable in/out border decisions come primarily from single actors – either automated technologies or sophisticated human agents. This article contests that deeply reductive ontology and looks instead at what kind of sense-making and working practices emerge when we approach border automation through a lens of entanglement. Our observations and interviews at two airports revealed a complex set of coordinated practices between some expected humans and machines (e.g. immigration officers and e-gates), as well as some unexpected other nonhuman actors (e.g. parking spaces, packing tape, sniffer dogs, cement walls, shit). We came to understand automated borders not as a single moment of decision where an e-gate or e-manifest confirms or denies entry based on pregiven data, but rather as an elongated set of coordinated practices that are irreducible to either human or technology. To be sure, there is much more research to be done on how these practices emerge and transform. For example, what kind of automated border appears in the dedicated training sessions for specific technologies, or the professional mentoring structures that sustain its use? What kind of coordinated practices emerge around the care, maintenance, fixing and cleaning of automated border technologies? And if our turbulent cycle of automation operates across airport space, what are the different intensities operative in each sector? 
Our purpose in reframing automated borders through their constitutive entanglements and emerging practices of coordination is to reveal the profound contingency of in/out border decisions, no matter how automated those decisions purport to be. The insights we gleaned from our interviews and observations helped us to contest the isolation, instrumentality and purity of automated borders, and foreground the congregation of agents and multiplicity of ‘situated actions’ that are enrolled in these seemingly simple in/out decisions.