Showing posts with label Implantables. Show all posts

28 October 2022

Implantables and privacy

'When is the processing of data from medical implants lawful? The legal grounds for processing health-related personal data from ICT implantable medical devices for treatment purposes under EU data protection law' by Sarita Lindstad and Kaspar Rosager Ludvigsen in (2022) Medical Law Review states 

Medicine is one of the biggest use cases for emerging information technologies. Data processing brings huge advantages but forces lawmakers and practitioners to balance between privacy, autonomy, accessibility, and functionality. ICT-connected Implantable Medical Devices plant themselves firmly between traditional medical equipment and software that processes health-related personal data, and these implants face many data management challenges. It is essential that healthcare providers and others can identify and understand the legal grounds they rely on to process data. The European Union is currently updating its framework, and the special provisions in the GDPR, the current ePrivacy Directive, and the coming ePrivacy Regulation all provide enhanced thresholds for processing data. This article provides an overview and explanation of the applicability of the rules and the legal grounds for processing data. We find that only a cumulative application of the GDPR and the ePrivacy rules ensures adequate protection of this data and present the legal grounds for processing in these cases. We discuss the challenges in obtaining and maintaining valid consent and necessity as a legal ground for processing and offer use case-specific discussions of the role of consent long-term and the lack of an adequate ‘vital interest’ exception in the ePrivacy rules.

The authors comment 

 Medicine is an emerging field for information communication technologies (ICT). Data processing brings significant advantages, and medical technologies develop at record speeds. ICT-connected Implantable Medical Devices (ICTIMD) plant themselves firmly between traditional medical equipment and software processing health-related personal data. ICTIMD are medical devices implanted in the human body with software capable of communicating and transferring data to external devices. They allow healthcare providers to monitor the patient’s condition without being physically present and help medical industries go from reactive to predictive and proactive models of care. 

However, the rapid technological development is a two-edged sword, forcing lawmakers and practitioners to balance between privacy, data protection, autonomy, and accessibility. ICTIMD rely on the processing of data on a massive scale, and while they face many of the same data management challenges as other fields, there are some major distinguishing factors. Health data is one of the most sensitive types of personal data, and the impact of a data breach can have enormous consequences. ICTIMDs are also, in contrast to most other devices, collecting data automatically and constantly from sensors implanted in human subjects. The end-user and data subject, the patient, does not have the freedom to leave the device at home. These devices form a particularly sensitive part of the private sphere of the users, demanding high data protection standards. 

The European Union (EU) is in the process of updating its privacy and data protection framework. Having replaced the Data Protection Directive (DPD) with the General Data Protection Regulation (GDPR), the complementing ePrivacy Directive (PECD) will eventually be replaced by an ePrivacy Regulation (EPR) and future additional legislation. These instruments together implement enhanced thresholds for processing health data from terminal equipment. For efficient data protection, it is vital that all the actors in the value chain, the healthcare providers, and the patients can identify and understand the lawful grounds available for processing. Our sections II and III start by clarifying the applicability of the rules and provide an overview of the legal grounds for processing from ICTIMD. Sections IV and V dive deeper into consent and necessity as legal grounds for processing ICTIMD data before section VI discusses the framework’s suitability for ICTIMD processing.

The article will focus on processing enabling medical treatment and exclude processing for research purposes or other public interests. It will be limited to data protection law and will not cover law enforcement access, criminal law issues of illegal access, product liability law, or health law specifically.

15 December 2020

Work Surveillance

Data subjects, digital surveillance, AI and the future of work by Phoebe V. Moore for the Panel for the Future of Science and Technology (STOA) and the Secretariat of the European Parliament is characterised as providing 

 an in-depth overview of the social, political and economic urgency in identifying what we call the ‘new surveillance workplace’. The report assesses the range of technologies that are being introduced to monitor, track and, ultimately, watch workers, and looks at the immense changes they imbue in several arenas. How are institutions responding to the widespread uptake of new tracking technologies in workplaces, from the office, to the contact centre, to the factory? What are the parameters to protect the privacy and other rights of workers, given the unprecedented and ever-pervasive functions of monitoring technologies? 

The report evidences how and where new technologies are being implemented; looks at the impact that surveillance workspaces are having on the employment relationship and on workers themselves at the psychosocial level; and outlines the social, legal and institutional frameworks within which this is occurring, across the EU and beyond, ultimately arguing that more worker representation is necessary to protect the data rights of workers. 

 Moore comments 

 Workplace surveillance is an age-old practice, but it has become easier and more common, as new technologies enable more varied, pervasive and widespread monitoring practices, and have increased employers’ ability to monitor what seems like every aspect of workers’ lives. New technological innovations have increased both the number of monitoring devices available to employers as well as the efficiency of these instruments to extract, process and store personal information. Digital transformation, work design experimentation and new technologies are, indeed, overwhelming methods with intensified potential to process personal data in the workplace. While much of the activity appears as an exciting and brave new world of possibility, workers’ personal experiences of being tracked and monitored must be taken into account. Now, issues are emerging having to do with ownership of data, power dynamics of work-related surveillance, usage of data, human resource practices and workplace pressures in ways that cut across all socio-economic classes. 

The first chapter of the present report ‘Surveillance and monitoring: The future of work in the digital era’, commissioned by the European Parliament’s Panel for the Future of Science and Technology (STOA), deals with surveillance studies, which originates in legal studies and criminology but is increasingly important in sociology of work and digitalisation research. The first chapter outlines some of the technologies applied in workplaces. The second chapter looks at the employment relationship, involving how workers and managers, and surrounding pressures transform when a third actor (the machine and/or computer), is introduced. This chapter also covers the ways that inter-collegial relations are impacted, as well as issues around work/life integration. 

The third chapter looks at data protection and privacy regulatory frameworks and other instruments as they have developed over time, leading up to today’s General Data Protection Regulation (GDPR). Various historical moments have impacted how data and privacy protection has evolved. Concepts surrounding this legal historical trajectory have emerged, with some ambivalences at points around which philosophical and ethical foundations are at stake. Some of the tensions in legal concepts driving the debates in privacy and data protection for workers, and the paradoxical circumstances within which they are seated, are then dealt with in chapter four, where the possibility of deriving inferences from data can lead to discrimination and reputational damage; where the concept of worker ‘consent’ to data collection is questioned; and where the implications of data collection from wellness and wellbeing initiatives in the workplace are increasingly under the microscope.

The fifth chapter outlines a series of country case studies, where applied labour and data protection and privacy policy are revealed. Many countries are reviewing data and privacy and labour laws because of new requirements emerging with the GDPR, which has also been extensively reviewed in the present report. Some legal cases have emerged whereby employers have been judged to breach data protection and privacy rules, such as Bărbulescu vs Romania. The sixth chapter, called ‘Worker cameos’, provides a series of worker narratives based on field interviews about their experiences of monitoring and tracking. In particular, content moderators and what the author calls ‘AI trainers’ are the highest surveilled and under the most psychosocial strain. Taking all of these findings into account, the seventh chapter provides a series of the author’s suggestions for first principles and policy options, where worker representation and co-determination through social partnerships with unions, and more commitments to collective governance, are put forward.

Moore argues 

Workplace surveillance over time has occurred within a series of historical phases, where work design, labour markets, and industry trends have differed and business, social and labour processes have taken particular forms. Surveillance of workers can be both analogue and technological, and operates at a series of tangible and psychosocial levels. ... In the report on this nearly year-long project, we look at how insights in technological development have evolved within a sociological and business operations framework, and identify how technologies are being implemented to manage worker performance, productivity, wellness and other activities, in order to analyse and predict the social impact and future of work, within regulatory parameters. Workplace surveillance in the European Union is predominantly outlined, but as early, government-commissioned North American research into workplace surveillance was also very perceptive in foresight, some historical discussion of the United States’ early activity as well as some insights from Norway and Nigeria, are included. 

Workplace surveillance is not separate from the larger structures and systems of labour relations, management styles, workplace design, and legal and ethical social trends, and today is explicitly part of the accelerating trends in digitalised surveillance in many spheres of everyday life. Therefore, we address all of these categories of analysis, alongside identifying where and how digitalised surveillance has occurred and is occurring in workplaces or, perhaps better said, workspaces, called as such because the concept of ‘place’ has to be interrogated and critiqued, precisely because work is carried out in increasingly virtual spaces globally and, with the onset of Covid 19 working conditions, increasingly in homes.

In 2017, a Motion for a European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics clearly stated that:

...assessments of economic shifts and the impact on employment as a result of robotics and machine learning need to be assessed; whereas, despite the undeniable advantages afforded by robotics, its implementation may entail a transformation of the labour market and a need to reflect on the future of education, employment, and social policies accordingly. (European Parliament 2017) 

This Recommendation predicted that the use of machine learning and robotics will not ‘automatically lead to job replacement’, but indicates that lower skilled jobs are going to be more vulnerable to automation. Furthermore, the Recommendation cautions about the likelihood of labour market transformations and changes to many spheres of life, including, as above, ‘education, employment and social policies’ (ibid.). In this light, the current report builds on some of these earlier recommendations to the European Parliament, because it is now more important than ever to address the lagging discussions on ethics, social responsibility, social justice and, importantly, the role of unions and worker representative groups in the continuous development and integration of technologies and digitalisation into workplaces. The report is written with a human rights focus, seeing data privacy and protection as a fundamental human right.

Automation, robotics and artificial intelligence (AI) are part and parcel of the discussion of monitoring and surveillance of work, where a binding feature of how these processes emerge is the collection, storage, processing and usage of large data sets that are collected in more and more spheres of people’s everyday lives and in particular, workplaces. A report prepared for the United Kingdom’s Information Commissioner’s Office (ICO) declared that ‘we live in a surveillance society. It is pointless to talk about surveillance in the future tense... everyday life is suffused with surveillance encounters, not merely from dawn to dusk but 24/7’ (Ball and Wood 2006). More than a decade later, this statement could not be more relevant. From cameras at self-operated grocery store check-outs in New York, to facial recognition sensors at a local gym in London, to recorded calls with bank call centre employees who may themselves be based in India or Bangladesh, surveillance is an activity that is no longer conducted only by law officers on the streets watching out for robbers wearing balaclavas. People are watched in almost every corner of society, and sometimes people are even asked to watch one another, in what Julie E. Cohen calls participatory surveillance (Cohen 2015). Gary T. Marx earlier referred to forms of participatory surveillance as a kind of ‘new surveillance’ in 1988, just as computers were becoming integrated into everyday life and seemingly introducing a new type of soft surveillance (Marx 1988). In 1982, Craig Brod had already warned of the dangers of over-use of computers at work and talked about the hazards of technostress resulting from the uptake of new technologies in everyday lives (Brod 1982).

Cohen, an established figure in the research arena of surveillance, looked at the issues surrounding privacy and systems of surveillance. Cohen argues that privacy, as a concept informing practices, has a bad reputation, where it has been touted as an old-fashioned concept and a delay to progress. Cohen counters these systemic assumptions and says that privacy should be a form of protection for the liberal self (2013: 1905) and important for the democratic process. Indeed, trading privacy for supposed progress reduces the scope for self-making and informed, reflective citizenship and a range of other values that are foundational to consumer society.

So, effective privacy regulation must render both public and private systems of surveillance meaningfully transparent and accountable, where privacy is defined in the dynamic sense: ‘an interest in breathing room to engage in socially situated processes of boundary management’ (Cohen 2012: 149, cited in Cohen 2013: 1926-1927). Privacy incursions harm individuals, but not only individuals. Freedom from surveillance, Cohen argues, whether public or private, is foundational to the practice of informed and reflective citizenship. These ideas are important when looking at workers and their right to privacy. A reasonable expectation of privacy is likely when the actions of the employer suggest that privacy is a condition of work. Internationally there are variations in law and culture in terms of privacy, especially differences between the European Union and the USA. In Europe, privacy has tended to be seen as somewhat more fundamental, something that should not be forfeited, whilst in the US privacy can be viewed as a commodity (Smith and Tabak 2009).

There was a lot of discussion about business culture after Scientific Management, during the Human Relations and Systems Rationalism periods. Alder (2001) reviewed a range of organisational culture types, asking which ones are more or less amenable to electronic performance monitoring (EPM) integration. Right at the end of the latter period, Deal and Kennedy outlined four cultural types in 1982, which they argue are oriented around risk-taking and frequency of feedback within the organisation: ‘1) tough-guy macho, 2) work hard/play hard, 3) bet your company, and 4) process’ (1982, cited in Alder 2001). Petrock, Alder notes, also outlines a typology of four organisational cultures: 1) clan culture, 2) adhocracy, 3) hierarchy, and 4) market cultures (1990), but these types are limiting because no associated ways to measure or identify them are offered. Alder believes these delineations are incomplete and not fit for purpose. Wallach, however, came up with the best typology, Alder states, noting the signifiers within three types: ‘bureaucratic, innovative and supportive’ (Wallach 1983, cited in ibid.). A bureaucratic culture is identified with hierarchy, regulation and procedure and is the organisational culture type that is most responsive and accepting of technological tracking. Alder (2001) argues that workers will respond differently to EPM in different organisational cultures. Therefore, Alder indicates that a management body that wants to implement EPM must think about the culture of its organisation to assess to what extent resistance will emerge, and how to accommodate this. These days, however, the culture-based arguments are less and less relevant, as metrics and data appear to hold the promise of making specificities in qualitative differences irrelevant, and as data rights become increasingly mainstreamed across the consumer and worker spheres.

This report, overall, aims to highlight what kinds of technologies are being integrated to monitor, track and therefore, surveil workers; to identify how technologies are being implemented; to understand the impact that new technologies in workplaces have on the employment relationship; to throw light on the social, organisational, legal and institutional frameworks within which this is occurring; and to reveal the institutional, legal and union responses. Finally, the goal of this report is to provide a set of first principles and policy options to provide EU member and associated states with guidance on protecting the privacy and rights of worker data subjects.

Leading up to the General Data Protection Regulation (GDPR), the EU’s Data Protection Working Party (Art. 29 WP) stressed that ‘workers do not abandon their right to privacy and data protection every morning at the doors of the workplace’ (2002: 4). Some degree of gathering and processing of personal data is a normal and in fact, a vital part of almost any employment relationship (Ball 2010). Some workers’ personal data is necessary to complete contracts i.e. to pay workers and record absences, and much of it is both reasonable and justifiable for use by management. However, that is not to say that any and all forms of surveillance and data processing should be so considered. 

Indeed, employers’ surveillance practices must often be reviewed in light of concerns for the privacy or simply for the human dignity of the worker (Jeffrey 2002; Levin 2009), and this report sets out to do just this. The present author has already situated this trend within the contemporary pressures of global political economy pressures (Moore 2018c) and looked at the psychosocial violence and safety and health risks that workers face with the introduction of digitalised tracking and monitoring (Moore 2019, 2018b). Welfare state retrenchment and austerity policies alongside these technological interventions have coalesced into the ‘political economy of anxiety’ (Moore 2018a: 43) for workers, where self-quantification and wellness discourses and frames thrive, but structural economic change is not occurring fast enough with relevant protections and social partnerships with unions and other worker representative groups. Now, we set out the aims and intentions for the project which form each chapter.

The aim of the first chapter of the report is to review the concept of surveillance, where workplace electronic performance monitoring and privacy questions are increasingly important. The Taylorist employment model of mental vs manual work in a set hierarchy is increasingly a thing of the past, and while tools for measure were used in Taylor’s workshops, the kinds of technology now available on the market have fed into significant differences to a new world of work. Parallel to this change, the pursuits for surveillance have entered more intimate and everyday spaces than before. The known categories of the ‘watched’ and the ‘watcher’ are transformed. ’New surveillance workplaces’ or what we also refer to as ‘workspaces’, indeed, feature these new characteristics. The first chapter therefore looks at a range of new technologies which are contributing to the recent trends in an uptake of electronic monitoring and tracking at work, backed with existing empirical evidence and primary and secondary literature. 

In the second chapter, the report’s aim is to look at changes to a once presumed standard employment relationship, where managers and corporate and organisational hierarchies were explicit and clearly known. Now, management and operations processes are being digitalised, and workplaces are moving into a myriad of domains. As a result of the changes to a more standard type of employment relationship and work environment, uncertainty or other psychosocial discomfort can emerge, where workers may feel their managers no longer trust them, or workers experience the issue of function creep, where data is used for purposes other than those it was first collected for. Or competition between workers is intensified when performance data is viewable, such as on the walls in call centre workplaces or on shared dashboards in gamified wellness programmes. The second chapter outlines the observable and documented as well as probable changes to the employment relationship which new technologies imbue. The third chapter turns to the policy and regulatory frameworks and instruments surrounding privacy and data protection and technological tracking, starting with the Data Protection Directive.

The most important points in data protection and worker issue based policy are covered in the leadup to the GDPR. Interestingly, the International Labour Office’s Code of Practice around workers’ data, published as early as the 1990s, made similar interventions and recommendations that are now enforceable in the GDPR today. This chapter outlines this process and picks up on some of the most important and insightful developments to provide a foundation for the first principles and policy options outlined in later chapters of the report. The aim of the fourth chapter is to identify some of the tensions in legal principles about which the present author has been concerned, where e.g. inviolability does not seem to cohere with the concept of power of command; or where inferences from data and the link to workers’ reputations must be problematised. This chapter also looks at the concept of ‘consent’, which tends to be de-prioritised in discussions of the employment relationship (where consent is normally discussed in relation to a consumer, in the context of the GDPR) due to its already existing unequal nature. We argue that there are possibilities to rethink the definition of consent, nonetheless, and to perhaps look at a way to update the unidirectional conceptualization of this type of relationship. 

The fifth chapter then provides a series of country case studies provided in part by a series of legal scholars from across the EU, and Norway and Nigeria, where contributors have outlined information about which technologies are characteristic in specific countries; identified which legal mechanisms are being used including aspects of labour law, to ultimately protect workers’ privacy and data; looked at the ways local cases are working to integrate the GDPR as per Art. 88; and begins to put the focus on the role of worker representatives who, we ultimately recommend, should be considered meaningful social partners in dialogue with employers and with co-determination rights (see policy options). 

In the sixth chapter, we present a series of ‘Worker Cameos’ which are based on semi-structured interviews carried out with a series of workers to identify where EPM and tracking are occurring and to investigate and identify workers’ experience of this. Workers in many sectors and spheres, from dentists, to bankers, to content moderators, are being tracked. All workers interviewed feel that their work has intensified, expectations are higher, performance management is increasingly granular, and stress and anxiety are at an all-time high, as tracking and monitoring technologies become increasingly good at surveillance. 

The report concludes with a set of first principles and policy options for European Parliament policymakers, prioritising the role of trade unions and other worker representative groups. These Principles and Options are designed to mitigate against the worst impacts of digitalised tracking, monitoring and surveillance in the world of work.

26 April 2020

COVID Cyborgs

'The COVID Cyborg and Protecting the Unaugmented Human' by Kate Galloway in (2020) Alternative Law Journal comments
This article examines the increasing tendency towards governance of people through their representation via data. In its most contemporary iteration, the COVID-19 pandemic has raised the prospect of contact tracing apps. While public discourse about the apps has focused principally on the important issue of data privacy, there are other possible effects whereby participation in such schemes might become a prerequisite to accessing services or basic rights—either from government or from corporations. The pathway to acceptability of applying our data in this way is already paved, through fitness monitors and other technologies by which we represent ourselves. This article sets out the foundation of such technologies and their application, before outlining their effect on the recognised boundaries of governance and the conception of the holder of rights and the substance of those rights. 
Galloway argues
In 2018, biohacker Meow-Ludo Disco Gamma Meow-Meow was found guilty of travelling on Sydney buses without a valid ticket. Rather than carrying Sydney transport’s Opal Card with him, he had instead implanted its chip into his hand. He had indeed tapped on when entering the bus—so had paid for his trip. However, Sydney transport authorities were not satisfied with this, alleging that he had breached the card’s terms of use. 
Meow-Meow claimed that his case was based on the principle of ‘cyborg rights’. The modification of his body through embedding technology-capable hardware is a feature of a posthuman evolution, a ‘leaky distinction between animal-human and machine’. As an activist pushing the boundaries of the definition of human, Meow-Meow was simultaneously pushing the boundaries of the rights held by an altered human before the law. 
The science fiction-like nature of body modification is occurring in more prosaic ways. A pacemaker, for example, might transmit data about its human operating system in the same way that Meow-Meow’s Opal card chip transmitted data concerning payment of a bus fare. Whether therapeutic interventions properly constitute a ‘cyborg’ remains an open question, however to the extent that they might, the pacemaker example certainly poses less of a challenge to our general conceptions of humanity than does a more extreme bodily modification, possibly undertaken by oneself. 
Machines and other hardware (and software) may be implanted within us, but more readily we are enhancing our physical capabilities by carrying them on our person. Smartphones are ubiquitous; they extend our intellectual capacity and ability to communicate, even provide biophysical feedback for life-giving treatments, and share myriad personal data with government and corporations alike. Fitness trackers worn on the wrist measure our physiological signs, not only re-presenting them to their wearer as a variety of metrics by way of graphs and icons, but also sharing them with other users and their corporate creators. Our devices also call for biometric data to unlock their features. We readily submit to fingerprints and facial recognition, granting global corporates the most intimate of insights into ourselves. 
At the same time as we have willingly released aspects of ourselves, through our data, in the private sphere our government has constructed a surveillance architecture affording security services wide scope for access to our telecommunications data and our biometric data. Although governments have pushed through the suite of legislation for over a decade, this has not come without a cost. The uptake of My Health Record, a putative personal database of one’s medical information, has been poor. And now, in the thrall of a pandemic, government is proposing a contact tracing app whereby a user’s proximity to another person (within 1.5m for more than 15 minutes) would be identified through Bluetooth technology, encrypted, and recorded in the app. If a contact is diagnosed with COVID-19, then all contacts would be notified of that. 
Critique of the app—at the time of writing not yet released—has generally been concerned with data privacy per se. This is, of course, important. However, independent of data privacy is a question opposite to that encountered by Meow-Meow. For the foreseeable future, and in particular while we are in a declared public health emergency, our infection status regarding COVID-19 is central to our freedom, and indeed, to wider societal freedom. In that sense, a tracing app—and its data—effectively function as an extension of ourselves. They are a means of reassurance not only to public health officials running the program, but to wider society, that we, collectively, are safe. Meow-Meow exercised his freedom to extend the functioning of his body by inserting the Opal card chip. But will we be free from extending our corporeal body through the incorporeal data contained in a contact tracing app? Without making the app mandatory, there are multiple ways that it might entrench itself within society to create classes of people: those whose provenance is known (via the app) and those whose provenance is not. 
This article suggests that the COVID-19 pandemic will test the boundaries of our personhood in a new way. Despite the existing state/corporate data infrastructure whereby others are able to construct a picture of our most intimate lives, there is not yet a universally compelling basis for production of personal data as a threshold for acceptance into places or institutions. Contact tracing may present one. And if our data is to be carried with us as an integral and qualifying part of our interface with the world around us, it may be considered as part of our person. To the extent that our data engagement differentiates us from other humans, the question arises of the protections available at law. In particular, with an ‘extended’ human, the question arises about where recognised boundaries of governance lie, whether the extended human is the bearer of rights, and if so, what is the substance of those rights. 
Part II outlines the basis on which our data is effectively an extension of ourselves, and as such constitutes the extended human as a species of ‘cyborg’ following Haraway’s interpretation. Part III then hypothesises about a variety of social contexts that might prefer or demand what I call here a ‘COVID cyborg’—a person enhanced by their COVID tracing data—to the exclusion of those not so enhanced. It envisages our society comprising two classes of people: the COVID cyborg, and the unaugmented human. Unlike the experience of Meow-Meow, the COVID cyborg has the potential to be embraced, effectively affording them rights superior to those of the unaugmented human. If this is to be the case, the law needs to comprehend both cyborg and unaugmented status as equal subjects of protection.

31 March 2019

Implantables and biometrics

A slow news day at the Canberra Times, with a breathless item today announcing that 'More than a hundred Canberrans have implantable microchips'.

Yes, we are back in Meow-Ludo Disco Gamma Meow-Meow territory again, this time with reporting about an Australian entrepreneur who's an advocate of implanted tags.

The CT states
A convicted hacker is selling implantable microchips able to store data including credit card details, and more than 100 Canberrans have signed on to the new technology. 
Chip My Life, an Australian company which imports the technology and sells it domestically, has sold more than 100 microchips to ACT residents since operations began in 2016. 
The microchips are the size of a grain of rice and are implanted into the webbing between the thumb and forefinger in a procedure that takes less than a minute. The procedure can be carried out by specialists in Sydney, and a Canberra-based clinic is slated for the coming months. 
The company's co-founder, a convicted hacker, Skeeve Stevens said interest had surged in the national capital for the technology in recent years. 
He said the microchip was inserted into the webbing between the finger and thumb because the area had fewer pain nerves. 
In 1995, Mr Stevens was sentenced to three years in prison for stealing and publishing the credit card numbers of 1200 AUSNet subscribers. ... 
Mr Stevens laughed off privacy concerns about his business, saying the conviction didn't seem to matter in other parts of his career where he speaks to government departments about the implications of new technology. 
He said he does not have access to the data being placed on the chips. While the company provided the microchip, users were responsible for inputting their personal information. 
"In the three years we've been going, we've sent out around 1600 microchips," Mr Stevens said. "We don't know what people do with the chips once we send it to them," he said. "It's more secure than the card in your hip pocket or your wallet if you're using it for the same type of purposes." 
Mr Stevens is among those who have an implanted microchip. His holds codes for his front door and garage at home. He said the majority of microchip users use the technology to store data normally found in swipe cards. 
"Some of the chips use the same technology as swipe-card technology in that it holds a single serial number to open something like a garage door. "The other type of chips can hold a lot more information and people use it store personal information and things like credit card details."
An irreverent friend commented that 100 implants (primarily to black t-shirt-wearing male geeks?) is not a revolution and wondered whether there had been a similar uptake of Prince Alberts and other things that make most people scratch their heads or go ouch.

'Use and acceptance of biometric technologies in 2017' (AIC Trends and Issues in Crime and Criminal Justice) by Russell G Smith, Alexandra Gannoni and Susan Goldsmid comments 
As part of the Australian Government’s National Identity Security Strategy (AGD 2013), a sample of Australians were surveyed about their experience of identity crime and misuse and how they responded to the problem. In addition to finding out how prevalent misuse of personal information is, the surveys asked respondents to indicate how willing they were to use various biometric technologies to protect their personal information (Emami, Brown and Smith 2016). 
This paper presents the findings of the surveys conducted in 2014, 2016 and 2017 that relate specifically to previous use of biometrics and willingness to use biometrics in the future. It provides updated information on the findings of the 2014 survey, which was the first to assess the willingness of Australian victims of identity crime to use biometrics to enhance the security of their personal information (Emami, Brown and Smith 2016). The other more general findings of the identity crime and misuse surveys have been published elsewhere (Goldsmid, Gannoni and Smith 2018; Smith, Brown and Harris-Hogan 2015; Smith and Hutchings 2014; Smith and Jorna 2018). 
The place of biometrics in identity security 
Biometric technologies use an individual’s unique physiological or behavioural attributes as a means of identification. They include fingerprint matching, signature analysis, or recognition of a person’s retina, iris, face or voice. Facial recognition is now considered to be the dominant biometric technology globally and the one most likely to increase in use over the next few years. The findings of the Biometrics Institute’s annual surveys of members since 2010 have shown that facial recognition has continued to grow in importance. These surveys are conducted annually and are sent by the Biometrics Institute to its 6,000 individual members and other stakeholders worldwide. In June 2018, 310 individuals responded to the survey, representing suppliers of biometrics (48%), users (38%) and other interested organisations and industries (14%; Biometrics Institute 2018). The 2018 survey found that 47 percent of respondents considered facial recognition to be the biometric technology most likely to be on the increase over the next few years (Biometrics Institute 2018). This was followed by iris recognition (8%), fingerprint recognition (7%) and voice recognition (6%). A further 19 percent of survey respondents considered that multi-modal approaches that combine various biometrics would be most likely to increase over the next few years (Biometrics Institute 2018). 
Biometric technologies are currently used by a range of organisations in Australia to verify the identities of the people with whom they deal. For example, the Department of Home Affairs collects biometric information including fingerprints and facial images from offshore visa applicants, onshore protection visa applicants, immigration detainees, and certain categories of airline passengers (Department of Home Affairs 2018). 
Australian airports have facial recognition capabilities, known as SmartGates, that enable travellers with ePassports from Australia, New Zealand, the United Kingdom, Switzerland, Singapore and the United States to process themselves rather than undergoing the customs and immigration checks that are usually conducted by Australian Border Force officers (Department of Home Affairs 2018). Standards for the interoperability of biometric systems have also been developed to promote the effective operation of biometric systems between various government agencies (AGD 2012). 
In addition, biometrics have been introduced to verify an individual’s identity in a range of other settings. For example, a number of financial institutions are considering using biometric technologies such as fingerprint recognition for payment card authentication and for mobile banking services instead of passwords and PINs (Saarinen 2017). Iris recognition has also been used for cardless ATM transactions (Kim 2015). In Australia, the National Australia Bank and Microsoft have collaborated to design a proof of concept ATM using biometrics, cloud and artificial intelligence technologies; this would enable customers to withdraw cash from ATMs using facial recognition technology and a PIN (Planet Biometrics 2018). 
Respondents to the Biometrics Institute’s survey in 2018 indicated that the most significant development in the use of biometrics during the last 12 months related to border control/security, accounting for 20 percent of responses. This was followed by online identity verification (12%), largescale national identity deployments (9%), financial services (8%) and mobile payments/m-commerce, device access and surveillance (these last three types accounting for 7 percent each). Respondents also indicated that over the next five years the most important developments would occur in relation to online identity verification (20%), large-scale national identity deployments (11%) and border control/security (11%; Biometrics Institute 2018).
The authors conclude
As the use of digital technologies has become more widespread, and identity crime and misuse have continued to increase, the computer security industry has sought to improve avenues for the efficient and secure authentication of users’ identities. Existing systems that rely on username and password combinations have become problematic as criminals have become more adept at compromising passwords. The proliferation of username and password combinations has also made it impossible for users to manage this information without resorting to insecure ways of remembering their passwords, or having to purchase and use automated password management software (Emami, Brown and Smith 2016). 
Biometric technologies seek to solve this problem by enabling individuals to use their biological attributes as a means of identifying themselves. This report presents the findings of recent surveys that sought to quantify the extent to which a sample of Australians have made use of different biometrics in the past, and how willing they would be to use the selected biometrics in the future to minimise the risk of criminal misuse of personal information. 
With the rise of international security incidents, a balance must be struck between the need for personal security and the need for privacy and confidentiality of personal information. Prior research has found that concerns over privacy, data loss and spoofing (attempting to overcome biometric recognition systems) are important factors restraining the biometrics market. The present surveys confirmed that respondents were concerned about the privacy and confidentiality of personal information when using biometrics, particularly with systems operating outside government control. Understanding people’s perceptions of risk and their willingness to use technology as a security solution is of critical importance in devising appropriate policy measures that will be effective on the one hand, and accepted by the community on the other (Emami, Brown and Smith 2016).  
The current survey research showed that a relatively small percentage of respondents had used the specified biological biometrics in the past, but that use increased significantly between 2016 and 2017. It also showed that almost half (48%) of respondents in 2017 were willing to use one of the four biological biometrics to protect personal information in the future, and that this was a three percentage point increase on the same finding in 2016. Between 2016 and 2017 all the biometrics examined showed a statistically significant increase in user acceptance. In 2017, nine percent of respondents even reported being willing to use implanted chips to protect their personal information. It was also found that older respondents were significantly more willing to use any of the four biological biometrics than younger respondents, perhaps indicating greater concern among older Australians regarding the security of their personal information, or perhaps their need to guard their assets and life savings from theft. Alternatively, younger people might be reluctant to use technologies that appear to be complex and could be seen to impede their immediate access to information in the online world. Clearly, ongoing monitoring of these attitudes is needed to ensure that future generations of users are willing to use any biometric systems that are implemented. Respondents who reported recent victimisation were also significantly more likely than other respondents to report a willingness to use voice and iris recognition as well as chip implantation to protect their personal information, but not fingerprint or facial recognition systems. 
As the biometrics market continues to develop, further research is needed to understand users’ behaviour and willingness to use biometric technologies, particularly facial recognition and multimodal systems that combine various biometrics, which are developing strongly. Evidence is needed of the extent to which such systems are vulnerable to fraud and misuse and how individuals respond to victimisation and reinstate their personal information following compromise. In addition, evidence is needed of the crime displacement effects of introducing biometrics and how criminal behaviour adapts and changes as a result of enhanced user authentication processes. In particular, risks of violent crime and duress inflicted on users need to be examined and strategies developed to address any such problems.
The work is interesting because it will be embraced by particular advocates and because much of the data is very discordant, with the authors recurrently referring to inconsistencies regarding claimed exposure to specific technologies rather than merely a wariness about stated attitudes. Like much professional research, it is an invitation to a deeper and more comprehensive study.

14 December 2018

Cultures

I occasionally allow myself to step away from writing about data protection (privacy, confidentiality, secrecy) and health sector regulation by taking a walk on the wild side. Here's the abstract from a presentation at Griffith Law School earlier this week and an associated book chapter.
Bullies, Blokes and Buggery: Homosociality, Justice and Male Rape through an Australian lens 
The depiction in Australian cinema of male-on-male sexual assault offers a lens for understanding homosociality and justice within Australia and across the globe. 
Male rape – an assault that objectifies the victim and valorises the perpetrator as both powerful and outside the rules – is a recurring but largely unrecognised feature of the Australian screen. It is evident in, for example, iconic works such as Wake in Fright (1971), The Chant of Jimmie Blacksmith (1978), Mad Max (1979) and Ghosts of the Civil Dead (1988). Those works often use a distinctly Australian landscape, one that is recognisably not the American West or Scandinavia. 
They involve brutality in an environment in which legal authority – conventions about rules and remedies – is absent, weak or indifferent. It is an environment in which bystanders, the homosocial ‘mates’ whose deepest emotional relationships are with each other, are contemptuous or even amused by the ‘unmanning’ of a victim through force or intoxication, placed outside their brotherhood and without the redemptive ending in for example The Shawshank Redemption (1994). 
The chapter suggests that the films offer a view of belonging, power and exclusion that is at odds with the celebration of difference in Priscilla, Queen of the Desert (1994) or Holding The Man (2015) and with adventures such as Deliverance (1972). If ‘mateship’ is a distinctively, although increasingly fictive, Australian value the films offer a dark view of complicity and violence within the sunburnt country, a land of sweeping plains, kangaroos and eyes wide shut to brutality. At a global level they tell us something interesting about anxieties at the heart of manhood and about the efficacy of law where victimisation excludes men from justice.
En route I caught up with 'Biohacking' by Ali K. Yetisen in (2018) 36(8) Trends in Biotechnology 744.

Yetisen comments
Biohacking is a do-it-yourself citizen science merging body modification with technology. The motivations of biohackers include cybernetic exploration, personal data acquisition, and advocating for privacy rights and open-source medicine. The emergence of a biohacking community has influenced discussions of cultural values, medical ethics, safety, and consent in transhumanist technology. 
Epidermal electronics, biosensors, and artificial intelligence have converged as healthcare technologies to monitor patients in point-of-care settings within the Internet of Things (IoT). These technologies have created a community of hobbyist software developers involved in the quantified-self movement. The self-experimentalist community is primarily interested in tracking their daily physical and biochemical activities to build a library of personal informatics in order to maintain a healthy lifestyle or improve body performance. The growing interest in this ‘tech-savvy’ community has motivated questioning the possibility of experimenting with implantable technologies. The emergence of implantables for biometric animal identification has encouraged self-experimentalists to chipify themselves in order to interact with computers in the IoT. Inspired by transhumanism, which advocates the enhancement of human body and intelligence by technology, the overlap between self-experimentation and medical implant domains has created a vision to modify the human body and document their experiences in social media for open-source medicine. 
The movement of biohacking began with a self-experimentation project (Cyborg 1.0, 1998) of Kevin Warwick, who implanted a radio frequency identification (RFID) tag in his arm in order to control electronic devices. In another experiment, a multielectrode array was implanted in Warwick’s arm to create a neural interface, which allowed controlling a robotic arm and establishing a telepathy system with another human implantee via the Internet. Self-experimentation with biomaterials has also been popularized with the performance art works of Stelarc, who had a scaffold implanted in his arm (Third Ear, 2007). The synergy of cybernetics, biopunk, and citizen science has led to the formation of a media-activist biohacking community. Figures in this transhumanist community include Amal Graafstra (tagger), Tim Cannon, Lepht Anonym, and Neil Harbisson. These technology activists, also known as grinders, implant chips in their bodies or have them implanted. Their primary motivations include human–electronic device communication, self-quantification, and cosmetic enhancement. Another overarching goal of this community is to increase scientific literacy as citizen scientists. The biohacking community is actively engaged in the development of off-the-shelf protocols at low cost, open access research, and collaboration by creating individual pursuit of inquiry. Biohackers document and share their protocols, equipment designs, and experiences on the Internet.
The article has a useful inventory of implants.

'The Security Implications of Synthetic Biology' by Gigi Gronvall, a more insightful piece in (2018) 60(4) Survival: Global Politics and Strategy 165-180, comments
Advances in synthetic biology hold great promise, but to minimise security threats, national and international regulation will need to keep pace. Consumers have grown accustomed to personalised products. There are T-shirts made to order, books printed on demand, music-streaming services that cater to individual tastes, personalised news feeds and lists of suggested apps. 
This trend towards personalisation has even been extended to biology: genetic information and biological techniques can now be used by individuals to meet their personal needs. Biological information, such as the number of steps one takes in a day, one's heart rate or one's genetic code, has become trackable, and can be compiled for individualised purposes. Biological laboratory techniques, once the sole purview of scientific professionals, are likewise becoming increasingly accessible to amateurs, yielding information such as what a person eats or where they live. The trend towards the personalisation of biology would not be possible without synthetic biology, a growing technical field that aims to make biology easier to engineer. Synthetic biology is widely seen as an exciting new branch of the life sciences, but can be difficult to define  One group of researchers has described synthetic biology as ‘a) the design and fabrication of biological components and systems that do not already exist in the natural world and b) the re-design and fabrication of existing biological systems’. Others define synthetic biology in terms of what the field aims to do: make biology easier to engineer. While bioengineering has been around for a while, synthetic biology is more powerful: it has been described as ‘genetic engineering on steroids’ by one of its founding practitioners. Synthetic-biology tools, such as CRISPR (clustered regularly interspaced short palindromic repeats) for gene editing, gene synthesis and gene drives, are being used in a wide range of life sciences.
Scientists working in synthetic biology envision a time when biological traits, functions and products may be programmed like a computer. While there is a great deal of research yet to be done to allow for this, the convergence of high-speed computing power, intense research interest and some early commercial successes during the last decade has spurred the growth of the field. Publications about synthetic biology have increased from 170 per year in 2000–05 to more than 1,200 per year in 2015. More than 700 research organisations in over 40 countries are undertaking work in the field.
One major outcome of this growth is that biology is becoming industrialised. While biological processes have long been used in industrial settings – for example, to produce some medicines and vaccines, as well as certain consumer products such as beer and wine – they are increasingly being exploited for manufacturing, replacing the use of petrochemicals and resource-intensive harvesting from nature. Synthetic biology is now used to alter the internal machinery of microbes so that they produce a variety of desired molecules, from biofuels to flavour compounds to pharmaceuticals.  This has expanded the biological footprint of a range of industries including fuel, agriculture, medicines and mining, and of products such as construction materials, perfumes, fibres and adhesives. The economic implications of synthetic biology are vast and growing: the global market was valued at $3.9 billion in 2016, and is anticipated to grow at an annual rate of 24.4% to reach over $11bn by 2021. McKinsey and Company has reported that the total economic impact of synthetic biology, including applications in energy, agriculture and chemicals, could reach $700bn to $1.6 trillion annually by 2025.
While clearly useful on an industrial scale, synthetic biology can also be useful to individuals. It can yield information that would never merit a traditional research grant from the National Institutes of Health (NIH) or the Wellcome Trust. In contrast to the research funded by agencies like these, which is intended to foster benefits at a societal level, personalisation allows for the acquisition of information and products that are immediately useful to particular individuals. Scientific advances and the democratisation of synthetic biology should bring about an exciting future, but will also lead to changes in national and international security, the governance of biological research, and safety. 
Do-it-yourself biology 
Synthetic biology has already produced one of the most promising developments in cancer treatments for years, known as chimeric antigen receptor T-cell therapy, or CAR-T therapies.  In this treatment, a patient's own T cells are altered in a laboratory so that they will attack cancer cells. The Food and Drug Administration (FDA) has approved two CAR-T therapies, one to treat children with acute lymphoblastic leukaemia and the other to treat adults with advanced lymphomas. The complete remission rate in a trial of 100 adults with refractory or relapsed large B-cell lymphoma was 51%. 
The trend towards the personalisation of biology is not limited to FDA-approved therapies, but is also in the hands of individuals curious about their own bodies. There is intense public interest in harvesting and making sense of personal biological information from health-monitoring devices.  Services like 23andMe and Ancestry.com provide clients with detailed genetic information, including clues – and sometimes surprises – about their ancestry. Their users can find out whether they potentially have a higher likelihood of developing breast cancer (as established by the presence of BRCA genes) or Parkinson's disease. 
PatientsLikeMe is another example of a service generating personalised health information. On this for-profit site, people who suffer from one or more of 2,800 listed conditions share their medical data and reactions to investigational drugs. The company claims that patients who use their service will learn more about their medications and conditions, make connections with others who share their illnesses, and ultimately ‘change the future of personalised health’.  The data provided to this site has led to original published research, and to the development of an easier way to enrol patients in clinical studies.  
Non-traditional research environments, including home- or community-based laboratories, are becoming more common, an approach that has been called DIY Bio (do-it-yourself biology), bio-hacking or citizen science. Community laboratories where bio enthusiasts can gather and work together, alongside many more DIY communities that lack laboratory space, have been established in New York, Boston, Seattle, San Francisco and Baltimore – as well as in Budapest, Manchester, Munich, Paris and Prague.  According to DIYBio.org, a charitable organisation formed with the mission of ‘establishing a vibrant, productive and safe community of DIY biologists’, there were 44 DIY Bio groups across the US and Canada, 31 in Europe, and 17 in Asia, South America and Oceania as of early June 2018.  These laboratories, which typically charge membership fees to purchase equipment, are dedicated to making science accessible and frequently offer educational programmes. 
The Baltimore Underground Science Space (BUGSS), for example, recently held a class for people aged ten and up to learn about bioluminescence in bacteria, during which a gadget was built that puffs air into bacterial cultures to make the bacteria glow.  Participants were directed to take a stool sample at home and to quickly inactivate it so that no living microbes were brought into the laboratory. At the lab, the participants attempted to use polymerase chain reaction (PCR) to amplify the DNA of the microbes so as to identify them. Participants could also compare samples taken before and after embarking on a diet, or of two different people. 
In the hands of amateurs, straightforward ‘DNA-barcoding’ techniques can be used to determine whether purchased sushi is actually made from the species advertised.  Other techniques can be used to detect the presence of melamine, a poison, in baby formula.  The ease of use offered by such technologies has inspired new biological services as well. For instance, apartment-complex owners have required stool samples from tenants’ pets to genetically identify them, for the purpose of identifying and deterring those who do not pick up after them.
The pipeline for non-traditional biological exploration is expanding, thanks to iGEM, the International Genetically Engineered Machine competition. iGEM began more than a decade ago as a class offered at the Massachusetts Institute of Technology (MIT) in Cambridge, MA, that was modelled on robotics competitions intended to draw students into engineering fields.  In iGEM competitions, teams comprising undergraduates from around the world are given a kit of standard biological parts called BioBricks. Over a summer, and with the help of instructors, the teams use the parts and others they create to engineer biological systems and operate them within living cells. The competition has grown from involving fewer than two dozen undergraduates in its early years to drawing more than 6,000 undergraduates, high-school students, DIY Bio practitioners and ‘overgrads’ per year from more than 40 countries, with 30,000 alumni having already participated. Many of the projects aim to tackle real-world problems and to develop solutions that can be used in low-resource settings, such as a bacteria-produced blood substitute that may be stored for long periods. 
As people acquire more biological information about their environment, they will increasingly have the opportunity to make more personalised and biologically informed choices to improve their health, pursue new hobbies and even care for new types of pets. While these are positive outcomes, there is also the potential for negative outcomes, given the possibility that synthetic biology could be misused to cause deliberate harm. There will also be many new opportunities for quackery and dangerous self-experimentation that could spread via social media and thus become a contagious phenomenon. Biological safety practices will be challenged, and there could be some unwelcome surprises.

16 March 2018

Chipper

No great surprises in the report that Meow-Ludo Disco Gamma Meow-Meow has been unsuccessful after brouhaha over his bodyhacking of a Transport for NSW (TfNSW) travel card.

Mr Meow-Meow was noted here, here and in a piece for The Conversation.

TfNSW had taken action against him for using public transport without a valid ticket and for failing to produce a ticket to transport officers.

Despite hyperbole about 'cyborg rights' (does everyone with a stent, a pacemaker or joint implant count as a cyborg?), he today pleaded guilty to both offences at Newtown Local Court.

The ABC reports that  Mr Meow-Meow
was fined $220 for breaching the Opal Card terms of use and was ordered to pay $1,000 in legal costs. 
The lawyer representing Mr Meow Meow argued that transport legislation had advanced to include methods of contactless payment through MasterCard and some smart phones. He said that the law should adapt to all available technologies including implantable tech. 
But Magistrate Michael Quinn said, while the legislation may catch up with technology in the future, the law of the day must be followed. 
Outside court, Mr Meow Meow said he was disappointed both offences were not dismissed and that he was ordered to pay legal costs. 
Despite the decision, Mr Meow Meow said he would continue to experiment with implanted technology. He said he was planning to push the boundary even further, replacing his Opal chip with one that will hold all of his personal information, including credit cards and memberships. 
DIY unauthorised modification of credit cards and membership cards will breach the terms and conditions of his accounts with the card providers, so he can expect to see those businesses restricting or cancelling the relevant accounts.

17 February 2018

Biohacking and travel cards

Given that Meow-Ludo Disco Gamma Meow-Meow - noted last year - is in the news again it was timely to read 'DIY Bio: Hacking Life in Biotech’s Backyard' by Lisa C. Ikemoto in (2017) 51 University of California Davis Law Review 539.

The peripatetic Meow-Meow - recurrent political candidate, cyborg advocate and biohacking enthusiast - has unsurprisingly had his OPAL near-field transit card cancelled after he extracted the chip for subcutaneous insertion. He appears to consider that the resulting litigation - contesting a $200 fine in 2017 for riding the train without a valid ticket and reportedly planning to launch legal action against TfNSW for unlawfully cancelling his cards - will advance cyborg rights.

Australian law does not recognise 'cyborgs' as such and his action would appear to be readily addressed under the terms and conditions for use of his card.

In the Australian Capital Territory, Regulation 49 of the Road Transport (Public Passenger Services) Regulation 2002 (ACT) prohibits travelling on an ACT government bus using a ticket that has been 'damaged or defaced in a material respect' or 'changed in a material particular', with 'ticket' including a card with a chip or magnetic strip.

In NSW, use of the Opal travel card is governed by the Passenger Transport (General) Regulation 2017 (NSW). The cards 'are and remain' the property of Transport for NSW, which may 'inspect, de-activate or take possession of an Opal Card or require its return at our discretion without notice at any time'.

Users are required to 'take proper care of the Opal Card, avoid damaging it, keep it flat and not bend or pierce it' and - saliently - 'not misuse, deface, alter, tamper with or deliberately damage or destroy the Opal Card'. Further, the user must not 'alter, remove or replace any notices (other than the activation sticker), trademarks or artwork on the Opal Card'. Additionally, they must not 'modify, adapt, translate, disassemble, decompile, reverse engineer, create derivative works of, copy or read, obtain or attempt to discover by any means, any (i) encrypted software or encrypted data contained on an Opal Card; or (ii) other software or data forming part of the Opal Ticketing System'.

Meow-Meow gained attention several years ago regarding 'biohacking' (centred on a DIY community DNA-modification lab) rather than 'bodyhacking'.

Ikemoto comments
DIY biologists set up home labs in garages, spare bedrooms, or use community lab spaces. They play with plasmids, yeast, and tools like CRISPR-cas9. Media stories feature glow-in-the-dark plants, beer, and even puppies. DIY bio describes itself as a loosely formed community of individualists, working separate and apart from institutional science. This Essay challenges that claim, arguing that institutional science has fostered DIY bio and that DIY bio has, thus far, tacitly conformed to institutional science values and norms. Lack of a robust ethos leaves DIY bio ripe for capture by biotech. Yet, this Essay suggests, DIY bio could serve as a laboratory for reformulating a relationship between science and society that is less about capital accumulation and more about knowledge creation premised on participation and justice.
 She goes on
Popular media depicts biohackers or Do-It-Yourself (“DIY”) biologists as the ultimate science geeks. “DIY bio” refers to noninstitutional science or science performed outside of professional laboratories.  DIY biologists set up home labs in garages, spare bedrooms, and closets or use community lab spaces. The people doing DIY bio range from the self-taught to PhDs. Instead of building computers or creating apps, DIYers play with plasmids, jellyfish, yeast, and polymerase chain reaction in genetic engineering experiments. Media stories and DIY bio websites often feature glow-in-the-dark plants, food, petri dish art, and even puppies.
DIY bio is an emerging set of activities. A range of players, with varied ideologies, are shaping DIY bio’s trajectories. DIY bio’s signature claim is that it exists apart from, and even in opposition to, institutional science. This Essay challenges that claim. Whether all DIY biologists know this or not, DIY bio serves the interests of institutional science and is well-situated for capture by biotechnology. Biotechnology refers not only to the life sciences-based industry, but also to the neoliberal epistemology that values the use of applied science to commercialize the transformation of life itself into technology. DIY bio’s origin stories do reflect resistance to the highly structured and bureaucratic nature of institutional science. Yet these accounts also indicate interest convergence between DIY bio and institutional science. Accounts that forecast DIY bio’s future show DIY bio conforming its practices to mainstream law, policy, and market concerns. Thus far, DIY bio has not crafted its own account of the relationship between science, society, and ethics, and is falling into a science-as-usual practice that situates DIY bio in biotech’s backyard.
Part II sets out a descriptive account of biohacking, and DIY bio, in particular. Part III identifies three overlapping explanations for DIY bio. The first two, explicitly political accounts and nostalgic accounts, are largely consistent with the DIY bio claim that DIY bio is different and apart from institutional science. The third account borrows from Frederick Jackson Turner’s frontier thesis and asserts that DIY bio sustains an ideology of bio-individualism embedded in biotechnology. Part IV reviews and critiques law and policy views of DIY bio and its prospects. These views apply the frames and standards applicable to biotech. Part V makes the case for biotech’s annexation of DIY bio. Part V elaborates on DIY bio’s failure, so far, to re-define the relationship between science and society, and suggests a few initial critical points of engagement for doing so.
She suggests that
As yet, DIY bio has not expressed a commitment to ethical science activity, nor developed a robust ethos. Perhaps, its tacit acceptance of the risk-benefit framework means that its view of ethics aligns with that of institutional science. That is, it conflates a risk-benefit weighing with ethical standards or views ethics as a compliance obligation.
The risk calculus is not devoid of ethical concerns. It maps onto a standard ethical test used in institutional science. The test highlights three criteria — safety, efficacy, and autonomy. That test derives from the Belmont Report’s principlist framework, the FDA’s drug and device approval standards, and neoliberalism’s effects on the life sciences and autonomy. The Belmont Report states four principles — autonomy, beneficence, non-maleficence, and distributive justice. Autonomy’s application is informed consent. The non-maleficence principle is addressed by weighing risk to human health against benefits. Benefits refer to efficacy or improvements to human health. The FDA uses safety and efficacy as its criteria in the drug and device testing requirements for market approval. Efficacy, like safety or risk to human health, is narrowly defined. The FDA requires that the product work, but does not require that it work well or better than existing therapeutics. Market thinking has infiltrated these criteria. Claims that individual choice should trump agency standards in determining access to drugs have gained credence. This indicates that traditional bioethics’ first principle, autonomy, may now be understood as a form of free market individualism. In addition, the pharmaceutical industry has leveraged that version of autonomy to maximize the role of drugs in medical care, and the sale of particular products. While big bio’s risk calculus is not the end-all and be-all of ethics in institutional science, it is part of an impoverished ethical framework.
In 2011, the North American and European DIYbio Congresses issued Draft Codes of Ethics. The codes incorporate principles of open science — open access, transparency, and education; and self-regulation — safety (adopt safe practices), environment (respect the environment), and peaceful purposes (biotechnology should only be used for peaceful purposes). As discussed, the North American Code has one more element — Tinkering. The Code elements are general. As my characterization suggests, the Code elements, like the Belmont Report principles, lend themselves to narrow or broad readings. Read more generously, safety, environment, and peaceful purposes might move DIY bio beyond the issue of forestalling regulation to situating science as a tool for social justice. On the other hand, open access could be read as a right to access, premised on free market individualism. Tinkering invokes the individual, as the nostalgic accounts show. If DIY bio is first and foremost an individualist vision of science, it stands little chance of evolving into a new understanding of science.
The open science principles suggest that DIY bio’s ethos differs from big bio’s, and that DIY bio is not bound by big bio’s norms. Yet, open science goals do not translate to an ethics of science. Open science can be used for different goals, including forms of commercial distribution that are exploitative. In addition, the Code states the elements as universal principles, which in itself is problematic. Typically, dominant readings of so-called universal principles are used to maintain boundaries, and identify the out-group as non-compliant. It is very possible that the universal principles may be used to undercut the inclusive goals that open science asserts.
My comments in the previous subparts suggest, without prescriptive detail, the possibility of using DIY bio to redefine the possible relationship between science and society. Contemporary accounts indicate that DIY bio projects are typically small-scale and are relatively unsophisticated. As such, DIY bio seems underpowered as a platform for re-thinking the political economy of the life sciences. What I suggest here is not that DIY biologists directly challenge or redesign institutional science. Rather, DIY bio might provide an opportunity to create, by deliberate experimentation, a set of practices that are ethos-based and originate from critical social inquiry. The most valorized explanatory accounts speak, in bits and pieces, of social justice goals. Using these as a starting point, DIY bio might craft ways of doing science that embed justice-based ethics into inquiry and practice. Ethics, then, could become not a compliance checklist, but constitutive of good science.
Ikemoto concludes
 DIY bio is many things to many people. That is, undoubtedly, part of its appeal. What is it not, however, is separate and apart from institutional science. Its location in biotech’s backyard, without a fence or substantive alternative vision of DIY bio’s role, makes it vulnerable to annexation. In that scenario, DIY bio and its dream of a new science by the people might disappear. This Essay maps the relationships between DIY bio and institutional science. The mapping also critiques aspects of biotechnology that are inconsistent with DIY bio’s stated goals of access and participatory knowledge formation. If DIY bio takes those goals seriously, this Essay suggests that it move beyond compliance-based thinking, and beyond experimentation using plasmids and pipettes. Acknowledging that science is a social practice, followed by scientific-social inquiry about how and why we engage with plasmids and pipettes, and willingness to experiment with new social methods of doing science, might move DIY bio out of biotech’s backyard, and into society.

27 June 2017

Cats Pyjamas

Slow news day? The ABC features an item on biopunk Mr Meow-Ludo Disco Gamma Meow-Meow (formerly Stuart McKellar), under the heading 'Sydney man has Opal card implanted into hand to make catching public transport easier'.

The item states
If you have ever been caught fumbling for your Opal card at the ticket gate, a Sydney man may have found the solution. He had the chip from an Opal card inserted into his hand and is now tapping on using the technology that is implanted underneath his skin. 
Bio-hacker Meow-Ludo Disco Gamma Meow-Meow, his legal name, had the Opal near-field communication (NFC) chip cut down and encased in bio-compatible plastic, measuring 1 millimetre by 6 millimetres. He then had the device implanted just beneath the skin on the side of his left hand. 
"It gives me an ability that not everyone else has, so if someone stole my wallet I could still get home," he said. He is able to use the Opal just like other users, including topping the card up on his smartphone. However, his hand needs to be about 1 centimetre from the reader, closer than traditional cards, and he sometimes needs to tap more than once, due to his device's smaller antenna.
"My goal is to have frictionless interaction with technology," he said.
Mr Meow-Meow had his device implanted by a piercing expert, in a procedure lasting approximately one hour. He warned others not to do the same without expertise and research. "Most certainly don't try this at home unless you know what you're doing," he said. 
Mr Meow-Meow said there was a risk of bacterial infection whenever anything was implanted beneath the skin, so it was important to consult professionals. "Be aware of the risks involved and make a wise judgement based on that." 
He also said his actions were a breach of Opal's terms of service, which prohibit tampering. "It will be really interesting to see what happens when the first transit officer scans my arm," he said.
The officer might be more impressed by Mr Meow-Meow's given and surnames, which gained some attention when he stood for parliament.

Last year Bloomberg reported
If your name is Meow Meow, there’s a decent chance you’re an unusual dude. This holds true for Meow-Ludo Disco Gamma Meow-Meow, a polyamorous, trans-humanist bio-hacker in Sydney. In 2014, Meow Meow opened Australia’s first do-it-yourself bio-hacking lab, in which anyone could pay a membership fee to experiment with DNA and make whatever creatures they could imagine.
For people familiar with the VeriChip controversy there is more bite in 'Towards insertables: Devices inside the human body' by Kayla Heffernan, Frank Vetere and Shanton Chang in (2017) 22(3) First Monday or 'The security implications of VeriChip cloning' by John Halamka, Ari Juels, Adam Stubblefield and Jonathan Westhues in (2006) 13(6) Journal of the American Medical Informatics Association 601-607.

'Ethical Implications of Implantable Radiofrequency Identification (RFID) Tags in Humans' by Kenneth Foster and Jan Jaeger in (2008) 8(8) The American Journal of Bioethics comments on
two areas of present ethical concern that are distinctive to implanted RFID chips, and in particular the VeriChip. 
Disclosure of Risks
A central ethical principle holds that individuals have a right to know about possible adverse effects of a treatment, in this case implantation of a chip. Should VeriChip have disclosed the results of the rodent studies before anti-chip activists raised this issue? A finding of carcinogenic effect of an implant in rodents is, at least, suggestive of the possibility of a similar effect in humans. Predictably, the issue has assumed major importance to VeriChip, which saw a large drop in its stock price following media reports of this issue. The company commissioned a consultant to write an article for its website that downplayed risks to humans. While regulatory agencies might not give much weight to indications of foreign-body induced tumorigenesis in rodents, there is clearly a diversity of opinion among experts. “I think the evidence from the animal studies is indeed alarming,” one prominent cancer researcher told one of the present authors “and one should refrain from chipping people unless the mechanisms and long-term effects are known.” (A. Lerchl, Jacobs University Bremen [Bremen, Germany], personal communication [e-mail] to K. R. Foster October 16, 2007). Should the possibility of cancer be added to the rather long list of potential adverse effects provided by the FDA, most of which are seemingly highly unlikely?
Truth in Advertising
VeriChip markets the VeriMed system for identification of patients who might present to emergency rooms incapable of communicating their identity to caregivers. Its promotional literature lists a wide variety of conditions, which, the company believes, would justify the cost of implantation of a chip and subscription to its medical database.
However, we know of no studies showing that being chipped gives a better outcome at the emergency room or otherwise improves public health in comparison with simpler and noninvasive technologies, such as medical alert bracelets, USB drives with personal health information, identification cards in wallets, fingerprint scanners, biometric identification, for example. An independent assessment of the risks and benefits of the use of implanted RFID tags in humans for medical identification purposes is badly needed, if only as a consumer protection measure to help consumers make informed decisions whether to buy into the system. For most individuals, we suspect, chipping would be a poor investment with slight prospects of resulting in a better outcome in a health crisis given other options available to the patient.
So far, only preliminary studies are underway which address this issue. A pilot project using this system was announced in June 2006 by VeriChip Corporation with Hackensack University Medical Center (Hackensack, NJ), a large provider of medical services in the state, and Horizon Blue Cross Blue Shield of New Jersey (Newark, NJ). A larger test, with 200 Alzheimer’s patients, was announced in February 2007 by VeriChip Corporation and Alzheimer’s Community Care, Inc (West Palm Beach, FL).
From the brief descriptions of these studies, it is not clear whether they are designed to assess benefits of the technology to individuals or to the healthcare providers. To make an informed choice, the consumer needs to know the likelihood that being chipped will result in a better outcome in a health emergency than with other identification technologies. A well-designed study to examine that endpoint would have to be far larger than either of the two studies mentioned previously. The pilot studies may be better suited to demonstrate the benefits to the healthcare system in accessing patient insurance and health records data, which is a different matter entirely. (The second of these studies raises issues of obtaining consent from Alzheimer’s patients, a thorny bioethical issue in itself).
Given the uncertainties about the safety of implanted RFID chips, and uncertainties in the benefits that they may bring, caution is warranted. We agree with the caution reflected in a recent report of the Council on Ethical and Judicial Affairs of the American Medical Association on the technology (Sade 2007):
Radio frequency identification (RFID) devices may help to identify patients, thereby improving the safety and efficiency of patient care, and may be used to enable secure access to patient clinical information. However, their efficacy and security have not been established. Therefore, physicians implanting such devices should take certain precautions: 1) The informed consent process must include disclosure of medical uncertainties associated with these devices. 2) Physicians should strive to protect patients’ privacy by storing confidential information only on RFID devices with informational security similar to that required of medical records. 3) Physicians should support research into the safety, efficacy, and potential non-medical uses of RFID devices in human beings.
Coercion
If receiving an RFID tag were purely a matter of consumer choice, few serious ethical issues would arise apart from generic concerns about consumer protection. Thus, for example, a consumer might reasonably choose to be chipped — preferably not in a tattoo parlor — to avoid having to carry a credit card or RFID tag on a key chain.
By far the most important and distinctive ethical issues connected with implanted RFID transponders result from the very real possibility that the chips might be implanted under real or implied coercion, coupled with the deep aversion — or at least unease — with which many individuals view the technology.
Despite extensive, and at times hyperbolic, discussion of the uses of implanted RFID chips in humans to be found on the Internet, few systematic studies have been reported on the acceptability of implanted RFID chips to average people. A small survey in 2003 (Hiltz et al. 2003) found that 18 of 23 people questioned objected to the idea of implantable chips. “If they are putting something inside of you,” one respondent replied, “it’s like you’re changing yourself. It’s not right” (Hiltz et al. 2003, 7).
People from different cultures will certainly differ in their acceptability of implanted RFID chips. In some cultures, altering the bodily image may ostracize individuals from their sociocultural networks. In the United States, some fundamentalist Christian groups vehemently object to implanted RFID tags as “marks of the beast.” Both Judaism and Islam prohibit tattoos, and their religious authorities may forbid implanted RFID tags for similar reasons. Other cultural and religious factors in acceptability of the technology have hardly been explored in discussions to date about implanting RFID chips in people for identification.
In view of widespread popular apprehension about the technology, proposals to “chip” individuals would raise extremely serious ethical issues if an element of coercion were involved, either direct or tacit. This can easily come about if RFID tags were to become widely adopted for access control or identification in nonmedical settings.
Indeed, a variety of proposals have been floated in public discussions that would involve coercive implantation of RFID chips, some on face value highly impractical. In March 2006, a columnist for The New Republic Online defended a proposal to implant RFID tags in sex offenders (Cottle 2006), pointing out that such people are already subject to extensive restrictions, and that tracking individuals through implanted RFID chips might be preferable to present practices, for example, residency restrictions based on Megan’s Law legislation that in some jurisdictions force convicted sex offenders to sleep under bridges or in their vehicles. However, the proposal raises obvious objections on practical grounds. Must every entrance to every school be equipped with an RFID reader to detect chipped individuals? Would it not be easy for a chipped individual to conceal the transponder from the reader? A more practical way to implement the plan would be to chip the teachers instead, and use the RFID readers to provide positive identification when they enter a school. We suspect that teachers’ unions would fiercely oppose such a plan.
Far more troubling (and thankfully very far from reality) is the proposal by Silverman (VeriChip’s Chairman of the Board) to “chip” guest workers entering the United States. One might argue that receiving implants would be voluntary for such individuals. But which immigrant, facing poverty at home and the prospects of a job in a new country, would be in a position to argue with demands to have a chip implanted as a condition of entry into the country? Would college professors or bioethicists headed to the United States for a brief sabbatical or training be chipped as well as agricultural workers? If not, who would decide, and on what basis? If being chipped becomes a requirement for work by a noncitizen in the United States, what impact would there be on the global labor market? The prospects of being chipped will surely be a strong deterrent to others from coming here to work and learn.
Forcing immigrants to be chipped is deeply offensive on human rights grounds. It would frame the RFID chip as a branding device similar in theory to the brand of the western cowboy on cattle or to the tattoo of an inmate in a Nazi concentration camp. Arguably, it is a violation of Article 3 of the Universal Declaration of Human Rights (1948), which guarantees everybody the right to “life, liberty and security of person.” To the extent that forced implantation of a RFID chip in a person’s body is a violation of his/her privacy, it would also violate the privacy provision of the International Covenant on Civil and Political Rights (1966), to which the US is a party.
While implantable RFID technology is presently being marketed as a measure for patient protection, its chief benefit — convenient and reliable identification of an individual by means of a device that is difficult for the subject to lose — might well be more significant to organizations than to individuals, and the issue is intrinsically more complicated than one of consumer choice alone. In institutions that have adopted the use of implanted RFID tags for identification purposes, pressures will inevitably build on individuals to receive the tags. Suppose, for example, healthcare organizations with electronic records systems gave their patients a choice between maintaining possession of an identification card or receiving a chip? Would elderly, forgetful patients be pressured to receive a chip? What about a soldier in an army that decided to replace dog tags with implanted chips? Are these individuals less vulnerable to coercion to receive a chip than the hapless immigrants considered in Silverman’s proposal? Other technologies, such as fingerprint identification or retinal scans, allow reliable identification of individuals without the need to compromise bodily integrity.
Faced with widespread public concerns about coercive implantable RFID chips, several states have passed legislation regulating their use. In May 2006, for example, Wisconsin passed a bill (Assembly Bill 290) that would prohibit requiring anybody to have a microchip implanted. North Dakota and California have also passed similar bills. Enforcing such laws might be difficult if implanted chips, like drivers’ licenses, remain legally voluntary but become de facto requirements for many kinds of employment, voting, or receipt of health care.
Because of concerns discussed previously, a national discussion is needed about the use of implanted RFID chips among the many groups potentially affected by the technology. Decisions about the use of the technology need to be made by a broader group of stakeholders than the engineers and companies involved in the field. A commitment must be made to restrict the technology to people who freely choose to be implanted, and to shield other individuals from real or implied coercion. As Anderson and Labay remarked (2006), a “decision about where to draw the line of acceptable use must be made soon, before the technology becomes rampant and it becomes too late to prevent misuse.” Or, in more specific terms, we have already implanted RFID tags in our dogs and cats. Is Aunt Millie next?