'When digital health meets digital capitalism, how many common goods are at stake?' by Tamar Sharon (2018), Big Data and Society, 1–12 (comments)
In recent years, all major consumer technology corporations have moved into the domain of health research. This
‘Googlization of health research’ (‘GHR’) begs the question of how the common good will be served in this research. As
critical data scholars contend, such phenomena must be situated within the political economy of digital capitalism in
order to foreground the question of public interest and the common good. Here, trends like GHR are framed within a
double, incommensurable logic, where private gain and economic value are pitted against public good and societal value.
While helpful for highlighting the exploitative potential of digital capitalism, this framing is limiting, insofar as it acknowledges only one conception of the common good. This article uses the analytical framework of modes of justification developed by Boltanski and Thévenot to identify a plurality of orders of worth and conceptualizations of the common
good at work in GHR. Not just the ‘civic’ (doing good for society) and ‘market’ (enhancing wealth creation) orders, but
also an ‘industrial’ (increasing efficiency), a ‘project’ (innovation and experimentation), and what I call a ‘vitalist’ (proliferating life) order. Using promotional material of GHR initiatives and preliminary interviews with participants in GHR projects, I ask what moral orientations guide different actors in GHR. Engaging seriously with these different conceptions
of the common good is paramount. First, in order to critically evaluate them and explicate what is at stake in the move
towards GHR, and ultimately, in order to develop viable governance solutions that ensure strong ‘civic’ components.
Sharon argues:
In the last few years, every major consumer technology
corporation, from Google to Apple, to Facebook,
Amazon, Microsoft and IBM, has moved decisively
into the health and biomedical sector. These are companies that, for the most part, have had little interest in
health in the past, but that by virtue of their data expertise and the large amounts of data they already have
access to, are becoming important facilitators, if not initiators, of data-driven health research and healthcare.
This ‘Googlization of health research’ (GHR), as I
have called this process elsewhere (Sharon, 2016),
promises to advance health research by providing the
technological means for collecting, managing and
analysing the vast and heterogeneous types of data
required for data-intensive personalized and precision
medicine. Apple’s ResearchKit software, for example,
which turns the iPhone into a platform for conducting
medical studies, allows researchers to access diverse
types of data (sleeping patterns, food consumption,
gait), to recruit larger numbers of participants than
average in clinical trials, and to monitor participants
in real time (Savage, 2015). Similarly, the new analytics
techniques and data repositories offered by consumer
technology companies seek to overcome limitations in
traditional medical analytics methods and infrastructure. DeepMind, for example, Google’s London-based
artificial intelligence offshoot, is applying deep learning
for the prediction of cardiovascular risk, eye disease,
breast cancer and patient outcomes, in collaboration
with several hospitals (Poplin et al., 2018; Ram,
2018). Verily, Alphabet’s life science branch, is developing new tools to capture and organize unstructured
health data, for example in its ‘Project Baseline’ in partnership with Stanford and Duke University. The study
will collect and analyse a wide range of genetic, clinical
and lifestyle data on 10,000 healthy volunteers, with the
aim of comprehensively ‘mapping human health’
(Verily, 2018). Google, Microsoft, Amazon and IBM
have also begun packaging their clouds as centralized
genomic databases where researchers can store and run
queries on genomic data.
Many of these techniques still have not delivered on
their promises, all the while introducing a host of new
challenges and limitations, such as new selection and
other types of biases (Agniel et al., 2018; Hemkens
et al., 2016; Jardine et al., 2015). Yet their potential, if
not over-hyped, remains promising (Fogel et al., 2018),
and places these corporations in a privileged position in
the move towards personalized medicine and Big Data
analytics – and broader healthcare vistas. Indeed, most
recently a number of these companies have begun
moving into the domains of electronic health record
management, employee healthcare and health insurance
(Farr, 2017; Farr, 2018; Wingfield et al., 2018).
Beyond these promises, GHR also raises a number of
challenges and risks. First amongst these are concerns of
privacy and informed consent. GHR is an instance of
data-intensive research characterized by the use of large
digital datasets and Big Data analytics, where traditional mechanisms put in place to protect research participants are increasingly under strain. These issues may
be exacerbated in situations where consumer technology
companies, whose data-sharing practices often are not
subject to the same privacy-protecting regulations and
codes of conduct as those of medical researchers, are
involved (Zang et al., 2015). The potential for ‘context
transgressions’ (Nissenbaum, 2010), whereby data may
flow between medical, social and commercial contexts
governed by different privacy norms, is greater here.
Furthermore, broader questions about the value of personal health data and publicly generated datasets, and
what market advantage is conferred to commercial entities who can access them and develop treatments and
services based on this access, will emerge. In other
words, in GHR initiatives, concerns that are common
in the practices of digital capitalism are imported into
the health realm (Sharon, 2016).
A recent controversy surrounding a data sharing
partnership between Google DeepMind and the NHS
illustrates how some of these issues are already playing
out. Announced in 2016, the collaboration between
DeepMind and the Royal Free London, an NHS
Foundation Trust, granted DeepMind access to identifiable information on 1.6 million of its patients in order
to develop an app to help medical professionals identify
patients at risk of acute kidney injury (AKI). The terms
of this agreement have been analysed in depth by
Powles and Hodson (2017, 2018), who argue that it
lacked transparency and suffered from an inadequate
legal and ethical basis. Indeed, following an investigation, the Information Commissioner’s Office (ICO,
2017) ruled that this transfer of data and its use for
testing the app breached data protection law. In particular,
patients were not aware that their data was being
used. Under UK common law, patient data can be used
without consent if it is for the treatment of the patient,
a principle known as ‘direct care’, which the Trust
invoked in its defence. But as critics argue, insofar as
only a small minority of the patients whose data was
transferred to DeepMind had ever been tested or treated for AKI, appealing to direct care could not justify
the breadth of the data transfer.
Of course, GHR collaborations taking place in different jurisdictions will be provided with different
opportunities and face different legal challenges. And
despite the global profile of the corporations in question, national and regional guidelines for the management of AI and Big Data in health will impact what
GHR collaborations can and cannot do. But the
DeepMind case also raises questions beyond data protection, privacy and informed consent, which have to
do with the newfound role that tech corporations will
play in health research and healthcare, and new power
asymmetries between corporations, public health institutions and citizens that may ensue. For example, will
these corporations become the gatekeepers of valuable
health datasets? What new biases may be introduced
into research using technologies, such as iPhones, that
only certain socio-economic segments of the population
use? What role will these companies, already dominant
in other important domains of our lives, begin to play
in setting healthcare agendas? These are questions that
concern collective and societal benefit – broadly speaking, the common good. They point to the need to
situate the analysis of GHR in the wider context of the
political economy of data sharing and use, and they
foreground a number of concerns that move beyond
(just) privacy and informed consent, including social
justice, accountability, democratic control and the
public interest.
These values are the focus of the growing body of
literature in critical data studies that draws on a political economy critique to address the development of
new power asymmetries and discriminations emerging
in Big Data infrastructures (Taylor, 2017; van Dijck,
2014; Zuboff, 2015). In this context, new Big Data divides can be expected based on access to and ownership
of data, technological infrastructures and technical
expertise, with important repercussions for who
shapes the future of (health) research (boyd and
Crawford, 2012). However, by focusing on the new
power asymmetries emerging between data subjects
and corporations, critical data studies tend to frame
data sharing in terms of two incommensurable logics:
public benefit and private, corporate gain. In this article, I argue that this dichotomy is limiting, insofar as it
only allows for one vision of the common good, while a
plurality of conceptualizations of the common good are
at work in GHR. In the following, I use the interpretive
framework of economies of worth developed by the
sociologists Luc Boltanski and Laurent Thévenot
(2006 [1991]) to identify a number of moral repertoires
that each draw upon different conceptualizations of the
common good and that are mobilized by actors in
GHR-type initiatives. Doing so depicts a much richer
ethical terrain of GHR than is accounted for in most
critical analyses of digital capitalism.
This is valuable for several reasons. First, it is paramount that the moral orientations of actors in GHR be
taken seriously, insofar as they influence and guide
decision-making processes that are currently taking
place. Here I draw on the constructivist tradition that
views the discourses, repertoires and logics that convey
moral orientations as performative; as contributing to
the enactment of technological futures (Foucault, 1965;
Latour and Woolgar, 1979). Critical research on GHR
must engage with these competing moral orientations
and conceptualizations of the common good. Second,
this type of mapping is a necessary first step towards
critically evaluating different moral repertoires, insofar
as it contributes to rendering explicit the trade-offs that
will be involved in the enactment of different repertoires. In the current situation, where no comprehensive
ethical and policy guidance for GHR exists, this is
required if we are to have serious public deliberation
about what is at stake in the move towards GHR.
Finally, while Boltanski and Thévenot’s framework
was developed as a descriptive project, I argue that it
can be used to help develop normative guidelines for
governance of GHR-type projects, and that this should
be further developed into a research programme. Here,
solutions can be thought of as combinations of repertoires, where different repertoires can check and balance each other. Such solutions will have a good
chance of adoption insofar as they will appeal to a
wide range of actors. Further, if what Boltanski and
Thévenot call the ‘civic’ order of worth embodies the
most publicly legitimate conception of the common
good, we can design solutions that ensure the presence
of strong civic components. For this, however, the civic
repertoire must be ‘updated’, so to speak: it must first
engage seriously with competing conceptions of the
common good that are mobilized in the empirical reality of GHR. The article thus seeks to map and analyse
the different orders of worth invoked by actors involved
in GHR as a first step towards this endeavour.