26 April 2019

Pragmatism

In my doctoral dissertation I noted criticisms of legal pragmatism, variously damned as merely ‘freedom from theory-guilt’, ‘unprincipled opportunism’, an apology for the status quo, the sophistry of nihilists, ‘mushy’, a ‘dog’s dinner’, the legal equivalent of MacIntyre’s ‘general theory of holes’, ‘a series of slogans providing cover for a flourishing philosophy-made-easy school of legal theory’ or an ‘anti-theory’ that atomises legal constructs such as identity by emphasizing that each subject is unique and can only be understood in relation to that subject’s time and context.

The spirited but, for me, grossly unpersuasive 'Posner's Folly: The End of Legal Pragmatism and Coercion's Clarity' by Joseph D'Agostino in (2018) 7 British Journal of American Legal Studies 365 comments
 Highly influential legal scholar and judge Richard Posner, newly retired from the bench, believes that law is irrelevant to most of his judicial decisions as well as to most constitutional decisions of the U.S. Supreme Court. His recent high-profile repudiation of the rule of law, made in statements for the general public, was consistent with what he and others have been saying to legal audiences for decades. Legal pragmatism has reached its end in abandoning all the restraints of law. Posner-endorsed “epistemological democracy” obscures a discretion that is much worse than the rule of law promoted by epistemological authoritarianism. I argue that a focus on conceptual essentialism and on the recognition of coercive intent as essential to the concept of law, both currently unpopular among legal theorists and many jurists, can clarify legal understandings and serve as starting points for the restoration of the rule of law. A much more precise, scientific approach to legal concepts is required in order to best ensure the rational and moral legitimacy of law and to combat eroding public confidence in political and legal institutions, especially in an increasingly diverse society. The rational regulation by some (lawmakers) of the real-world actions of others (ordinary citizens) requires that core or central instances of concepts have essential elements rather than be “democratic.” Although legal pragmatism has failed just as liberal theory generally has failed, the pragmatic value of different conceptual approaches is, in fact, the best measure of their worth. Without essentialism in concept formation and an emphasis on coercion, the abilities to understand and communicate effectively about the practical legal world are impaired. Non-essentialism grants too much unwarranted discretion to judges and other legal authorities, and thus undermines the rule of law. Non-essentialist or anti-essentialist conceptual approaches allow legal concepts to take on characteristics appropriate to religious and literary concepts, which leads to vague and self-contradictory legal concepts that incoherently and deceptively absorb disparate elements that are best kept independent in order to maximize law’s rationality and moral legitimacy. When made essentialist, the concept of political positive law shrinks, clarifies, and reveals its true features, including the physically-coercive nature of all laws and the valuable method of tracing the content of law by following its coercive intents and effects.

Carp

'Bullshitters. Who Are They and What Do We Know about Their Lives?' (IZA Institute of Labor Economics, 2019) by John Jerrim, Phil Parker and Nikki Shure comments
‘Bullshitters’ are individuals who claim knowledge or expertise in an area where they actually have little experience or skill. Despite this being a well-known and widespread social phenomenon, relatively few large-scale empirical studies have been conducted into this issue. This paper attempts to fill this gap in the literature by examining teenagers’ propensity to claim expertise in three mathematics constructs that do not really exist. Using Programme for International Student Assessment (PISA) data from nine Anglophone countries and over 40,000 young people, we find substantial differences in young people’s tendency to bullshit across countries, genders and socio-economic groups. Bullshitters are also found to exhibit high levels of overconfidence and believe they work hard, persevere at tasks, and are popular amongst their peers. Together this provides important new insight into who bullshitters are and the type of survey responses that they provide. 
The authors state
In his seminal essay-turned-book On Bullshit, Frankfurt (2005) defines and discusses the seemingly omnipresent cultural phenomenon of bullshit. He begins by stating that “One of the most salient features of our culture is that there is so much bullshit. Everyone knows this. Each of us contributes his share” (Frankfurt, 2005: 1). His book spent weeks on the New York Times’ bestsellers list in 2005 and has recently been cited in the post-truth age to better understand Donald Trump (e.g. Jeffries, 2017; Heer, 2018; Yglesias, 2018). 
Other philosophers have since expanded on his work, most notably G. A. Cohen in his essay “Deeper into Bullshit” (Cohen 2002), but there has been limited large scale empirical research into this issue. We fill this important gap in the literature by providing new cross-national evidence on who is more likely to bullshit and how these individuals view their abilities and social status. This is an important first step in better understanding the seemingly ubiquitous phenomenon of bullshit. 
We make use of an influential cross-national education survey administered every three years by the Organisation for Economic Cooperation and Development (OECD), namely the Programme for International Student Assessment (PISA). This data is commonly used by the OECD and education researchers to benchmark education systems or the performance of specific subgroups of pupils (e.g. Anderson et al., 2007; Jerrim and Choi, 2014; Parker et al., 2018), but has never been used to compare participants across countries in terms of their proclivity to bullshit. This paper fills this important gap in the literature. Previous academic work on bullshit has been limited and mostly theoretical. Black (1983) edited a collection of essays on “humbug”, the predecessor of bullshit, which he defines as “deceptive misrepresentation, short of lying, especially by pretentious word or deed, of somebody's own thoughts, feelings or attitudes” (Black, 1983: 23). Frankfurt (2005) is the first theoretical treatment of the concept of “bullshit” and he situates it in terms of previous philosophical traditions. A crucial aspect of bullshitting in Frankfurt’s work is the fact that bullshitters have no concern for the truth, which is different than a purposeful lie (Frankfurt, 2005: 54). Cohen responds to Frankfurt’s essay and focuses on a slightly different definition of bullshit where “the character of the process that produces bullshit is immaterial” (Cohen, 2002: 2). 
Petrocelli (2018) is one of the few studies to explore bullshitting empirically. He looks at the “antecedents of bullshit”, namely: topic knowledge, the obligation to provide an opinion hypothesis (i.e. individuals are more likely to bullshit when they feel social pressure to provide a response) and the “ease of passing bullshit hypothesis” (i.e. people are more willing to bullshit when they believe they will get away with it). He finds that participants are more likely to bullshit when there is pressure to provide an opinion, irrespective of their actual level of knowledge. Petrocelli also concludes that individuals are more likely to bullshit when they believe they can get away with it, and less likely to bullshit when they know they will be held accountable for the responses they provide (Petrocelli, 2018). His work uses smaller sample sizes than our work (N ≈ 500) and does not answer the question of who bullshitters are and how they view their abilities or social standing. 
Pennycook et al. (2015) is the only other empirical study focused on bullshit. They present experiment participants with “pseudo-profound bullshit” - vacuous statements constructed out of buzzwords - to ascertain when they can differentiate bullshit from meaningful statements and create a Bullshit Receptivity (BSR) scale. Their results point to the idea that some people may be more receptive towards pseudo-profound bullshit, especially if they have a more intuitive cognitive style or believe in the supernatural (Pennycook et al., 2015). Their study focuses on ability to detect bullshit and the mechanisms behind why some people cannot detect bullshit, rather than proclivity to bullshit, which is the focus of this paper. 
In psychology, there has been a related literature on overconfidence and overclaiming. Moore and Healy (2008) provide a thorough overview of existing studies on overconfidence and distinguish between “overestimation”, “overplacement”, and “overprecision” as three distinct types of overconfidence. Overestimation occurs when individuals rate their ability as higher than it is actually observed to be, overplacement occurs when individuals rate themselves relatively higher than their actual position in a distribution, and overprecision occurs when individuals assign narrow confidence intervals to an incorrect answer, indicating overconfidence in their ability to answer questions correctly (Moore and Healy, 2008). The type of questions we use to construct our bullshit scale are closely related to overestimation and overprecision since the individuals need to not only identify whether or not they are familiar with a mathematical concept, but also assess their degree of familiarity. 
Similar to how we define bullshit, overclaiming occurs when individuals assert that they have knowledge of a concept that does not exist. In one of the first studies on overclaiming, Phillips and Clancy (1972) create an index of overclaiming based on how often individuals report consuming a series of new books, television programmes, and movies, all of which were not real products. They use this index to explore the role of social desirability in survey responses. Stanovich and Cunningham (1992) also construct a scale of overclaiming using foils, fake concepts mixed into a list of real concepts, and signal-detection logic for authors and magazines to examine author familiarity. In both of these studies, however, the focus is not on the actual overclaiming index. Randall and Fernandes (1991) also construct an overclaiming index, but use it as a control variable in their analysis of self-reported ethical conduct. 
Paulhus, Harms, Bruce, and Lysy (2003) focus more directly on overclaiming. They construct an overclaiming index using a set of items, of which one-fifth are non-existent, and employ a signal-detection formula to measure overclaiming and actual knowledge. They find that overclaiming is an operationalisation of self-enhancement and that narcissists are more likely to overclaim than non-narcissists (Paulhus et al., 2003). Atir, Rosenzweig, and Dunning (2015) find that people who perceive their expertise in various domains favourably are more likely to overclaim. Pennycook and Rand (2018) find that overclaimers perceive fake news to be more accurate. Similar to Atir et al. (2015), we find that young people who score higher on our bullshit index also have high levels of confidence in their mathematics self-efficacy and problem-solving skills. 
We contribute to the existing literature on the related issues of bullshitting, overconfidence and overclaiming in three important ways. First, we use a large sample of 40,550 young people from nine Anglophone countries to examine bullshit, which enables us to dig deeper into the differences between subgroups (e.g. boys versus girls, advantaged versus disadvantaged young people). Second, we provide the first internationally comparable evidence on bullshitting. We use confirmatory factor analysis to construct our scale and test for three hierarchical levels of measurement invariance (configural, metric and scalar). This allows us to compare average scores on our bullshit scale across countries in a robust and meaningful way. Finally, we also examine the relationship between bullshitting and various other psychological traits, including overconfidence, self-perceptions of popularity amongst peers and their reported levels of perseverance. Unlike many previous studies, we are able to investigate differences between bullshitters and non-bullshitters conditional upon a range of potential confounding characteristics (including a high-quality measure of educational achievement), providing stronger evidence that bullshitting really is independently related to these important psychological traits. 
Our findings support the view that young men are, on average, bigger bullshitters than young women, and that socio-economically advantaged teenagers are more likely to be bullshitters than their disadvantaged peers. There is also important cross-national variation, with young people in North America more likely to make exaggerated claims about their knowledge and abilities than those from Europe. Finally, we illustrate how bullshitters display overconfidence in their skills, and are more likely to report that they work hard when challenged and are popular at school than other young people. 
The paper now proceeds as follows. Section 2 provides an overview of the Programme for International Student Assessment (PISA) 2012 data and our empirical methodology. This is accompanied by Appendix A, where we discuss how we test for measurement invariance of the latent bullshit scale across groups. Results are then presented in section 3, with discussion and conclusions following in section 4.
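As an aside on the mechanics: the "signal-detection logic" and overclaiming indices described in the excerpt above can be made concrete in a few lines. The following Python sketch is illustrative only; it is not the authors' code, and every item name and threshold in it is an assumption made up for the example. Respondents who claim familiarity with non-existent "foil" items earn a high false-alarm rate, and the standard d'/bias decomposition separates genuine knowledge from the tendency to claim expertise regardless of truth.

from scipy.stats import norm

# Hypothetical item pools: all names invented for this sketch.
REAL_ITEMS = {"polynomial function", "vectors", "probability"}
FOIL_ITEMS = {"proper number", "subjunctive scaling", "declarative fraction"}

def overclaiming_scores(claimed):
    """Signal-detection decomposition of claimed familiarity.

    hit rate = share of real items claimed (knowledge);
    false-alarm rate = share of foils claimed (overclaiming);
    d' = z(hit) - z(fa) indexes genuine discrimination;
    c = -(z(hit) + z(fa)) / 2 indexes response bias
    (strongly negative c = claims everything, an overclaimer).
    Rates are clipped to (0.01, 0.99) so the z-transform stays finite.
    """
    def clipped_rate(items):
        p = len(claimed & items) / len(items)
        return min(max(p, 0.01), 0.99)
    z_hit = norm.ppf(clipped_rate(REAL_ITEMS))
    z_fa = norm.ppf(clipped_rate(FOIL_ITEMS))
    return {"d_prime": z_hit - z_fa, "bias_c": -(z_hit + z_fa) / 2}

# A respondent claiming familiarity with everything, foils included,
# scores d' = 0 with strongly negative bias: all claim, no knowledge.
print(overclaiming_scores(REAL_ITEMS | FOIL_ITEMS))

On this logic, claiming the foils is what distinguishes a bullshitter from a merely knowledgeable respondent, which is why foil items do the analytical work in the overclaiming studies cited above.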

Machines as Actors

'Machine behaviour' by Iyad Rahwan, Manuel Cebrian, Nick Obradovich, Josh Bongard, Jean-François Bonnefon, Cynthia Breazeal, Jacob W. Crandall, Nicholas A. Christakis, Iain D. Couzin, Matthew O. Jackson, Nicholas R. Jennings, Ece Kamar, Isabel M. Kloumann, Hugo Larochelle, David Lazer, Richard McElreath, Alan Mislove, David C. Parkes, Alex ‘Sandy’ Pentland, Margaret E. Roberts, Azim Shariff, Joshua B. Tenenbaum and Michael Wellman in (2019) 568 Nature 477–486 comments
Machines powered by artificial intelligence increasingly mediate our social, cultural, economic and political interactions. Understanding the behaviour of artificial intelligence systems is essential to our ability to control their actions, reap their benefits and minimize their harms. Here we argue that this necessitates a broad scientific research agenda to study machine behaviour that incorporates and expands upon the discipline of computer science and includes insights from across the sciences. We first outline a set of questions that are fundamental to this emerging field and then explore the technical, legal and institutional constraints on the study of machine behaviour. 
The authors argue
 In his landmark 1969 book Sciences of the Artificial, Nobel Laureate Herbert Simon wrote: “Natural science is knowledge about natural objects and phenomena. We ask whether there cannot also be ‘artificial’ science—knowledge about artificial objects and phenomena.” In line with Simon’s vision, we describe the emergence of an interdisciplinary field of scientific study. This field is concerned with the scientific study of intelligent machines, not as engineering artefacts, but as a class of actors with particular behavioural patterns and ecology. This field overlaps with, but is distinct from, computer science and robotics. It treats machine behaviour empirically. This is akin to how ethology and behavioural ecology study animal behaviour by integrating physiology and biochemistry—intrinsic properties—with the study of ecology and evolution—properties shaped by the environment. Animal and human behaviours cannot be fully understood without the study of the contexts in which behaviours occur. Machine behaviour similarly cannot be fully understood without the integrated study of algorithms and the social environments in which algorithms operate. 
At present, the scientists who study the behaviours of these virtual and embodied artificial intelligence (AI) agents are predominantly the same scientists who have created the agents themselves (throughout we use the term ‘AI agents’ liberally to refer to both complex and simple algorithms used to make decisions). As these scientists create agents to solve particular tasks, they often focus on ensuring the agents fulfil their intended function (although these respective fields are much broader than the specific examples listed here). For example, AI agents should meet a benchmark of accuracy in document classification, facial recognition or visual object detection. Autonomous cars must navigate successfully in a variety of weather conditions; game-playing agents must defeat a variety of human or machine opponents; and data-mining agents must learn which individuals to target in advertising campaigns on social media. 
These AI agents have the potential to augment human welfare and well-being in many ways. Indeed, that is typically the vision of their creators. But a broader consideration of the behaviour of AI agents is now critical. AI agents will increasingly integrate into our society and are already involved in a variety of activities, such as credit scoring, algorithmic trading, local policing, parole decisions, driving, online dating and drone warfare. Commentators and scholars from diverse fields—including, but not limited to, cognitive systems engineering, human computer interaction, human factors, science, technology and society, and safety engineering—are raising the alarm about the broad, unintended consequences of AI agents that can exhibit behaviours and produce downstream societal effects—both positive and negative—that are unanticipated by their creators.
In addition to this lack of predictability surrounding the consequences of AI, there is a fear of the potential loss of human oversight over intelligent machines and of the potential harms that are associated with the increasing use of machines for tasks that were once performed directly by humans. At the same time, researchers describe the benefits that AI agents can offer society by supporting and augmenting human decision-making. Although discussions of these issues have led to many important insights in many separate fields of academic inquiry, with some highlighting safety challenges of autonomous systems and others studying the implications in fairness, accountability and transparency (for example, the ACM conference on fairness, accountability and transparency (https://fatconference.org/)), many questions remain.
This Review frames and surveys the emerging interdisciplinary field of machine behaviour: the scientific study of behaviour exhibited by intelligent machines. Here we outline the key research themes, questions and landmark research studies that exemplify this field. We start by providing background on the study of machine behaviour and the necessarily interdisciplinary nature of this science. We then provide a framework for the conceptualization of studies of machine behaviour. We close with a call for the scientific study of machine and human–machine ecologies and discuss some of the technical, legal and institutional barriers that are faced by researchers in this field.
'Machines as the New Oompa-Loompas: Trade Secrecy, the Cloud, Machine Learning, and Automation' by Jeanne C. Fromer in (2019) 94 New York University Law Review comments
In previous work, I wrote about how trade secrecy drives the plot of Roald Dahl’s novel Charlie and the Chocolate Factory, explaining how the Oompa-Loompas are the ideal solution to Willy Wonka’s competitive problems. Since publishing that piece, I have been struck by the proliferating Oompa-Loompas in contemporary life: computing machines filled with software and fed on data. These computers, software, and data might not look like Oompa-Loompas, but they function as Wonka’s tribe does: holding their secrets tightly and internally for the businesses for which these machines are deployed. 
Computing machines were not always such effective secret-keeping Oompa-Loompas. As this Article describes, at least three recent shifts in the computing industry — cloud computing, the increasing primacy of data and machine learning, and automation — have turned these machines into the new Oompa-Loompas. While new technologies enabled this shift, trade secret law has played an important role here as well. Like other intellectual property rights, trade secret law has a body of built-in limitations to ensure that the incentives offered by the law’s protection do not become so great that they harm follow-on innovation — new innovation that builds on existing innovation — and competition. 
This Article argues that, in light of the technological shifts in computing, the incentives that trade secret law currently provide to develop these contemporary Oompa-Loompas are excessive in relation to their worrisome effects on follow-on innovation and competition by others. These technological shifts allow businesses to circumvent trade secret law’s central limitations, thereby overfortifying trade secrecy protection. The Article then addresses how trade secret law might be changed — by removing or diminishing its protection — to restore balance for the good of both competition and innovation.

22 April 2019

Animal Rights

'Animal abuse, biotechnology and species justice' by David Rodríguez Goyes and Ragnhild Sollund in (2018) 22(3) Theoretical Criminology 363-383 comments
Generally, in the modern, western world, conceptualizations of the natural environment are associated with what nature can offer us—an anthropocentric perspective whereby humans treat nature and all its biotic components as ‘natural resources’. When nature and the beings within it are regarded purely in utilitarian terms, humans lose sight of the fact that ecosystems and nonhuman animals have intrinsic value. Most biotechnological use of nonhuman animals is informed by an instrumental view of nature. In this article, we endeavour to broaden the field of animal abuse studies by including in it the exploration of biotechnological abuse of animals. We analyse the issue by discussing it in relation to differing philosophical starting points and, in particular, the rights and justice theory developed within green criminology. 
 The authors argue
There seem to be few, if any, moral or ethical limitations to what humans do to nonhuman animals (e.g. Beirne, 1999, 2009; Nurse, 2015; Sollund, 2008, 2013a, 2013b). For example, nonhuman animals are killed for ‘fun’ and as ‘sport’ in hunting (e.g. Lawson, 2017; Sollund, 2017a), and they are exploited in various forms of entertainment, such as in races and fights (e.g. Lawson, 2017; Young, 2017), circuses and zoos (e.g. Berger, 2009: 36). On a large scale, humans breed and kill nonhuman animals for food production in harmful and cruel ways (e.g. Adams, 1996; Cudworth, 2017; Wyatt, 2014) and subject them to painful experiments (e.g. Menache, 2017; Regan, 2012; Sollund, 2008). In this article, we address the ways in which humans, via biotechnology, use and abuse freeborn nonhuman animals by subjecting them to processes aimed at developing medicines and other products for human benefit. Such processes reflect a utilitarian view of nonhuman animals (Singer, 1995) from which a ‘welfarist’ position is derived (e.g. Svärd, 2008)—and in contrast to a perspective that considers nonhuman animals as beings possessing their own right(s) (Francione, 2009, 2014; Regan, 1986; cf. Pellow, 2013). These perspectives or stances, which may be positioned along a continuum regarding the extent to which nonhuman animals should have rights, may be viewed in relation to (other) philosophical directions that will be treated in more detail in this article. Our goals, then, are as follows: (1) to position biotechnological animal abuse in the field of animal abuse studies; (2) to show the complexity of developing studies of biotechnological use of nonhuman animals; and (3) to explore the moral and ethical implications of biotechnological use of nonhuman animals. 
We begin by placing the study within the field of green criminology, including animal abuse studies and so-called ‘wildlife trafficking’. This allows us to connect animal exploitation for traditional (e.g. Ngoc and Wyatt, 2013; Van Uhm, 2016) and western medicine (Regan, 2007, 2012; Sollund, 2008) to the more ‘modern’ animal exploitation within biotechnological research for medical purposes. We then provide an overview of the concept of biotechnology and an introduction to the field of bioprospecting, more generally. These concepts are illuminated through an examination of the exploitation of the ‘poison dart frog’, in which we draw on our own empirical research from Colombia. We then turn to the philosophical debate surrounding the use of animals in biotechnology. We argue that whereas traditional medicine is condemned for the harms it causes to animals, biotechnology escapes those deserved criticisms due to the legitimacy conferred by the label of modern western science. We conclude that most current uses of animals by biotechnology are a prolongation of the harmful logic behind the abuse of animals for development of traditional medicine.

21 April 2019

Data Commons and Algorithmics

'Logged out: Ownership, exclusion and public value in the digital data and information commons' by Barbara Prainsack in (2019) Big Data and Society comments
In recent years, critical scholarship has drawn attention to increasing power differentials between corporations that use data and people whose data is used. A growing number of scholars see digital data and information commons as a way to counteract this asymmetry. In this paper I raise two concerns with this argument: First, because digital data and information can be in more than one place at once, governance models for physical common-pool resources cannot be easily transposed to digital commons. Second, not all data and information commons are suitable to address power differentials. In order to create digital commons that effectively address power asymmetries we must pay more systematic attention to the issue of exclusion from digital data and information commons. Why and how digital data and information commons exclude, and what the consequences of such exclusion are, decide whether commons can change power asymmetries or whether they are more likely to perpetuate them. 
 In referring to 'The iLeviathan: Trading freedom for utility' Prainsack argues
As a concept, ‘Big Data’ started to become an object of attention and concern around the start of the new millennium. Enabled by new technological capabilities to create, store and analyse digital data at greater volume, velocity, variety and value, the phenomenon of Big Data fuelled the imagination of many. It was hoped to help tackle some of the most pressing societal challenges: Fight crime, prevent disease and offer novel insights into the ways in which we think and act in the world. With time, some of the less rosy sides of practices reliant on big datasets and Big Data epistemologies became apparent (e.g., Mittelstadt and Floridi, 2016): Data-driven crime prevention, for example, requires exposing large numbers of people to predictive policing (e.g., Perry, 2013), and ‘personalised’ disease prevention means that healthy people have to submit to extensive surveillance to create the datasets that allow personalisation in the first place (Prainsack, 2017a). In addition, it became apparent that those entities that already had large datasets of many people became so powerful that they could eliminate their own competition, and at the same time de facto set the rules for data use (e.g., Andrejevic, 2014; Pasquale, 2017; see also van Dijck, 2014; Zuboff, 2015). GAFA – an acronym combining the names of some of the largest consumer tech companies, Google, Apple, Facebook and Amazon – have become what I call the iLeviathan, the ruler of a new commonwealth where people trade freedom for utility. Unlike with Hobbes’ Leviathan, the freedom people trade is no longer their ‘natural freedom’ to do to others as they please, but it is the freedom to control what aspects of their bodies and lives are captured by digital data, how to use this data, and for what purposes and benefits. The utility that people obtain from the new Leviathan is no longer the protection of their life and their property, but the possibility to purchase or exchange services and goods faster and more conveniently, or to communicate with others across the globe in real time. Increasingly, the iLeviathan also demands that people trade privacy and freedom from surveillance for access to services provided by public authorities (Prainsack, 2019). The latter happens, for instance, when people are required to use services by Google, Facebook, or their likes in order to book a doctor's appointment or communicate with a school (see also Foer, 2017). For many of us, it also happens when access to a public service requires email. 
This situation has garnered reactions by activists, scholars and policy makers. Reactions can be grouped in two main approaches, depending on where the focus of their concern lies: On the one side are those who want individual citizens to have more effective control over their own data. I call this the Individual Control approach (Table 1). It comprises (a) those who deem property rights to be the most, or even the only, effective way to protect personal data, as well as (b) some of those who see personal information as an inalienable possession of people. The latter group reject the idea that personal data should be protected by property rights and prefer it to be protected via human rights such as privacy, whereby privacy is understood to be an individual, rather than a collective right (see Table 1). Solutions put forward by scholars in the Individual Control group include the granting of individual property rights to personal data (see below), or the implementation of ever more granular ways of informing and consenting data subjects (e.g., Bunnik et al., 2013; Kaye et al., 2015). The spirit of the new European Union General Data Protection Regulation (GDPR) tacks mostly to an Individual Control approach in the sense that it gives data subjects more rights to control their personal data – to the extent that some might see it as granting quasi-property rights. 
The second approach – which I call the Collective Control approach – comprises authors who emphasise that increasing individual-level control over personal data is a necessary but insufficient way to address the overarching power of multinational companies and other data capitalists. Scholars within the Collective Control group are diverse in their assessment of the benefits and dangers of increasing individual-level control. What they all have in common, however, is that they foreground the use of data for the public good. Many of them see the creation of digital data and information commons as the best way to do this, often because of the emphasis that commons place on collective ownership and control. Some authors also see the creation of commons explicitly as a way to resist ‘the prevailing capitalist economy’ (Birkinbine, 2018: 291; Hess, 2008; De Peuter and Dyer-Whiteford, 2010; for overviews see Hess, 2008; Purtova, 2017a). 
In the following section, I will scrutinise the claim made by some authors within the Collective Control group that digital data and information commons can help to address power asymmetries between data givers and data takers. Despite the frequent use of terms such as ‘digital commons’ and ‘data commons’ in the literature, I argue that the question of what kind of commons frameworks are applicable to digital data and information, if any, has not been answered with sufficient clarity. In the subsequent part of the paper I will discuss another aspect that has not received enough systematic attention in this context, namely the topic of exclusion. I argue that collective measures to address power asymmetries in our societies need to pay explicit and systematic attention to categories, practices and effects of exclusion. I end with an overview of what governance frameworks applicable to digital data and information commons need to consider if they seek to effectively tackle inequalities. If they fail to do this, they risk being most useful to those who are already privileged and powerful.
'How data protection fits with the algorithmic society via two intellectual property rights – a comparative analysis' by  Cristiana Sappa in (2019) 14(5) Journal of Intellectual Property Law and Practice 407–418 comments
Big Data, IoT and AI are the three interrelated elements of the algorithmic society, responsible for an unprecedented flourishing of innovation. Companies working within such an algorithmic society need to protect the information created and stored for entrepreneurial purposes. Thus, their concerns relate to data protection, in particular with regard to trade secrets and the sui generis protection of databases. 
This paper tries to answer two questions from an EU and US law perspective. First, it asks whether data generated and managed within the frameworks of Big Data, IoT and AI meet the essential requirements to enjoy trade secret protection and the database right, if any. The answer seems to be in the affirmative in most cases. Second, it studies whether trade secrets and the sui generis right are appropriate in a sharing-based paradigm, such as that of Big Data, IoT and AI. The focus on this upstream protection helps to understand the bottlenecks created at the downstream level, which challenge innovation and transparency, as well as consumer protection. In other words, when both exclusive rights (the sui generis protection for databases) and quasi-intellectual property rights (trade secrets) are present, innovation and the circulation of information are not necessarily promoted, and the presence of this double protection may be beneficial to big businesses only. On the other hand, the presence of mere trade secrets does not seem to utterly exclude an encouragement to innovation and the circulation of information and it is therefore more suitable to SMEs and to safeguard the public interest.

According to a non-exhaustive notion, Big Data is the huge amount of digital data generated from transactions and communication processes, collected in datasets, in particular via apps, sensors and other (smart) devices, which regularly lead to predictive analyses via complex algorithms and processors. The Internet of Things (IoT) is a network of interconnected physical objects, each embedded with sensors that collect and upload data to the Internet for analysis or monitoring and control, such as smart-city traffic and waste-management systems. IoT generates and is built upon Big Data. Artificial Intelligence (AI) is created by the interaction between intelligent agents, ie devices perceiving inputs from their environment and being able to reproduce methods and achieve aims. In other words, intelligent agents are able to reproduce cognitive human functions, such as learning and problem solving. AI generates Big Data and its functionalities extend beyond IoT. Big Data, IoT and AI are the three interrelated elements of the algorithmic society, responsible for an unprecedented flourishing of innovation, which no longer seems to require the same outside incentives of the conventional world. 
Companies working with Big Data, IoT or AI need to protect the information stored against the unfair practices of (former) employees, collaborators and other market operators. Data protection via intellectual property rights (IPRs), personal data rules and also contractual and technical measures ensuring confidentiality – as well as a competitive advantage on the market – have become a major concern of companies (and consumers). 
This paper does not study all of the above-mentioned rights. Instead it focuses on two IPRs only. More precisely it discusses how trade secrets and the sui generis right for databases fit with the algorithmic society from a EU and US law perspective. These forms of protection were introduced prior to the advent of Big Data, IoT and AI. In particular, trade secrets protection was first introduced (in different ways) at the national level within the geographical areas covered by this work; only at a later stage did both the US and EU adopt measures to ensure a more harmonized legal regime. On the other hand, the sui generis right for databases was introduced in the EU only, while the US has not implemented it and has consistently shown a clear reluctance for such form of protection. 
This analysis tries to answer a two-fold question. The first one is whether data generated and managed within the frameworks of Big Data, IoT and AI meet the essential requirements to enjoy the above-mentioned protections of trade secrets and the sui generis database right, if any. The answer seems to be in the affirmative in most cases. The second question is whether trade secrets and the sui generis right are appropriate in a sharing-based paradigm, such as that provided by the computational innovation generated by the above-mentioned phenomena of Big Data, IoT and AI. In particular, the focus on this existing upstream protection helps to understand the bottlenecks that may be created at the downstream level, which challenge innovation and transparency as well as consumer protection. In other words, when both exclusive rights (such as the sui generis protection of databases) and quasi-IPRs (such as trade secrets) are present, innovation and the circulation of information are not necessarily promoted, and the existence of this double layer of protection seems to be beneficial to big businesses only. On the other hand, the mere presence of trade secrets as currently designed needs further study, but in principle it does not seem to exclude an encouragement to innovation and the circulation of information, and would therefore have more favorable results for SMEs and community interests as a whole. 
In order to answer these suggested questions, Section I will provide information on the current legal framework of trade secrets in EU and US law, and the sui generis right for databases in EU law. Section II will try to answer whether trade secrets and the sui generis right apply to the phenomenon of the algorithmic society. Finally Section III will discuss whether exclusive rights and quasi-IPRs are suited to the newly emerging algorithmic society.
'Automated Decision-Making in the EU Member States Laws: The Right to Explanation and Other 'Suitable Safeguards'' by Gianclaudio Malgieri comments
The aim of this paper is to analyse the very recently approved national Member States' laws that have implemented the GDPR in the field of automated decision-making (prohibition, exceptions, safeguards). All national legislations have been analysed; in particular, nine Member States' laws address automated decision-making by providing specific exemptions and relevant safeguards, as requested by Article 22(2)(b) of the GDPR (Belgium, the Netherlands, France, Germany, Hungary, Slovenia, Austria, the United Kingdom and Ireland). 
The approaches are very diverse: the scope of the provisions can be narrow (just automated decisions producing legal or similarly significant effects) or wide (any decision with a detrimental impact) and even the specific safeguards proposed are very diverse. 
Taking into account this overview, the article will also address the following questions: are Member States free to broaden the scope of automated decision-making regulation? Are ‘positive decisions’ allowed under Article 22, GDPR, as some Member States seem to affirm? Which safeguards can better guarantee the rights and freedoms of the data subject? 
In particular, while most Member States mention just the three safeguards listed at Article 22(3) (i.e. the subject's right to express one's point of view; the right to obtain human intervention; the right to contest the decision), three approaches seem very innovative: a) some States guarantee a right to legibility/explanation about algorithmic decisions (France and Hungary); b) other States (Ireland and the United Kingdom) regulate human intervention on algorithmic decisions through an effective accountability mechanism (e.g. notification, explanation of why such contestation has not been accepted, etc.); c) another State (Slovenia) requires an innovative form of human rights impact assessment on automated decision-making.

Biomarkets

'Tradable Body Parts? How Bone and Recycled Prosthetic Devices Acquire a Price without Forming a ‘Market’' by Klaus Hoeyer in (2009) 4(1) Biosocieties 239-256 comments
Exchange of material originating in human bodies is essential to many health technologies but potentially conflicts with a prominent moral ideal according to which human bodies and their parts are beyond trade. In this article, I suggest that the inclination to keep bodies apart from ‘commercial exchange’ has significant implications for the way their parts come to be exchanged. The analysis revolves around two versions of the hip: one prosthetic version made of metal and one version made of bone, the femoral head, which is excised in conjunction with hip replacements and later used for transplantation. How are exchange systems for something moving in and out of human beings organized? Who provides what and who receives what? When and where does money change hands? How are the specific amounts determined? By answering these questions, I provide a description of the exchange form that avoids assuming it to be simply a ‘market’ or a ‘gift economy’. I focus on the mechanisms that allow money to be generated despite the moral ideal viewing body parts as beyond trade—or, rather, how the ideal facilitates mechanisms through which money can be generated without being viewed as profit. In particular, I suggest that ‘compensation’ is an important example of a mechanism in need of further scrutiny.

Biopiracy, Traditional Knowledge and PBR

'Biopiracy and the right to self-determination of indigenous peoples' by Dieter Dörr in (2019) 53 Phytomedicine 308-312 comments
For over thirty years, I have worked on the unclear legal situation in which indigenous peoples find themselves today, initially mainly in the USA and later also in Canada, Australia and New Zealand. The status of indigenous people and native nations is characterized as a mixture of national and international law. Hypothesis/Purpose: To clarify the status of indigenous people it is necessary to analyze and interpret carefully hundreds of old treaties, international declarations and covenants, national statutes and jurisprudence, especially the old leading decisions of the US Supreme Court. Such an analysis and interpretation should prove that indigenous people have the defensive right of self-determination. 
The study outlines the old decisions of the US Supreme Court, with their inherent contradictions, which have highly influenced the status of indigenous people in all other countries until now. It clarifies the important new developments in international law, especially the non-binding Declaration on the Rights of Indigenous Peoples and its effects on the interpretation of international and national law in regard to biopiracy. For this purpose it is necessary to use the methods of judgmental comparative law and historical and teleological interpretation. 
By expressly stating that indigenous peoples have a right to self-determination, the Declaration on the Rights of Indigenous Peoples of 2007 complements the protection stipulated in the Charter and the Covenants of 1966. Although the declaration itself is not legally binding as it is a resolution of the UN General Assembly, it can serve as a blueprint to show the rights that indigenous peoples can derive from international law as well as rights which should ideally be granted to them by the states even though they are not yet binding customary or treaty law. Self-determination means exactly that, it is up to the bearers of the right to decide how they want to utilize this right and then work together with the state in which they live in defining a joint framework.
'The Globalisation of Plant Variety Protection: Are Developing Countries Still Policy Takers?' by Graham Dutfield in Intellectual Property and Development: Understanding the Interfaces (Springer, 2019) 277-293 comments
Until recently, for developing and emerging economies intellectual property policy taking was the norm rather than policy making. What we mean is that the developed countries set the standards for other countries to follow. This may still be the general trend but developing nations are starting to devise their own policy approaches that other countries are imitating. This shift towards policy making is certainly noticeable. But it is not yet hugely significant. Conformity to the recommendations (and still in some cases the dictates) of developed countries, their industries, and experts from the Global North remains very common. The question arises of whether developing countries continue to be policy takers or have begun to develop their own counter-norms which are viable. As we will see there is evidence that some developing countries are indeed “translating” international obligations in some imaginative ways that may (or may not) promote their interests better. It may be that divergences between Europe and the United States in how innovations in plant science and agricultural biotechnology are protected inadvertently encourages the adoption of more flexible perspectives than would otherwise have been envisaged. However, there are massive policy challenges ahead especially due to the lack of empirical evidence on the effects of different intellectual property rules concerning plants on rural development and food security that could be used to shape law and policy. This goes far in explaining why only a handful of countries has sought alternative approaches. Further research is desperately needed.
'Traditional Knowledge and the Public Domain in Intellectual Property' by Ruth L. Okediji at 249-275 in the same volume comments
The protection of traditional knowledge is among the most vexing and morally compelling issues in international intellectual property law today. As a matter of conventional IP law, many applications of traditional knowledge—its dizzying array of expressions, forms, and utilities—easily overlay the globally ubiquitous trade secret, patent, copyright, and trademark categories. But as a matter of political and economic organization, the epistemological core of traditional knowledge is based on the distinctiveness and cultural autonomy of indigenous groups and local communities. Amid the notable arguments against recognizing proprietary rights for traditional knowledge holders, the most provocative is the claim that such knowledge is already in the public domain. The claim that traditional knowledge consists principally of public domain material has significant implications for the welfare and development capacity of indigenous groups. It undermines treaties that already acknowledge or require protection for the rights of indigenous groups and, by extension, traditional knowledge holders. Moreover, it violates central obligations of the international IP framework such as non-discrimination and protection for non-economic interests associated with cultural goods. There is no meaningful basis for the argument that exclusive property rights for traditional knowledge are unavailing because of its unique characteristics. This article addresses public domain concerns in the context of ongoing efforts to secure an international regime of protection for traditional knowledge.
It draws on her 'Negotiating the public domain in an international framework for the production of genetic resources, traditional knowledge and traditional cultural expressions' in Pedro Roffe et al (Eds), The WIPO intergovernmental committee negotiations: A history (Routledge, 2017).

SA Forfeiture Review

The South Australian Law Reform Institute inquiry into forfeiture, currently underway, is described thus
The forfeiture rule was extended to both murder and manslaughter in Re: Hall [1914] P 1. This principle was approved and the forfeiture rule effectively endorsed by the joint judgment of the High Court of Australia in Helton v Allen (1940) 63 CLR 691, 709 (Dixon, Evatt and McTiernan JJ) (though the status and effect of this decision is still debated). 
The forfeiture rule has apparent absolute operation in South Australia (see Troja v Troja (1994) 33 NSWLR 269 (though note Kirby P’s dissent); Rivers v Rivers [2002] SASC 437; Re: Luxton [2006] SASC 371). The rule has been held to apply to any example of murder and manslaughter. The rule has drastic effect and provides that any person who has unlawfully caused the death of another is precluded from taking any benefit that arises as a result of the victim’s death. The rule has been held to preclude a killer from acquiring a benefit via a will, distribution on intestacy, the victim’s share in jointly owned property, other benefits such as insurance policies or a statutory pension. The killer is also barred from making a claim under family provision laws. 
The underlying rationale of the forfeiture rule is sound and accords with public policy, as a killer should generally be unable to benefit from his or her crime. However, the scope and operation of the rule are contentious and uncertain (see Re: Edwards [2014] VSC 392; Edwards v State Trustees Ltd [2016] VSCA 28). In particular, the application of the forfeiture rule to unlawful killings in various situations where a lesser degree of moral culpability is recognised has shown that strict application of the rule may lead to unfair outcomes. The rule may have unfair implications in such situations as the survivor of a suicide pact, assisted suicide, infanticide, manslaughter by gross negligence (as opposed to an act of violence), euthanasia or a ‘mercy killing’, where the offender has a relevant major cognitive impairment (also termed ‘diminished responsibility’) or especially in a context of domestic violence where a victim of domestic violence kills an abusive spouse and is convicted of manslaughter on the basis of excessive self-defence or provocation. The strict application of the rule in such circumstances has been described as ‘unnecessarily harsh, inconsistent and... irrational’ and ‘injudicious and incongruous’ with its public policy rationale. 
The problematic operation of the rule in an assisted suicide context has arisen recently in the UK; see the 2019 English case of In the Matter of Alexander Shedden Ninian (Deceased) and in the Matter of the Forfeiture Act 1982. The technical application of the forfeiture rule in various property, succession and inheritance situations is also unclear and problematic. SALRI is keen to look at these aspects. In particular, in various property, succession and inheritance situations the rule may result in the ‘sins of the unlawful killer being visited upon their blameless children’. These are discussed by the English Law Commission in its 2005 Report. 
The forfeiture rule presently does not apply to an individual found not guilty of homicide by reason of mental impairment (previously termed insanity). The Victorian Law Reform Commission opposed any such extension. Noting the NSW statutory model which allows a court to apply the rule where a person is found not guilty of murder on the basis of insanity, SALRI will examine whether the forfeiture rule should be capable of applying to an individual found either unfit to plead or especially not guilty by reason of mental impairment/insanity. The recent judgment of Lindsay J in Re: Settree Estates [2018] NSWSC 1413 provides a very helpful summary. 
Potential Models for Reform 
The forfeiture rule has been modified by statute in the UK, NSW, ACT, and New Zealand. The Victorian Law Reform Commission (VLRC) supported a hybrid model combining aspects of the UK/NSW and New Zealand models. The United Kingdom, ACT and New South Wales have laws that modify the rule and provide discretion to a court to modify the effect of the forfeiture rule. In both models, unlawful killing is broadly defined. The UK and NSW laws do not codify the rule, but rather allow a court to exempt an individual in an appropriate case of unlawful killing (though not amounting to murder) from its application. The UK and NSW models contain some limited guidance with regard to the circumstances in which a court should exercise the discretion, but it is not comprehensive. The New Zealand law fully codifies the forfeiture rule, displacing all related rules of common law, equity, and public policy. Specific forms of unlawful homicide are wholly excluded from the effect of the rule, such as infanticide, those arising out of negligence, or pursuant to a suicide pact. There is no judicial discretion to modify the rule in other categories. The New Zealand model states the assets to which an unlawful killer is disentitled. 
The Tasmanian Law Reform Institute, in its report, recommended new laws based on the NSW model by providing a discretion to a court to modify the effect of the rule (but not for murder). It also supported including a greater level of guidance for a court to have regard to in deciding whether or not to exercise its discretion to avoid applying the forfeiture rule. The Tasmanian Law Reform Institute also favoured greater clarity with regard to the burden of proof and the disposal of disinherited assets. 
The Victorian Law Reform Commission (VLRC) in its 2014 report supported a ‘hybrid’ legislative model. The VLRC proposal would define the scope and effect of the rule, with specific forms of homicide such as infanticide or dangerous driving totally excluded from the rule. However, the VLRC proposal would also provide a discretion to a court to more broadly modify the rule in an appropriate case, whilst setting out the factors for a court to have regard to in deciding whether or not to exercise its discretion to avoid applying the rule. 
In its review, SALRI will draw on the academic, judicial and law reform work in this area to date, notably the 2004 Report of the Tasmanian Law Reform Institute, the 2005 Report of the English Law Commission, and especially the recent Report by the VLRC. SALRI is interested to hear any comments on the ACT/NSW/English or New Zealand models and their operation. 
This reference will allow SALRI to identify the problems with the forfeiture rule (both broad areas and its technical implications); look at other models; gather the views of the community, interested parties and experts; and, on the basis of its research and consultation, suggest ways in which the law in South Australia can best be improved. SALRI is due to provide a Report with recommendations for the Government about any potential law reforms by the end of August 2019.

Sapo Harms

'“Kambô” frog (Phyllomedusa bicolor): use in folk medicine and potential health risks' by Francisco Vaniclei Araújo da Silva, Wuelton Marcelo Monteiro and Paulo Sérgio Bernarde in (2019) 52 Revista da Sociedade Brasileira de Medicina Tropical (Journal of the Brazilian Society of Tropical Medicine) e20180467 comments
Recent reports have revealed the side-effects of Kambô treatment, including death. Leban et al. reported the case of a 44-year-old female in Slovenia who drank six litres of water after applying Kambô, and gradually developed nausea, vomiting, confusion, lethargy, muscle weakness and spasms, fits/convulsions, loss of consciousness, short-term memory loss, and the syndrome of inappropriate antidiuretic hormone (SIADH) secretion. Pogorzelska and Łapiński treated a 34-year-old male patient in Poland, with a chronic history of alcohol and marijuana use, who had signs of transient hepatitis after using Kambô to maintain sobriety. Kumachev et al. also reported the case of a 32-year-old female patient who was admitted to a hospital in Canada with prolonged nausea, frequent episodes of vomiting, and abdominal discomfort eight hours after Kambô treatment. Li et al. treated a 24-year-old female at a first-aid facility in the United States with symptoms of prolonged vomiting, facial flushing, facial swelling, altered mental status, and restlessness 22 hours after using Kambô. Roy et al. also reported the case of a 33-year-old woman in the United States who presented with potential psychosis (with characteristics of paranoia, anxiety, bizarre delusions, labile mood, and panic attacks) associated with Kambô use. The sudden death of a 42-year-old overweight man with signs of coronary disease associated with the use of Kambô was reported in Italy. These authors suggested that the hypotensive effects of Kambô may have resulted in reduced myocardial perfusion and tachycardia, which led to sudden cardiac arrhythmia. In Pindamonhangaba, in the state of São Paulo (Brazil), the death of a 52-year-old man was reported shortly after the application of Kambô by a practitioner who obtained the Kambô skin secretions from the state of Acre. All of these complications were reported in regions far from where Kambô is traditionally used (the Western Amazon), following application by practitioners who may not have the same experience as those who traditionally perform the ritual, which poses an additional health risk. 
In addition to its traditional use, Kambô has spread via urban expansion into alternative therapy clinics and Brazilian Ayahuasca religions (Santo Daime and União do Vegetal) with new practitioners, called holistic and medical therapists. Natives are concerned that new practitioners may misapply Kambô or use the skin secretions of other species of amphibians (the “Sapo-cururu”, Rhinella marina), resulting in health complications or even death. Many people have reported the benefits of this therapy as if it were a “panacea” that is able to cure many diseases (low immunity, headache, gastritis, diabetes, blood pressure problems, cirrhosis, labyrinthitis, epilepsy, impotence, depression, cancer and AIDS). These reported benefits may increase the demand for alternative treatments like Kambô; however, evidence of its efficacy is insufficient and studies of its side effects have not been conducted. The National Sanitary Surveillance Agency has ordered the suspension of all types of advertising for this alternative therapy, and revealed that there is no scientific evidence to guarantee the quality, safety, and efficacy of this treatment or its indication for any type of disease, imbalance, or treatment of any acute and chronic processes. Due to the reports of complications and death, it is necessary to caution the public on the contraindications to the use of Kambô, such as severe cardiovascular conditions and hypotensive syndromes, and to limit water intake after the ritual in order to reduce the risk of developing SIADH. In addition, since Kambô is also traditionally used to induce abortions, pregnant women should not participate in this ritual. Excessive applications (overdose), and treatment of children with a lower body mass, should be avoided as the dose-to-body-mass ratio may be relatively higher during the treatment in these two groups of patients. The secretion of P. bicolor contains several different uncharacterized toxins. Additional studies on the pharmacological potential of amphibians are necessary, and the risk of biopiracy should be monitored. Trafficking of these animals and their secretions, and the possible impact on the P. bicolor population in their natural habitats, should be extensively studied.
'Case report: The syndrome of inappropriate antidiuretic hormone secretion after giant leaf frog (Phyllomedusa bicolor) venom exposure' by Vid Leban, Gordana Kozelj and Miran Brvar in (2016) 120 Toxicon 107-109 comments
In Europe body purification and natural balance restoring rituals, including Kambô, are becoming increasingly popular. In patients with neurological symptoms and a line of body burns, Phyllomedusa bicolor venom exposure should be suspected. Hyponatremia after Phyllomedusa bicolor venom exposure is the result of inappropriate antidiuretic hormone secretion. 
In Europe body purification and natural balance restoring rituals are becoming increasingly popular, but an introduction of Amazonian shamanic rituals in urban Europe can result in unexpected adverse events. 
A 44-year-old woman attended a Kambô or Sapo ritual in Slovenia where dried skin secretion from a giant leaf frog (Phyllomedusa bicolor) was applied to five freshly burned wounds at her shoulder. Afterwards, she drank 6 litres of water and gradually developed nausea and vomiting, confusion, lethargy, muscle weakness, spasms and cramps, seizure, decreased consciousness level and short-term memory loss. The initial laboratory tests showed profound plasma hypoosmolality (251 mOsm/kg) proportional to hyponatremia (116 mmol/L) combined with inappropriately elevated urine osmolality (523 mOsm/kg) and high urine sodium concentration (87 mmol/L) indicating a syndrome of inappropriate antidiuretic hormone secretion. The patient was treated with 0.9% sodium chloride and a restriction of water intake. Plasma osmolality and hyponatremia improved one day after venom exposure, but the symptoms disappeared as late as the third day. 
In patients presenting with neurological symptoms and a line of small body burns, Phyllomedusa bicolor venom exposure should be suspected. Acute symptomatic hyponatremia after Phyllomedusa bicolor venom exposure is the result of inappropriate antidiuretic hormone secretion that can be exacerbated by excessive water intake.
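For readers unfamiliar with the diagnostic reasoning in the abstract above, the reported laboratory values can be checked against the standard SIADH criteria in a few lines. The Python sketch below is illustrative only: the cut-offs are commonly cited textbook thresholds rather than anything stated in the case report, and the code necessarily omits the clinical requirements of euvolaemia and normal thyroid and adrenal function, which cannot be captured numerically.

# Laboratory values reported in the case above.
labs = {
    "plasma_osm": 251,  # mOsm/kg
    "plasma_na": 116,   # mmol/L
    "urine_osm": 523,   # mOsm/kg
    "urine_na": 87,     # mmol/L
}

def consistent_with_siadh(l):
    """True when the numbers fit hypotonic hyponatraemia with
    inappropriately concentrated urine and ongoing natriuresis."""
    return (l["plasma_osm"] < 275     # plasma hypo-osmolality
            and l["plasma_na"] < 135  # hyponatraemia
            and l["urine_osm"] > 100  # urine not maximally dilute
            and l["urine_na"] > 30)   # urinary sodium not conserved

print(consistent_with_siadh(labs))  # True for the values reported

The point of the check is that the urine is inappropriately concentrated for such a dilute plasma: with normal antidiuretic hormone regulation, a plasma osmolality of 251 mOsm/kg should suppress the hormone and produce maximally dilute urine, not 523 mOsm/kg.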
Other works on Kambo were noted here.