27 May 2023

AI Regulation

'Weapons of Mass Disruption: Artificial Intelligence and International Law' by Simon Chesterman in (2021) 10 Cambridge International Law Journal 181–203 comments 

The answers each political community finds to the law reform questions posed by artificial intelligence (AI) may differ, but a near-term threat is that AI systems capable of causing harm will not be confined to one jurisdiction — indeed, it may be impossible to link them to a specific jurisdiction at all. This is not a new problem in cybersecurity, though different national approaches to regulation will pose barriers to effective regulation, exacerbated by the speed, autonomy, and opacity of AI systems. For that reason, some measure of collective action is needed. Lessons may be learned from efforts to regulate the global commons, as well as moves to outlaw certain products (weapons and drugs, for example) and activities (such as slavery and child sex tourism). The argument advanced here is that regulation, in the sense of public control, requires active involvement of states. To coordinate those activities and enforce global ‘red lines’, this paper posits a hypothetical International Artificial Intelligence Agency (IAIA), modelled on the agency created after the Second World War to promote peaceful uses of nuclear energy, while deterring or containing its weaponization and other harmful effects.

24 May 2023

Curiosa

In Taylor, In the matter of an application for leave to issue or file [2023] HCATrans 63 Gageler ACJ states: 

Pursuant to rules 6.07.3 and 13.03.1, I refuse the application for leave to issue or file the proposed application for a constitutional or other writ. I publish my reasons and I direct that those reasons be incorporated into the transcript. ... 

On 4 April 2023, Ms Cindy Taylor filed an application for leave to issue or file an application for a constitutional or other writ under r 6.07.3 of the High Court Rules 2004 (Cth) (“the Rules”), supported by an affidavit affirmed by her on 28 March 2023. Leave is required because on 27 March 2023 pursuant to r 6.07.2 of the Rules Steward J directed the Registrar to refuse to issue or file the document without the leave of a Justice first had and obtained. 

Ms Taylor’s proposed application for a constitutional or other writ names the Commonwealth Attorney-General as the defendant and seeks “an order of Mandamus on the Attorney General of the Commonwealth to: Immediately instruct the Crown to: Succeed the Plaintiff to the title and role of Sovereign Empress of Australia; And other orders as the Court sees it, in support of the above”. It appears that Ms Taylor also seeks damages for “the Crown’s ongoing use of Lawfare” against her and for having been “thrown from [her] natural path of evolvement”. 

The legal claims sought to be agitated by the application are unintelligible and the primary relief sought is beyond the jurisdiction of this Court. The proposed application is frivolous, vexatious, and an abuse of process.

In Taylor, In the matter of an application for leave to issue or file a document [2017] HCATrans 248 Keane J notes that the applicant sought to file an application for an order to show cause against the Attorney-General for the Commonwealth. 

The application is difficult to understand but appears to be directed to vindicate aspects of the applicant’s claim as “Mother of ALL” to achieve “compliance with the Family Undertaking . . . for all Families by 21/7/18”. 

On 21 September 2017, Nettle J, pursuant to r 6.07.2 of the High Court Rules 2004 (Cth), directed the Registrar to refuse to issue or file the application without the leave of a Justice first had and obtained by the applicant. 

On 26 September 2017, the applicant filed an ex parte application seeking leave to have the application issued and filed. An affidavit by the applicant was filed in support of the application. 

To the extent that this material is intelligible, it only serves to confirm that the application to show cause is frivolous and vexatious. A letter from the applicant dated 4 December 2017, received by the Registry this morning, does not alter this conclusion.

In James v District Court at Whanganui [2023] NZCA 181 the Court states  

[1] The appellant, who goes only by the name James, brought judicial review proceedings in the High Court in February 2022 in which he sought an injunction against the Whanganui District Court. The terms of the injunction were directed towards halting or challenging proceedings brought against James in the District Court. It is not possible to discern the subject matter of the District Court proceedings with any certainty from the documents James has filed.

[2] Churchman J struck out the judicial review proceeding as an abuse of process. James filed an appeal against Churchman J’s decision. ...

[4] In Commissioner of Inland Revenue v Chesterfields Preschools Ltd, this Court explained that: ... a “frivolous” pleading is one which trifles with the court’s processes, while a vexatious one contains an element of impropriety. ... [One that is] “otherwise an abuse of the process of the court” ... extends beyond the other grounds and captures all other instances of misuse of the court’s processes, such as a [proceeding] that has been brought with an improper motive or is an attempt to obtain a collateral benefit. ...

[6] In his pleading, James sought to distinguish between “the Man James” and “the Legal Fiction Person JAMES JONES” and asserted that the District Court required the former’s written consent to “conduct any business” with the latter and that consent had been withdrawn.

[7] It appeared to Churchman J that the form and wording of James’ statement of claim was consistent with that typically advanced by the “Organised Pseudolegal Commercial Argument Litigants” who adhere to the “Sovereign Citizen movement”. Essentially, these arguments proceed on the premise that an individual has both a natural persona and a separate legal or “corporate” persona and that the natural person cannot be subject to the jurisdiction of the state without their consent. The Court has previously held that this position is untenable. Almost always, it will be viewed as an abuse of process by a litigant. Churchman J concluded that there was no legal basis for James’ claim against the Whanganui District Court and that the proceeding was an abuse of process. ...

[10] The grounds of appeal, although expressed in a convoluted manner and although denying the concepts of “Organised Pseudolegal Commercial Argument” and “sovereign citizen”, nevertheless rest on the argument regarding the “separation” between natural and legal persons and the rejection of Acts of Parliament unless consent has been given. They can be summarised as being that the District Court has no jurisdiction over James without him giving his consent, which he has not done, and that Churchman J erred in rejecting this argument.

[11] The arguments that James relies on are properly characterised as “sovereign citizen” type arguments. They cannot succeed. Apart from the sovereign citizen arguments, there is no genuinely identifiable legal or factual error asserted. We are satisfied that the appeal cannot succeed. We consider that it is properly viewed as both vexatious and an abuse of the Court’s process.

Consumers

'Managed Sovereigns: How Inconsistent Accounts of the Human Rationalize Platform Advertising' by Jake Goldenfein and Lee McGuigan in (2023) 3(3) Journal of Law and Political Economy comments 

Platform business models rest on an uneven foundation. Online behavioral advertising drives revenue for companies like Meta, Google, and Amazon, with privacy self-management governing the flows of personal data that help platforms dominate advertising markets. We argue that this area of platform capitalism is reinforced through a process whereby seemingly incompatible conceptions of human subjects are codified and enacted in law and industrial art. A rational liberal “consumer” agrees to the terms of data extraction and exploitation set by platforms. Inside the platform, however, algorithmic systems act upon a “user,” operationalized as fragmentary patterns, propensities, probabilities, and potential profits. Transitioning from consumers into users, individuals pass through a suite of legal and socio-technical regimes that each orient market formations around particular accounts of human rationality. This article shows how these accounts are highly productive for platform businesses, configuring subjects within a legitimizing framework of consumer sovereignty and market efficiency. ...

As you can tell, reasonable people approach behavioral marketing from very, very disparate perspectives. —Federal Trade Commissioner Jon Leibowitz (FTC 2007)

Policymakers’ beliefs about human cognition and behavior shape how governance structures position people in relation to corporate power (Pappalardo 2012; Hoofnagle 2016). In this paper we ask: Who is the actor presumed and codified in the governance of commercial online platforms? What capacities and subjectivities are imputed onto individuals interacting with these digital surveillance technologies and algorithmic systems of classification and personalization? Focusing on platforms that facilitate online behavioral advertising (OBA), we argue that a contradictory set of rational capacities is assigned to individuals as they act, respectively, as “consumers” who choose to engage with a platform and accept its terms of service, and as “users” who then actually engage with the platform service. Radically inconsistent definitions of the subject are tolerated, or potentially encouraged, to secure the ideological and legal integrity of using market mechanisms to govern personal data flows while simultaneously using those data to predict and manage user behavior. 

When individuals submit to tracking, profiling, and personalization of their opportunities and environments, such as by agreeing to the privacy policies and binding terms of service offered by social media sites and mobile apps or agreeing to pop-up notifications seeking consent for data processing, the presumption is that those individuals act deliberately to maximize self-interest based on a calculation of their options, benefits, and costs (Solove 2013; Susser 2019; Turow, Hennessy, and Draper 2015). Data privacy law in the US and elsewhere encodes a rational consumer, freely trading personal information for “relevant” advertisements and customized services (Baruh and Popescu 2017; Draper 2017; Hoofnagle and Urban 2014). The bargain is considered legitimate so long as (1) transparent disclosures of corporate data practices equip consumers to make reasoned privacy tradeoffs (White House 2012; FTC 2012), and (2) consumers are capable of giving meaningful consent. Marketing experts and policymakers have regarded personalization and the reigning notice-and-choice regime as exemplars of consumer empowerment and market efficiency (Darmody and Zwick 2020; see Thaler and Tucker 2013). As Robert Gehl (2014, 110–111) explains, the subject personified in digital advertising’s policy rhetoric—the “sovereign interactive consumer”—is the “foundational abstraction” of privacy self-management’s governance of social media. 

Individuals are conceptualized much differently on the other side of the privacy-permissions contract, where the presentation of information and opportunities—including advertisements, product offers, and prices—responds dynamically to inferences and predictions about profiled people and populations (Andrejevic 2020; Barry 2020; Fourcade and Healy 2017; Moor and Lury 2018). In the OBA practices enabled by permissive privacy rules, strategic actors operate from the premises that human decision-making is susceptible to influence and that the reliability of that influence can be increased by discerning users’ habits or cognitive patterns (Bartholomew 2017; Calo 2014; Susser et al. 2019). The target of influence is not addressed as a self-determining liberal subject, exercising a stable endowment of preferences and capacities, but rather as a machine-readable collection of variable (sometimes only momentary) properties, correlations, probabilities, and profit opportunities represented by “anonymous” identifiers (Barocas and Nissenbaum 2014; Cheney-Lippold 2017; Fisher and Mehozay 2019). Digital platforms, and the marketers who use them, try to engineer the architectures in which people make choices to systematically increase the likelihood of preferred outcomes, guided by statistical analyses of massive and intimate data (Burrell and Fourcade 2021; Gandy 2021; Nemorin 2018; Yeung 2017). The idea is to steer or nudge individuals toward predicted behaviors through constant tweaking of information environments and opportunities. 

By designing those behavioral pathways, platforms produce and sell probabilistic behavioral outcomes that can be differentiated according to their apparent or expected value (Zuboff 2015). Frequently, that value is determined by the data available about the subject whose attention is being monetized, and so access to personal information that ostensibly enables better valuations or strategic decisions becomes a source of power and advantage for platforms (Birch 2020; West 2019). These processes of prediction and influence may not be as effective as proponents suggest, and, in practice, many advertising professionals work with a mix of algorithmic identities and the demographic or lifestyle categories long used to define consumer targets (Beauvisage et al. 2023). Nevertheless, the business of platform-optimized advertising has become astronomically profitable. That profit is premised on a belief that comprehensive data collection furnishes new abilities to identify and exploit individuals’ susceptibilities, vulnerabilities, and, in certain accounts, irrationalities (Calo 2014), all justified by the pretense of giving sovereign consumers what they desire and bargained for. 

How can we tolerate such a dramatic discrepancy in the conceptions of human rationality that guide high-level policies about the relationships between people and the platforms they use to participate in social, political, and cultural life? How can data collection be authorized by the presumption that rational consumers freely choose to exchange personal data for improved platform services, when the use of that data within the platform presumes that individual users are not rational and that their choices and behaviors can be managed and “optimized” via algorithmic personalization? 

This article contributes to an emergent body of critical scholarship addressing the implications of this inconsistency between the assumptions of liberal subjectivity that frame policy discourse, and the markedly different assumptions about human subjectivity and rationality operative in commercial platforms’ modes of computational or algorithmic management (Barry 2020; Benthall and Goldenfein 2021; Cohen 2013; Goldenfein 2019). We demonstrate that a simple question—Who is the platform subject?—provides a lens for examining (1) the political maneuvers that maintain a system and market configuration premised upon incompatible answers, and (2) how, in the name of consumer sovereignty, privacy self-management and norms of market efficiency are installed and defended as the foundation of platform and data governance against other forms of regulatory intervention. Specifically, we argue that this articulation of OBA and privacy self-management is reinforced through an unlikely process whereby seemingly incompatible conceptions of human subjects are codified and enacted in law and industrial art. 

As an individual transitions from the law’s “consumer” into the platforms’ “user,” they pass through a suite of legal and socio-technical regimes—notice and choice, data protection, consumer protection, and computational or algorithmic management—that each orient market formations around particular accounts of human rationality. These inconsistent accounts are highly productive for platform business practices and the regulatory activities that hold their shape. The vast advertising-oriented sector of platform capitalism is stabilized by a set of institutions that operationalize rationality in divergent yet complementary ways. Those institutions, including the advertising industry and consumer protection law, configure subjects within a framework that simultaneously upholds the ideals of consumer sovereignty and market efficiency while also legitimizing data extraction and its derivative behavioral arbitrage. This article does not argue that law should respond to this contradiction through a more empirically coherent or less stylized subject. The goal is to demonstrate how inconsistencies in legal and industrial accounts of human rationality are used to privilege market ordering for coordinating data flows and to shape those markets in ways that suit commercial stakeholders. 

At this point, defenders of OBA might demand a caveat. They might disavow any manipulative designs, conceding instead that when individuals act as marketplace choosers—both of privacy preferences and of advertised goods and services—they exercise “bounded rationality.” These defenders might insist that the marketplace chooser imagined by designers and practitioners of data-driven, behavioral marketing is a subject who strives for optimal decisions within material constraints, such as limited time, information, and information-processing power. Illegitimate subversion of individuals’ rational-aspiring choices, beyond these unavoidable constraints, will be met by the counteracting force of consumer protection law.

Suppose we accept all that and set aside the possibility that marketers exploit the cognitive biases cataloged by behavioral economists. Even so, the governance of commercial platforms still requires us to recognize that personalization and “choice architecting” are techniques for adjusting the boundedness of rationality—for setting or relaxing the material constraints on decision-making. Rationality is not a natural and persistent endowment, but a contextually situated capacity, shaped by the environments that structure decision-making; it is constituted through calculating devices and it is performative of markets and economization (Callon 1998). What we are suggesting, then, and what the defenders’ caveat does not resolve, is that digital platforms are designed and operated to cultivate specific and often asymmetrical rational capacities. Even at the terms-of-service threshold, where individuals make ostensibly reasoned choices about becoming users who are subject to the pleasures and perils of platform optimization, companies try to secure the continuous supplies of personal data they need by implementing consent-management interfaces that take advantage of human incapacities (Keller 2022; McNealy 2022). Admitting that consumers are “boundedly” rational, as opposed to predictably irrational, does not address the fact that platforms actively manipulate those boundaries, modulating information and design features that produce and delimit user experiences and market activity. 

Further, and crucially for our argument, we suggest that consumer protection law’s evolving recognition of bounded rationality is doing important work for this sector of platform capitalism. Consumer protection in platform governance works to recuperate the political function of the liberal decisionmaker as the legal subject necessary to stabilize existing consumer-rationality-market configurations, and justify market mechanisms over stronger regulatory constraints, maintaining platform control over data flows powering the OBA business model. The legal integration of bounded rationality as a remedy to rational choice theory’s well-known limits makes space for a range of ready-to-hand regulatory interventions framed through behavioral economics, with minor impact on platform businesses. By deploying a broader theory of behavior, consumer protection law maintains the human subject as a legal technology around which profitable market configurations continue to be instituted, while avoiding real constraints on how platforms and their advertising apparatuses profit from behavioral management. The accommodation of behavioral economics, while tackling a specific set of predatory market practices and contriving new categories of vulnerable subjects, ensures that the idealized conception of individuals as autonomous market actors and the normative goal of market efficiency persist, while enabling platforms to simultaneously undermine both.

The next sections survey how consumer rationality has been defined and constructed in advertising and marketing, on the one hand, and in law and privacy regulation on the other. Moving through these analyses, it is important to note that we are not explaining how exactly these competing accounts of the human were transmitted across commercial, social-scientific, and legal domains. We are not suggesting, for example, that regulators have been duped or are involved in a knowing conspiracy. Rather, we demonstrate how the platform, as a site of collision between these contradictory accounts, leverages legally and technically codified forms of human rationality (from privacy, data protection, consumer protection, and computational management) into specific market institutions and governance regimes that justify platform business models.

23 May 2023

Metaverse

'Into the Metaverse: Technical Challenges, Social Problems, Utopian Visions, and Policy Principles' by Vincent Mosco in (2023) Javnost - The Public (Journal of the European Institute for Communication and Culture) comments 

The metaverse holds a prominent place in debates over the future direction of digital networks. Proponents claim that advances in virtual and augmented reality will shape every facet of social life. This article defines the metaverse, explores the state of the technology, and addresses its public policy significance. It makes use of a political economic perspective focusing on the concepts of commodification and spatialisation. Specifically, it considers how major platform and gaming companies plan to use the metaverse to expand market share. The article also addresses the cultural dimensions of the metaverse as the latest in a series of utopian visions of a digital sublime. It proceeds to take up the social problems associated with the metaverse and concludes by describing the essential policy principles that should guide public authorities in regulating the metaverse. These principles include acknowledging that current concerns over implementation do not limit future deployment. Moreover, public policy should start by recognising that the metaverse is a public space and not the private property of the major platforms. Finally, policy must address specific social problems deepened by the arrival of the metaverse including crime, privacy, the impact on climate, and data ownership.

 This Metaverse is going to be far more pervasive and powerful than anything else. If one central company gains control of this, they will become more powerful than any government and be a God on Earth. Tim Sweeney, CEO and Founder of Epic Games  

In 2021, with its social media revenues slowing appreciably, Facebook announced a major strategic change that would deepen its investment in virtual reality systems, or what is increasingly known as the metaverse. In addition to an enormous shift in personnel and budgeting, company CEO Mark Zuckerberg renamed the business he founded Meta and asserted that the metaverse is about “a time when basically immersive digital worlds become the primary way that we live our lives and spend our time”. Meta’s new strategy was quickly followed by Microsoft, as well as by gaming businesses and other Fortune 500 firms that planned to boost profits by investing in, and shifting some of their operations to, the metaverse. As Satya Nadella, Microsoft’s CEO put it, “The metaverse is here, and it’s not only transforming how we see the world but how we participate in it—from the factory floor to the meeting room”. 

Zuckerberg and Nadella were joined by leaders in the public and private sectors who view the metaverse as an opportunity to cut costs and expand products and services. In 2022, the elite World Economic Forum gave its blessing to the virtual world. Aside from a few caveats, the WEF argued that:

the Metaverse has a potential to play a broader role in society through its ability to open our horizons, interact with those that we could not have met in the real world, experience new places, access public services and healthcare, and, overall, create an extension of the real world that we live in, to help us discover ways to make it better. 

The global financial company Citibank forecast that by 2030 the metaverse would be worth between $8 trillion and $13 trillion (all currency in US dollars), and up to 5 billion users would live, work, and play in these new immersive spaces. 

By the end of 2022, Meta’s lofty aspirations appeared less likely to be realised than they were a year earlier. Nevertheless, it is important to focus less on the missteps of a specific company and more on the appeal of virtual world technologies, particularly those enhanced by artificial intelligence systems. Drawing on political economic and cultural studies perspectives, this paper defines the metaverse, assesses its social significance, and addresses key problems and public policy issues.

AI Rights?

'The Full Rights Dilemma for AI Systems of Debatable Moral Personhood' by Eric Schwitzgebel in (2023) 4 Robonomics: The Journal of the Automated Economy comments 

An Artificially Intelligent system (an AI) has debatable moral personhood if it is epistemically possible either that the AI is a moral person or that it falls far short of personhood. Debatable moral personhood is a likely outcome of AI development and might arise soon. Debatable AI personhood throws us into a catastrophic moral dilemma: Either treat the systems as moral persons and risk sacrificing real human interests for the sake of entities without interests worth the sacrifice, or do not treat the systems as moral persons and risk perpetrating grievous moral wrongs against them. The moral issues become even more perplexing if we consider cases of possibly conscious AI that are subhuman, superhuman, or highly divergent from us in their morally relevant properties. ... 

We might soon build artificially intelligent entities – AIs – of debatable moral personhood. We will then need to decide whether to grant these entities the full range of rights and moral consideration that we normally grant to fellow humans. Our systems and habits of ethical thinking are currently as unprepared for this decision as medieval physics was for space flight.

If there is even a small chance that some technological leap could soon produce AI systems with a reasonable claim to personhood, the issue deserves careful consideration in advance. We will have ushered a new type of entity into existence – an entity perhaps as morally significant as Homo sapiens, and one likely to possess radically new forms of existence. Few human achievements have such potential moral importance and such potential for moral catastrophe. An entity has debatable moral personhood, as I intend the phrase, if it is reasonable to think that the entity might be a person in the sense of deserving the same type of moral consideration that we normally give, or ought to give, to human beings, and if it is also reasonable to think that the entity might fall far short of deserving such moral consideration. I intend “personhood” as a rich, demanding moral concept. If an entity is a moral person, they normally deserve to be treated as an equal of other persons, including for example – to the extent appropriate to their situation and capacities – deserving “human” rights, care and concern similar to that of other people, and equal protection under the law. Personhood, in this sense, entails moral status, moral standing, or moral considerability fully equal to that of ordinary human beings (Jaworska and Tannenbaum 2013/2021). By “moral personhood” I do not, for example, mean merely the legal personhood sometimes attributed to corporations for certain purposes.

An AI’s personhood is “debatable”, as I will use the term, if it is reasonable to think that the AI might be a person but also reasonable to think that the AI might fall far short of personhood. Substantial doubt is appropriate – not just minor doubts about the precise place to draw the line in a borderline case. Note that debatable personhood in this sense is both epistemic and relational: An entity’s status as a person is debatable if we (we in some epistemic community, however defined) are not compelled, given our available epistemic resources, either to reject its personhood or to reject the possibility that it falls far short. Other entities or communities, or our future selves, with different epistemic resources, might know perfectly well whether the entity is a person. Debatable personhood is thus not an intrinsic feature of an entity but rather a feature of our epistemic relationship to that entity. 

I will defend four theses. First, debatable personhood is a likely outcome of AI development. Second, AI systems of debatable personhood might arise soon. Third, debatable AI personhood throws us into a catastrophic moral dilemma: Either treat the systems as moral persons and risk sacrificing real human interests for the sake of entities without interests worth the sacrifice, or don’t treat the systems as moral persons and risk perpetrating grievous moral wrongs against them. Fourth, the moral issues become even more perplexing if we consider cases of possibly conscious AI that are subhuman, superhuman, or highly divergent from us in their morally relevant properties.

'Hybrid theory of corporate legal personhood and its application to artificial intelligence' by Siina Raskulla in (2023) 3(78) SN Social Sciences comments 

Artificial intelligence (AI) is often compared to corporations in legal studies when discussing AI legal personhood. This article also uses this analogy between AI and companies to study AI legal personhood but contributes to the discussion by utilizing the hybrid model of corporate legal personhood. The hybrid model simultaneously applies the real entity, aggregate entity, and artificial entity models. This article adopts a legalistic position, in which anything can be a legal person. However, there might be strong pragmatic reasons not to confer legal personhood on non-human entities. The article recognizes that artificial intelligence is autonomous by definition and has greater de facto autonomy than corporations and, consequently, greater potential for de jure autonomy. Therefore, AI has a strong attribute to be a real entity. Nevertheless, the article argues that AI has key characteristics from the aggregate entity and artificial entity models. Therefore, the hybrid entity model is more applicable to AI legal personhood than any single model alone. The discussion recognises that AI might be too autonomous for legal personhood. Still, it concludes that the hybrid model is a useful analytical framework as it incorporates legal persons with different levels of de jure and de facto autonomy. ... 

Artificial intelligence is compared to fire, oil, and electricity in taking humankind to the next level of development. In legal studies, artificial intelligence has been compared to nature, animals, children, idols and money. (See, e.g., Gordon 2021; Gunkel & Wales 2021; Kurki 2019; Solaiman 2017; Beck 2016; Solum 1992.) Nevertheless, the analogy between AI and companies is perhaps the most widely used. The analogy between AI and corporations also works at the level of fire, oil and electricity: Corporate personhood is a legal invention that significantly added value to society during the Roman period, the Middle Ages and the colonial era. Corporate persons continue to create added value in today’s market economy. (See, e.g., Micklethwait and Wooldridge 2003; Berle 1952; Dodd 1948; Savigny 1884, pp. 86−88). 

While the previous discussion on AI legal personhood has been extensive, the contribution of this article is that it utilizes the hybrid model of corporate legal personhood. It applies three distinct models of corporate legal personhood simultaneously: the real entity, aggregate entity, and artificial entity models. (Raskulla 2022, p. 324.) Simultaneous application of several models has also been envisioned by Chatman (2018), with the significant difference that Chatman only applies the real entity and artificial entity models. The article returns to Chatman’s model and its application in Chapter 6. 

Many studies argue that artificial intelligence, particularly one with moral autonomy, challenges pre-existing models, rendering them useless or risky. They suggest that new models of legal personhood are required. (See, e.g., Novelli et al. 2022, p. 202; Laukyte 2020, p. 445; Chen and Burgess 2019, p. 76. See also Mocanu 2021; Kurki 2019.) While the suggestion is warranted, this article studies whether a less radical approach could be adopted. It examines whether the hybrid model of legal personhood could be applied to AI and whether it would offer a new perspective for the challenges we face. 

Chapter 2 of the article introduces key concepts: artificial intelligence and legal personhood. Chapter 3 introduces the theoretical framework by outlining three competing models of corporate legal personhood and discussing the hybrid model. Chapter 4 then aims to resolve whether AI legal personhood is best described by the real entity model, aggregate entity model, artificial entity model, or hybrid model. The objective is not to answer the question of whether AI should be provided with legal personhood but whether it could be provided with legal personhood. These are two very different questions. Nevertheless, Chapter 5 reviews the discussion on the possible risks and advantages of AI legal personhood and considers whether the hybrid model could be useful in resolving any of the issues recognised. Finally, Chapter 6 discusses the article’s contribution, and Chapter 7 concludes by summarizing key outcomes and formulating questions for future research.

Reflection

Law teaching often embeds formulaic 'reflective writing'. 'Can large language models write reflectively' by Yuheng Li, Lele Sha, Lixiang Yan, Jionghao Lin, Mladen Raković, Kirsten Galbraith, Kayley Lyons, Dragan Gašević and Guanliang Chen in (2023) 4 Computers and Education: Artificial Intelligence comments 

Generative Large Language Models (LLMs) demonstrate impressive results in different writing tasks and have already attracted much attention from researchers and practitioners. However, there is limited research to investigate the capability of generative LLMs for reflective writing. To this end, in the present study, we have extensively reviewed the existing literature and selected 9 representative prompting strategies for ChatGPT – the chatbot based on state-of-the-art generative LLMs – to generate a diverse set of reflective responses, which are combined with student-written reflections. Next, those responses were evaluated by experienced teaching staff following a theory-aligned assessment rubric that was designed to evaluate student-generated reflections in several university-level pharmacy courses. Furthermore, we explored the extent to which Deep Learning classification methods can be utilised to automatically differentiate between reflective responses written by students vs. reflective responses generated by ChatGPT. To this end, we harnessed BERT, a state-of-the-art Deep Learning classifier, and compared the performance of this classifier to the performance of human evaluators and the AI content detector by OpenAI. Following our extensive experimentation, we found that (i) ChatGPT may be capable of generating high-quality reflective responses in writing assignments administered across different pharmacy courses, (ii) the quality of automatically generated reflective responses was higher in all six assessment criteria than the quality of student-written reflections; and (iii) a domain-specific BERT-based classifier could effectively differentiate between student-written and ChatGPT-generated reflections, greatly surpassing (up to 38% higher across four accuracy metrics) the classification performed by experienced teaching staff and the general-domain classifier, even in cases where the testing prompts were not known at the time of model training. ... 

Educators frequently administer reflective writing tasks to elicit students' reflections on their prior learning experiences, within or outside a particular course (Mann et al., 2009). Engagement in this writing task has been shown to promote the development of critical thinking and problem-solving, an important set of skills that can benefit life-long learning (Charon & Hermann, 2012). However, recent advancements in generative language models have raised concerns among educators administering written assignments (Kung et al., 2022, Susnjak, 2022, Yan et al., 2023). For instance, by utilising generative language models to automatically draft their reflective written responses, some students may miss the opportunity to engage in authentic and critical reflections on their own learning experiences. In this way, for example, some students may not be able to evaluate the learning strategies they used in the past, and then modify their learning strategies to ensure more productive learning in the future (Raković et al., 2022). More importantly, the instructors cannot give tailored feedback to improve student learning. 

In particular, ChatGPT, a recently released chatbot based on artificial intelligence (AI), has demonstrated the potential to comprehend different requests from users and, per those requests, generate relevant and insightful texts for different purposes and in different domains, e.g., journal articles (Pavlik, 2023), financial reports (Wenzlaff & Spaeth, 2022), and academic literature reviews (Aydın & Karaarslan, 2022). Despite the promises of ChatGPT to make text generation more efficient, many educators are concerned regarding the potentially detrimental effects of using automatic text generation methods to facilitate student writing, including reflective writing (Kung et al., 2022, Stokel-Walker, 2022, Susnjak, 2022). However, those concerns have not been empirically supported as of yet in the context of reflective writing. More research is thus needed to empirically document and understand the capabilities of cutting-edge text generation methods to generate reflective writing, thus providing educators and researchers with new insights regarding the use of these methods in reflective writing. In addition, it remains unknown whether/how AI-generated writing can be accurately differentiated from students' original work. This may be particularly important for educational stakeholders aiming to identify and prevent academic misconduct, e.g., reflective essays generated by ChatGPT, but submitted as students' original work (Stokel-Walker, 2022). To address these challenges, in this study, we set out to (1) empirically examine the quality of reflective responses generated by ChatGPT and (2) empirically investigate the use of state-of-the-art classification approaches to differentiate between the responses generated by the ChatGPT bot and the responses originally generated by students. 

Accordingly, we posed the following Research Questions (RQs): RQ1 – Can ChatGPT generate high-quality reflective writings? RQ2 – To what extent are reflective responses generated by ChatGPT distinguishable from reflective responses written by university students? To answer the RQs, we have extensively prompted ChatGPT to generate a diverse set of reflective writings. We also involved experienced teaching staff to evaluate the reflective depth presented in the writings. Lastly, we compared the differentiation performance (i.e., whether the ChatGPT-generated writings could be differentiated from student-written ones) among (i) experienced teaching staff; (ii) the state-of-the-art AI text detector released by OpenAI; and (iii) a BERT-based classifier (Devlin et al., 2018) fine-tuned on reflective writings generated by ChatGPT and written by students. 

The contribution of this paper is two-fold: 1) we illustrated the capability of state-of-the-art large language models, specifically ChatGPT, in generating reflective writings and the quality of ChatGPT-generated content compared to student-written works, and 2) we developed a BERT-based classifier for distinguishing between AI-generated and student-written reflective writings. These timely contributions could inform educational researchers and practitioners regarding ChatGPT and other large language models' potential impacts on reflective writing tasks, such as the risk that students miss the opportunity to engage in cognitive reflection if they choose to use ChatGPT to generate reflective writing for the very purpose of completing an assessment.
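
As an aside for technically minded readers, the detection approach the authors describe (fine-tuning BERT to label reflections as student-written or ChatGPT-generated) is straightforward to sketch. The following is a minimal, hypothetical illustration using the Hugging Face transformers and datasets libraries, not the authors' actual code; the file names, label convention, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the kind of BERT-based detector described above,
# fine-tuned to label reflections as student-written (0) or
# ChatGPT-generated (1). Not the authors' code: file names, label
# convention, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = student, 1 = ChatGPT
)

# Hypothetical CSV files with "text" and "label" columns.
data = load_dataset(
    "csv",
    data_files={"train": "reflections_train.csv", "test": "reflections_test.csv"},
)

def tokenize(batch):
    # Truncate/pad each reflection to BERT's 512-token input limit.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bert-reflection-detector",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        evaluation_strategy="epoch",
    ),
    train_dataset=data["train"],
    eval_dataset=data["test"],
)
trainer.train()
print(trainer.evaluate())  # eval loss; add accuracy/F1 via compute_metrics
```

On the paper's reported figures, a domain-specific classifier along these lines outperformed both experienced teaching staff and OpenAI's general-purpose detector at telling the two kinds of reflection apart.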

22 May 2023

Tasmanian Privacy Law

The Tasmanian Law Reform Institute has released an issues paper regarding the state's privacy regime. 

The paper states 

 This Inquiry was initiated by the Honourable Meg Webb, Independent member of the Tasmanian Legislative Council. The Reference was accepted by the Tasmanian Law Reform Institute (‘TLRI’) Board in December 2019. The TLRI applied for a grant from the Solicitors Guarantee Fund to undertake the Inquiry. In May 2020, the TLRI received advice that its application had been partially successful, with a lesser amount granted than requested. 

The issue of privacy protection is topical in view of the matters raised in the Terms of Reference below and other developments, such as national data breaches relating to organisations such as Medibank and Optus. 

The Terms of Reference were referred to the TLRI in view of:

• the rapid and extensive advances in information, communication, storage, surveillance and other relevant technologies; 

• possible changing community perceptions of privacy and the extent to which it should be protected by legislation; 

• the expansion of state and territory legislative activity in relevant areas; and 

• emerging areas that may require privacy protection. 

The Terms of Reference are for the TLRI to inquire into, review and report on:

1. the current protections of privacy and of the right to privacy in Tasmania and any need to enhance or extend protections for privacy in Tasmania; 

2. the extent to which the Personal Information Protection Act 2004 (Tas) and related laws continue to provide an effective framework for the protection of privacy in Tasmania and the need for any reform to that Act; and 

3. models that enhance and protect privacy in other jurisdictions (in Australia and overseas).

In undertaking this reference, the TLRI will consider and have regard to:

a) the United Nations International Covenant on Civil and Political Rights and other relevant international instruments that protect the right to privacy; 

b) relevant existing and proposed Commonwealth, state and territory laws and practices; 

c) any recent reviews of the privacy laws in other jurisdictions; 

d) current and emerging international law and obligations in this area; 

e) privacy regimes, developments and trends in other jurisdictions; 

f) the need of individuals for privacy protection in an evolving technological environment; and 

g) any other related matter.

The TLRI will identify and consult with relevant stakeholders and ensure widespread public consultation on how privacy and obligations relating to protecting privacy can best be promoted and protected in Tasmania, and provide recommendations as to an appropriate model for Tasmania to protect and enhance privacy rights and protections.

The Institute comments 

The content for this Issues Paper was finalised in January 2023. This preceded the release of a report on 16 February 2023 by the Commonwealth Attorney-General’s Department on its review of the Privacy Act 1988 (Cth) (‘Privacy Act’). Accordingly, this Issues Paper does not consider the findings of the report as to options for reforming the Privacy Act (particularly relevant to the contents of Part 2, noted below). However, the findings of the Commonwealth report will be considered in the drafting of the TLRI Final Report and the formulation of recommendations. 

• Part 1 (pages 1 to 6) introduces readers to the concept of privacy protection and gives an overview of existing legal frameworks for privacy protection in Tasmania, Australia, and internationally. 

• Part 2 (pages 7 to 45) discusses the scope, operation, and enforcement of privacy protection under the frameworks introduced in Part 1, focusing on information held by government agencies. It compares the protections in Tasmania under the Personal Information Protection Act 2004 (Tas) (‘PIPA’) with those in other Australian jurisdictions, particularly under the Privacy Act. Part 2 also considers possible future reforms of these frameworks and examines international developments, including the European Union’s General Data Protection Regulation 2016/679 (‘GDPR’). 

• Part 3 (pages 47 to 51) explores different provisions in legislation other than the PIPA that affect how government-held information can be used and shared. It analyses how these provisions affect information privacy and draws comparisons with similar laws in other jurisdictions. 

• Part 4 (pages 52 to 66) broadens the scope beyond government-held information to consider various types of privacy protections under legislation, as well as case law. It discusses legislation regulating information in the context of health services; legislation regulating surveillance (by government or otherwise); criminal laws which create offences relating to stalking and harassment and to the sharing of intimate images; and non-legislative protections in the general law. Part 4 concludes by considering the introduction of a comprehensive civil remedy for interference with privacy and sets out questions about the appropriate model for law reform.

On that basis the paper 

provides background, context, and considerations regarding privacy laws in Tasmania. The aim is to facilitate informed discussion about how privacy can best be legally protected, given the rapid advances in information technology, changing community perceptions about the importance of privacy, and growing legislative regulation of various matters. 

The Paper adopts a broad working definition of privacy ([1.1.2]) which covers the overlapping categories of information privacy, privacy of communications, bodily privacy, and territorial privacy. Bodily and territorial privacy are collectively known as ‘rights to seclusion’, which is the right to have one’s physical self and one’s environment free from intrusion. 

Currently, there is no comprehensive privacy regulation in Tasmania. Rather, privacy protection is fragmented across different laws that protect different types of privacy in different specific circumstances ([1.2]). Different legislation may interact to affect privacy protections (Part 3). The applicability of regulations at the Australian federal level under the Privacy Act and the international level, for example under the European Union’s General Data Protection Regulation 2016/679 (‘GDPR’), create further complexity in the landscape of privacy protection. The primary privacy framework in Tasmania is the Personal Information Protection Act 2004 (Tas) (‘PIPA’) which binds government agencies and their contractors. It protects the information privacy of government-held information, primarily through prescribing ten ‘Personal Information Protection Principles’ by which the entities must abide. While a detailed piece of legislation, there are multiple gaps in its scope, operation, and enforcement that can jeopardise privacy. 

Regarding scope, for example, the PIPA does not cover non-government organisations such as for-profit businesses ([2.2.3]); it does not contemplate the possibility of de-identified information being re-identified with the help of additional information ([2.2.22]–[2.2.28]); it does not protect unsolicited personal information—information that comes into the hands of government agencies or their contractors without a deliberate effort on their part to collect it ([2.3.51]); and it does not grant special protections for biometric information, unlike the Commonwealth law ([2.2.43]). 

Advances in technology can exacerbate the impact of these gaps. For example, the lack of special protection for biometric information may pose a greater risk to individuals as technologies such as facial recognition increase in sophistication. 

This Paper suggests potential reforms to the PIPA aimed at improving privacy protection, such as by allowing individuals to have a right to object to their information being processed, and a right to request their information be erased ([2.3.60]–[2.3.90]). 

However, some of the most important gaps relate to the enforcement of the PIPA, rather than its scope. In particular, there is limited ability for an aggrieved individual to seek review of decisions about whether or not there has been a breach ([2.4.7], [2.4.14]); there are no penalties imposed for breaching obligations ([2.4.10]); there is no mandatory data breach notification scheme that compels information handlers to notify an individual where a breach of their privacy has occurred ([2.4.21]); there is no ability for those handling complaints to order compensation ([2.4.8]); and there is no private right of action that allows an individual to go to court to seek damages for financial or non-financial harm suffered as a result of the breach. 

These gaps, together with the fragmented landscape of protections under both legislation and general law, mean that some circumstances that endanger privacy may fall between the cracks of legal regulation ([4.4.3]). This raises questions as to whether there may be a case for creating a civil statutory cause of action (and remedy) for interference with privacy ([4.4]). If such a remedy were to be created, consideration is given to whether it should be comprehensive (applying independently of the context in which the interference occurs), apply in place of or in addition to the existing suite of remedies, and allow individuals to seek redress in court when they have suffered harm. 

In discussing the strengths and weaknesses of the PIPA and privacy laws more generally, this Paper seeks input from the community on several issues, including whether:

• certain entities should be covered by the PIPA; 

• a greater range of remedies should be available for those affected by a breach of the PIPA; 

• a data breach notification requirement should be introduced; 

• new rights to object and to erasure should be introduced; 

• there should be privacy regulation on specific technology such as drones; 

• existing judicial recognition of privacy affords adequate protection; and 

• there should be a civil cause of action for privacy and, if so, what its scope should be.