16 September 2023

Discrimination

‘Setting the Framework for Accountability for Algorithmic Discrimination at Work’ by Alysia Blackham in (2023) 47(1) Melbourne University Law Review comments 

Digital inequalities at work are pervasive yet difficult to challenge. Employers are increasingly using algorithmic tools in recruitment, work allocation, performance management, employee monitoring and dismissal. According to a survey conducted by the Society for Human Resource Management, nearly one in four companies in the United States (‘US’) use artificial intelligence (‘AI’) in some form for human resource management. Of those surveyed who do not use automation for such processes, one in five organisations plan to either use or increase their use of such AI tools for performance management over the next five years. 

The elimination of discrimination in employment and occupation is a fundamental obligation of International Labour Organization (‘ILO’) members, and is included in the ILO Declaration on Fundamental Principles and Rights at Work. This obligation invariably extends to the digital sphere. It is critical, then, to create a meaningful framework for accountability for these algorithmic tools. At present, though, it is unclear who is responsible for monitoring the risks of algorithmic decision-making at work: is it the technology companies who develop and market these algorithmic products? The employers using algorithmic tools? Or the individual workers who might experience inequality as a result of algorithmic decision-making? Or, indeed, all three? 

This article considers how we might create a framework for accountability for digital inequality, specifically concerning the use of algorithmic tools in the workplace that disadvantage groups of workers. In Part II, I consider how algorithms and algorithmic management might be deployed in the workplace, and the way this might address or exacerbate inequality at work. I argue that the automated application of machine learning (‘ML’) algorithms in the workplace presents five critical challenges to equality law related to: the scale of data used; their speed and scale of application; lack of transparency; growth in employer control; and the complex supply chain associated with digital technologies. In Part III, I consider principles that emerge from privacy and data protection law, third-party and accessorial liability, and collective solutions to reframe the operation of equality law to respond to these challenges. Focusing on transparency, third-party and accessorial liability, and supply chain regulation, I draw on comparative doctrinal examples from the European Union (‘EU’) General Data Protection Regulation (‘GDPR’), the Australian Privacy Principles (‘APP’) and Fair Work Act 2009 (Cth) (‘Fair Work Act’), and collectively negotiated solutions to identify possible paths forward for equality law. This analysis adopts comparative doctrinal methods, reflecting what Örücü describes as a ‘problem-solving’ or sociological approach to comparative law, examining how different legal systems have responded to similar problems in contrasting ways. The fact that these jurisdictions are facing a similar problem warrants the comparison; differences in national context increase the potential for mutual learning. The GDPR is seen as setting the standard or benchmark for global data protection regulation: it is therefore considered here as an important comparator to Australian provisions. 

Drawing on these principles, I argue that there is a need to develop a meaningful accountability framework for discrimination effected by algorithms and automated processing, with differentiated responsibilities for algorithm developers, data processors and employers. While discrimination law — either via claims of direct or indirect discrimination — might be adequately framed to accommodate algorithmic discrimination, I argue for a need to reframe equality law around proactive positive equality duties that better respond to the risks of algorithmic management. This represents a critical and innovative contribution to Australian legal scholarship, which has rarely considered the implications of technological and algorithmic tools for equality law. Given the critical differences between Australian, US and EU equality law, there is a clear need for jurisdiction-specific consideration of these issues.
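
By way of purely illustrative context (this sketch is not drawn from Blackham's article), the kind of indirect discrimination the article is concerned with is often screened for by comparing an algorithmic tool's selection rates across groups. The data, group labels and the 0.8 threshold (the US 'four-fifths' rule of thumb, which has no counterpart in Australian law) are assumptions for illustration only.

    # Illustrative only: compare an algorithmic hiring tool's selection rates by
    # group and compute a disparate impact ratio against a reference group.
    from collections import Counter

    def selection_rates(decisions):
        """decisions: iterable of (group, selected) pairs; selected is True/False."""
        totals, chosen = Counter(), Counter()
        for group, selected in decisions:
            totals[group] += 1
            if selected:
                chosen[group] += 1
        return {g: chosen[g] / totals[g] for g in totals}

    def disparate_impact(decisions, reference_group, threshold=0.8):
        rates = selection_rates(decisions)
        ratios = {g: r / rates[reference_group] for g, r in rates.items()}
        flagged = [g for g, ratio in ratios.items() if ratio < threshold]
        return ratios, flagged

    decisions = [("A", True)] * 60 + [("A", False)] * 40 + \
                [("B", True)] * 30 + [("B", False)] * 70
    print(disparate_impact(decisions, reference_group="A"))
    # ({'A': 1.0, 'B': 0.5}, ['B'])  -- group B is selected at half group A's rate

Such a ratio is only a screening heuristic; whether algorithmic outcomes amount to unlawful indirect discrimination turns on the statutory tests and accountability questions the article discusses.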

Performers' Rights

'AI and Performers’ Rights in Historical Perspective' (CREATe Working Paper 2023/9) by Elena Cooper comments 

This article uses legal history as a vantage point for reflecting on the current moment in the debate about AI and performers’ rights. Current debates often refer to ‘creators’ and/or ‘copyright’ as generic categories denoting both performers and authors. Legal history, I argue, sharpens the critical lens on current debate by drawing our attention to what today remains different about the legal rules protecting performers. That difference, at present, leaves performers less well placed to deal with the challenge of AI than authors and also goes to the heart of Equity’s current reform proposals. That difference should now be debated. 

Last year, I published an Opinion in the E.I.P.R. – Copyright History as a Critical Lens – a follow-up to Interrogating Copyright History (an E.I.P.R. Opinion that I co-authored with Ronan Deazley in 2016). I argued that the study of law in past times can be a powerful critical lens on how we see the legal present. Whether the past reveals a story of continuity or change, there is, I argued, value in looking backwards, before we look forwards: an historical perspective helps us to recover the contingency of the present, to imagine things differently and to look to the future with a more critical eye. 

My comments should be read in the light of a wider ‘historical turn’ in critical thinking about intellectual property law in the last two decades, following the publication in 1999 of The Making of Modern Intellectual Property Law by Brad Sherman and Lionel Bently. Sherman and Bently’s historical work showed that the categories of intellectual property that we know today are not timeless, natural or inevitable; ‘theory... played at best an ex post facto role’ in later legitimating the legal categories that emerged. In so doing, Sherman and Bently opened the way for legal historians to probe critical questions about the legal present and future of intellectual property law. 

In this article, I provide an example of looking backwards before we look forwards. I demonstrate the critical value of an historical long-lens on a discrete strand of current legal debates raised by cutting-edge technology today: a facet of the impact of Artificial Intelligence technology (or ‘AI’) on performers, particularly actors, and the search today for an appropriate legal response. Equity, the UK actors’ trade union, launched a campaign last year, Stop AI Stealing the Show, seeking the legislative reform of statutory performers’ rights, specifically the increase in the scope of legal protection. However, the UK Government has, so far, resisted reform. The UK Government’s position on performers’ rights is indicated in the closing paragraphs of its response to the UK Intellectual Property Office’s consultation AI and IP: Copyright and Patents. Referring to proposals for ‘an expansion of the scope of performers’ rights in the Copyright, Designs and Patents Act 1988’, the UK Government comments as follows: at this stage, the impacts of AI technologies on performers remain unclear. It is also unclear whether and how existing law (both in the IP framework and beyond it) is insufficient to address any issues. If intervention is necessary, the IP system may not be the best vehicle for this. We will keep these issues under review from an IP perspective. 

How might an historical perspective enable us critically to reflect on the present moment in the debate of the future of performers’ rights? 

Technology in the 21st century: Aspects of the challenge of AI today 

Before I look to the past, I start with today. AI is technology that uses machine-based learning to perform functions that were previously the province of usually slower, more costly and/or more labour-intensive processes undertaken by human beings. AI has a wide field of application, from medical diagnosis and robotics to the management of insurance risk. However, one set of questions for intellectual property law today relates to AI’s impact on authors and performers working in a variety of sectors. 

The creative potential of AI technology has been embraced by some visual artists and AI has been hailed elsewhere as democratising creativity, by providing everyone with the tools to create cultural works.  AI technology also offers new possibilities for enhancing human performances, for instance, in the creation of video games, in turn opening opportunities for actors. Yet, there are also reports that AI is, in certain contexts, increasingly replacing human authors and performers, and putting them out of work. A good example of this is the audio performance sector, where AI generated voices can now be used to undertake audio work (e.g. audio books) or provide voice-overs at negligible cost, in circumstances where a professional actor would previously have been employed. Redundancy for actors caused by the introduction of AI – the replacement of humans by machines – is reported to be now increasingly commonplace. 

While AI can replace human authors and performers, it also frequently utilises their pre-existing work. AI learns by tracking patterns in an existing body of material: a ‘data-set’. In the case of visual images produced by AI, the data-set may comprise large quantities of copyright-protected visual images scraped from the internet without copyright clearance. For AI-produced voices, the data-set may be a collection of recordings of human voices, which may have been recorded by actors for another unrelated purpose (e.g. a casting or audition) and included in the data-set without consent. Alternatively, the data-set may comprise recordings that were licensed to a third party for broad purposes, e.g. ‘for research’, yet particularly where the licence pre-dates AI technology, AI uses were not specifically contemplated by the parties at the time the contract was concluded. 

In addition to the AI learning process, the material generated by AI – for instance images generated in response to a text prompt – may involve ‘copying’ through a new means: computer synthetisation.  In relation to performance, prior to AI technology, the circumstances in which a performance could be copied, without direct taking from a recording itself, were more limited and confined to human imitation such as a sound-a-like imitating an actor’s voice, as in the passing off case of Sim v Heinz (discussed further below).  By contrast, AI technology today opens a future in which a performance, or aspects of a performance, can be recreated through technology, without direct copying from the recording. Equity, adopting the arguments of Mathilde Pavis in Artificial Intelligence and Performers’ Rights, refers to these new modes of copying performances, via ‘digital sound and look-alike’, as ‘performance synthetisation’. 

Whether or not performers are sufficiently protected is, of course, tied to the distinct circumstances raised by the new and unprecedented technologies of today: the challenge for legislators and the courts is to strike the right balance of interests in a specific technological context and then (as in the case of the Economics of Music Streaming Enquiry in recent times) to continue to track how well that ‘balance’ operates in practice. How, then, can the past help us to reflect on debates about AI and performers today?

Deidentification

'De-Identifying Government Datasets: Techniques and Governance' (NIST SP 800-188) by Simson Garfinkel, Barbara Guttman, Joseph Near, Aref Dajani and Phyllis Singer comments 

De-identification is a general term for any process of removing the association between a set of identifying data and the data subject. This document describes the use of de-identification with the goal of preventing or limiting disclosure risks to individuals and establishments while still allowing for the production of meaningful statistical analysis. Government agencies can use de-identification to reduce the privacy risk associated with collecting, processing, archiving, distributing, or publishing government data. Previously, NIST IR 8053, "De-Identification of Personal Information," provided a detailed survey of de-identification and re-identification techniques. This document provides specific guidance to government agencies that wish to use de-identification.

Before using de-identification, agencies should evaluate their goals for using de-identification and the potential risks that releasing de-identified data might create. Agencies should decide upon a data-sharing model, such as publishing de-identified data, publishing synthetic data based on identified data, providing a query interface that incorporates de-identification, or sharing data in non-public protected enclaves. Agencies can create a Disclosure Review Board to oversee the process of de-identification. They can also adopt a de-identification standard with measurable performance levels and perform re-identification studies to gauge the risk associated with de-identification.

Several specific techniques for de-identification are available, including de-identification by removing identifiers, transforming quasi-identifiers, and generating synthetic data using models. People who perform de-identification generally use special-purpose software tools to perform the data manipulation and calculate the likely risk of re-identification. However, not all tools that merely mask personal information provide sufficient functionality for performing de-identification. This document also includes an extensive list of references, a glossary, and a list of specific de-identification tools, which is only included to convey the range of tools currently available and is not intended to imply a recommendation or endorsement by NIST.
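
The techniques surveyed above lend themselves to a brief sketch. The following is illustrative only and is not code from, or endorsed by, NIST SP 800-188: it drops direct identifiers, coarsens two quasi-identifiers, and uses a crude k-anonymity count as one rough gauge of re-identification risk. The field names, generalisation rules and choice of quasi-identifiers are assumptions.

    # Illustrative de-identification sketch: remove direct identifiers, generalise
    # quasi-identifiers, then measure the smallest equivalence class (k-anonymity).
    from collections import Counter

    DIRECT_IDENTIFIERS = {"name", "email", "tax_file_number"}
    QUASI_IDENTIFIERS = ("age_band", "postcode_prefix", "sex")

    def deidentify(record):
        out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        out["age_band"] = f"{(out.pop('age') // 10) * 10}s"       # 37 -> "30s"
        out["postcode_prefix"] = out.pop("postcode")[:2] + "**"   # "3051" -> "30**"
        return out

    def k_anonymity(records):
        # Smallest group of records sharing the same quasi-identifier values;
        # a low k suggests some individuals remain easy to single out.
        groups = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in records)
        return min(groups.values())

    raw = [{"name": "A. Citizen", "email": "a@example.org", "tax_file_number": "123",
            "age": 37, "postcode": "3051", "sex": "F", "condition": "asthma"}]
    released = [deidentify(r) for r in raw]
    print(released[0], "k =", k_anonymity(released))

Real de-identification workflows also weigh utility loss and attacker models, which is one reason the document recommends governance structures such as a Disclosure Review Board rather than reliance on any single technical test.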

15 September 2023

Black Boxes and Health Torts

'Decoding US Tort Liability in Healthcare’s Black-Box Era: Lessons From the EU' by Mindy Duffourc and Sara Gerke in (2023) 27 Stanford Technology Law Review comments 

The rapid development of sophisticated artificial intelligence (AI) tools in healthcare presents new possibilities for improving medical treatment and general health. Currently, such AI tools can perform a wide range of health-related tasks, from specialized autonomous systems that diagnose diabetic retinopathy to general-use generative models like ChatGPT that answer users’ health-related questions. On the other hand, significant liability concerns arise as medical professionals and consumers increasingly turn to AI for health information. This is particularly true for black-box AI because while potentially enhancing the AI’s capability and accuracy, these systems also operate without transparency, making it difficult or even impossible to understand how they arrive at a particular result. 

The current liability framework is not fully equipped to address the unique challenges posed by black-box AI’s lack of transparency, leaving patients, consumers, healthcare providers, AI manufacturers, and policymakers unsure about who will be responsible for AI-caused medical injuries. Of course, the United States (US) is not the only jurisdiction faced with a liability framework that is out-of-tune with the current realities of black-box AI technology in the health domain. The European Union (EU) has also been grappling with the challenges that black-box AI poses to traditional liability frameworks and recently proposed new liability Directives to overcome some of these challenges. 

As the first article to analyze the liability frameworks governing medical injuries caused by black-box AI in both the US and EU, we demystify the structure and relevance of foreign law in this area to provide practical guidance to courts, litigators, and other stakeholders seeking to understand the application and limitations of current and newly proposed liability law in this domain. We reveal that remarkably similar principles will operate to govern liability for medical injuries caused by black-box AI and that, as a result, both jurisdictions face similar liability challenges. These similarities offer an opportunity for the US to learn from the EU’s newly developed approach to governing liability for AI-caused injuries. In particular, we identify four valuable lessons from the EU’s approach: (1) a broad approach to AI liability fails to provide solutions to some challenges posed by black-box AI in healthcare; (2) traditional concepts of human fault pose significant challenges in cases involving black-box AI; (3) product liability frameworks must consider the unique features of black-box AI; and (4) evidentiary rules should address the difficulties that claimants will face in cases involving medical injuries caused by black-box AI. 

13 September 2023

Profiling and Matching

The Explanatory Memo for the Identity Verification Services Bill 2023 (Cth) states 

 Identity verification services are a series of automated national services offered by the Commonwealth to allow government agencies and industry to efficiently compare or verify personal information on identity documents against existing government records, such as passports, driver licences and birth certificates. 

1:1 matching services (the Document Verification Service (DVS) and the Face Verification Service (FVS)) are now used every day by Commonwealth, State and Territory government agencies and industry to securely verify the identity of individuals. In 2022, the DVS was used over 140 million times by approximately 2,700 government and industry sector organisations, and there were approximately 2.6 million FVS transactions in the 2022-23 financial year. 

Examples of the current uses of the DVS and FVS include:

• verifying the identity of an individual when establishing a myGovID to access online services, including services provided by the Australian Taxation Office 

• financial service providers, such as banks, when seeking to verify the identity of their customers and to meet the ‘know your customer’ obligation under the Anti-Money Laundering and Counter Terrorism Financing Act 2006 (Cth) 

• Government agencies when providing services, disaster relief and welfare payments, and 

• Commonwealth, state and territory government agencies verifying identity in order to provide or change credentials. 

The Identity Verification Services Bill 2023 establishes new primary legislation that provides a legislative framework to support the operation of the identity verification services. The Bill will support the efficient and secure operation of the services without compromising the privacy of the Australian community. 

The IVS Bill will:

• authorise 1:1 matching of identity through the identity verification services, with consent of the relevant individual, by public and private sector entities. This will be enabled by:

  – the Document Verification Service, which provides 1:1 matching to verify biographic information (such as a name or date of birth), with consent, against government-issued identification documents; 

  – the Face Verification Service, which provides 1:1 matching to verify biometric information (in this case a photograph or facial image of an individual), with consent, against a Commonwealth, state or territory issued identification document (for example, passports and driver licences); and 

  – the National Driver Licence Facial Recognition Solution (NDLFRS), which enables the FVS to conduct 1:1 matching against State and Territory identification documents such as driver licences. 

• authorise 1:many matching services through the Face Identification Service only for the purpose of protecting the identity of persons with a legally assumed identity, such as undercover officers and protected witnesses. The protection of legally assumed identities will also be supported by the use of the FVS. All other uses of 1:many matching through the identity verification services will not be authorised, and will therefore be prohibited. 

• authorise the responsible Commonwealth department – in this case the Attorney General’s Department – to develop, operate and maintain the identity verification facilities (the DVS hub, the Face Matching Service Hub and the NDLFRS). These approved identity verification facilities will be used to provide the identity verification services. These facilities will relay electronic communications between persons and bodies for the purposes of requesting and providing identity verification services. 
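
The Memo's distinction between 1:1 matching (the DVS and FVS) and 1:many matching (the Face Identification Service) can be illustrated with a short, purely hypothetical sketch. It is not part of the Memo or the Bill; the record structures, similarity score and threshold are invented for illustration.

    # Hypothetical sketch of 1:1 verification versus 1:many identification.
    def similarity(probe, template):
        # Stand-in for a biometric comparison score in [0, 1].
        return 1.0 if probe == template else 0.0

    def verify_one_to_one(probe, nominated_record, threshold=0.9):
        """FVS-style: does the probe match the single document the person nominated?"""
        return similarity(probe, nominated_record["template"]) >= threshold

    def identify_one_to_many(probe, gallery, threshold=0.9):
        """FIS-style: search an entire gallery for candidate matches."""
        return [r["id"] for r in gallery if similarity(probe, r["template"]) >= threshold]

    gallery = [{"id": "licence-001", "template": "t1"},
               {"id": "licence-002", "template": "t2"}]
    print(verify_one_to_one("t2", gallery[1]))    # True: consent-based check of one record
    print(identify_one_to_many("t2", gallery))    # ['licence-002']: search across all records

As the Memo explains, the Bill authorises the first kind of check broadly (with consent) but confines the second to protecting persons with legally assumed identities.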

Subject to robust privacy safeguards, the Department will be authorised to collect, use and disclose identification information through the approved identity verification facilities for the purpose of providing identity verification services and developing, operating and maintaining the NDLFRS. Offences will apply to certain entrusted persons for the unauthorised recording, disclosure or accessing of protected information. 

The Bill ensures that the operation of the identity verification services and requests for the use of those services are subject to privacy protections and safeguards. These include consent and notice requirements, privacy impact assessments, requirements to report security breaches and data breaches, complaints handling, annual compliance reporting and transparency about how information will be collected, used and disclosed. Furthermore, privacy law and/or the Australian Privacy Principles will apply to almost all entities that seek to make a request for identity verification services. These privacy protections and safeguards will be set out in participation agreements. 

Government authorities that supply identification information that is used for the purpose of identity verification services will also be subject to the privacy protections and safeguards captured in the participation agreement. Breaches of participation agreements can lead to suspension or termination of the agreement, meaning that the entity would no longer be able to request identity verification services. 

States or territories seeking to contribute to the NDLFRS will be subject to privacy obligations and safeguards, which are required by the Bill and will be set out in the NDLFRS hosting agreement. 

The Bill requires parties to the agreement to agree to be bound by the Privacy Act or a state or territory equivalent, or agree to be subject to the Australian Privacy Principles. The Bill requires state or territory authorities to inform individuals if their information is stored on the NDLFRS (and provide for a mechanism by which those persons can correct any errors), inform the Department and individuals whose information is stored on the NDLFRS of any data breaches, establish a complaints mechanism, and report annually to the Department on the party’s compliance with the agreement. The Bill enables states and territories to limit the use of identity information stored on the NDLFRS, and requires the Department to maintain the security of the NDLFRS. The Department may suspend or terminate access to the NDLFRS in the event of a party’s non-compliance with legislative obligations. 

To protect the privacy of Australians, the Department will be required to maintain the security of electronic communications to and from the approved identity verification facilities, and the information held in the NDLFRS. This information and these communications must be encrypted, and data breaches must be reported. 

There will be transparency about the operation of the approved identity verification facilities, including through extensive annual reporting requirements and annual assessments by the Information Commissioner on the operation and management of the facilities. 

The Bill reflects and seeks to implement aspects of the Commonwealth’s commitments under the Intergovernmental Agreement on Identity Matching Services (Intergovernmental Agreement). The Intergovernmental Agreement provides that jurisdictions would share and match biographic and biometric information, with robust privacy safeguards, through the identity verification services. 

The Bill will be supported by the Identity Verification Services (Consequential Amendments) Bill which amends the Australian Passports Act 2005 to provide a clear legal basis for the Minister to disclose personal information for the purpose of participating in one of the following services to share or match information relating to the identity of a person:

- the DVS or the FVS, or 

- any other service, specified or of a kind specified in the Minister’s determination. 

The Consequential Amendments Bill will also allow for automated disclosures of personal information to a specified person via the DVS or the FVS. In combination, this comprehensively authorises the operation of the DVS and FVS in relation to Australian travel documents regulated by the Australian Passports Act.

The Memo also states

... subclause 6(4) of the Bill ensures certain types of information are excluded and cannot be sought or requested through the identity verification services. This information is: 

  •  information or an opinion about an individual’s racial or ethnic origin, political opinions, membership of a political association, religious beliefs or affiliations, philosophical beliefs, membership of a trade union, sexual orientation or practices, or criminal record (paragraph (a)) 

  • health information about an individual (as defined in section 6FA of the Privacy Act) (paragraph (b)), and 

  • genetic information about an individual (paragraph (c))

12 September 2023

Lobbying

The Lobby Network: Big Tech's Web of Influence in the EU (Corporate Europe Observatory and LobbyControl, 2021) by Max Bank, Felix Duffy, Verena Leyendecker and Margarida Silva comments 

As Big Tech’s market power has grown, so has its political clout. Just as the EU tries to rein in the most problematic aspects of Big Tech – from disinformation and targeted advertising to excessive market power – the digital giants are lobbying hard to shape new regulations. They are being given disproportionate access to policy-makers, and their message is amplified by a wide network of think tanks and other third parties. Corporate Europe Observatory and LobbyControl profile Big Tech’s lobby firepower, given it is now the EU’s biggest industry by lobby spending. 

The lobby firepower of Big Tech undermines democracy 

In the last two decades we have seen the rise of companies providing digital services. Big Tech firms have become all-pervasive, playing critical roles in our social interactions, in the way we access information, and in the way we consume. These firms not only strive to be dominant players in one market, but with their giant monopoly power and domination of online ecosystems, want to become the market itself. 

In her announcement of plans to shape the EU’s digital future, President of the European Commission von der Leyen declared: “I want that digital Europe reflects the best of Europe – open, fair, diverse, democratic, and confident.” 

The current situation is quite the opposite. Tech firms like Google, Facebook, Amazon, Apple and Microsoft long ago consolidated their hold on their markets and now dominate the top spots among the world’s biggest companies. 

A mere handful of companies determine the rules of online interaction and increasingly shape the way we live. The COVID-19 pandemic has only sped up the momentum for digitisation and the importance of these companies. Big Tech’s business model has received heavy criticism for its role in the spread of disinformation and the undermining of democratic processes, its reliance on the exploitation of personal data, and its immense market power and unfair market practices. 

Meanwhile, as the economic power of big digital companies has grown, so has their political power. In this report, we offer an overview of the tech industry’s lobbying firepower with regard to the EU institutions, including who the big spenders are, what they want, and just how outsized their privileged access is. This is especially important given that EU policy-makers are currently seeking to regulate the digital market and its players via the Digital Services Act package. This EU initiative is made up of two components, the Digital Services Act and the Digital Markets Act, meant “to create a safer digital space in which the fundamental rights of all users of digital services are protected”, and “to establish a level playing field to foster innovation, growth, and competitiveness, both in the European Single Market and globally”. 

We map for the first time the ‘universe’ of actors lobbying the EU’s digital economy, from Silicon Valley giants to Shenzhen’s contenders; from firms created online to those making the infrastructure that keeps the internet running; tech giants and newcomers. 

We found a wide yet deeply imbalanced ‘universe’:

  • with 612 companies, groups and business associations lobbying the EU’s digital economy policies. Together, they spend over € 97 million annually lobbying the EU institutions. This makes tech the biggest lobby sector in the EU by spending, ahead of pharma, fossil fuels, finance, or chemicals. 

  • in spite of the varied number of players, this universe is dominated by a handful of firms. Just ten companies are responsible for almost a third of the total tech lobby spend: Vodafone (€ 1,750,000), IBM (€ 1,750,000), QUALCOMM (€ 1,750,000), Intel (€ 1,750,000), Amazon (€ 2,750,000), Huawei (€ 3,000,000), Apple (€ 3,500,000), Microsoft (€ 5,250,000), Facebook (€ 5,550,000) and, with the highest budget, Google (€ 5,750,000). 

  • out of all the companies lobbying the EU on digital policy, 20 per cent are US based, though this number is likely even higher. Less than 1 per cent have head offices in China or Hong Kong. This implies Chinese firms have so far not invested in EU lobbying quite as heavily as their US counterparts. 

  • These huge lobbying budgets have a significant impact on EU policy-makers, who find digital lobbyists knocking on their door on a regular basis. More than 140 lobbyists work for the largest ten digital firms day to day in Brussels and spend more than € 32 million on making their voice heard. 

  • Big Tech companies don’t just lobby on their own behalf; they also employ an extensive network of lobby groups, consultancies, and law firms representing their interests, not to mention a large number of think tanks and other groups financed by them. The business associations lobbying on behalf of Big Tech alone have a lobbying budget that far surpasses that of the bottom 75 per cent of the companies in the digital industry. Academic and Big Tech critic Shoshana Zuboff has argued that lobbying – alongside establishing relationships with elected politicians, a steady revolving door, and a campaign for cultural and academic influence – has acted as the fortification that has allowed a business model, built on violating people’s privacy and unfairly dominating the market, to flourish without being challenged. 
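
A quick arithmetic check (ours, not the report's) confirms that the ten budgets listed above sum to roughly € 32.8 million, about a third of the € 97 million sector total and consistent with the 'more than € 32 million' figure quoted for the ten largest firms:

    # Sum of the ten lobby budgets quoted in the report (EUR).
    top_ten = {"Vodafone": 1_750_000, "IBM": 1_750_000, "QUALCOMM": 1_750_000,
               "Intel": 1_750_000, "Amazon": 2_750_000, "Huawei": 3_000_000,
               "Apple": 3_500_000, "Microsoft": 5_250_000, "Facebook": 5_550_000,
               "Google": 5_750_000}
    total = sum(top_ten.values())
    print(total, round(total / 97_000_000, 2))    # 32800000 0.34 -- about a third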

This is also the case at EU level. The aim of Big Tech and its intermediaries seems to be to make sure there are as few hard regulations as possible – for example those that tackle issues around privacy, disinformation, and market distortion – to preserve their profit margins and business model. If new rules can’t be blocked, then they aim to at least water them down. In recent years these firms have started embracing regulation in public, yet they continue pushing back behind closed doors. There are some differences between what different tech firms want in terms of EU policy, but the desire to remain ‘unburdened’ by urgently needed regulations is shared by most of the large platforms. 

Big Tech’s deep pockets might also reflect the fact that this industry is rather new and emerging, and its home base is not the EU. Most of the big players come from the US. This has several consequences for the lobbying efforts of the industry in the EU. First of all, channels of influence are still in the process of being built. The ties to governments are not as close as, for instance, between the German Government and its national car industry. This, in addition to growing criticism of Big Tech’s business practices, helps to explain why the digital industry’s lobbying relies heavily on influencing public opinion and on using third parties, such as think tanks and law and economics firms, as a tool for that purpose. 

The Digital Markets Act and the Digital Services Act – the two strands of the Digital Services Act package – are the EU’s first legislative attempt to tackle the overarching power of the tech giants. The lobbying battle being waged over them shows us the lobby might of the tech industry in practice. More than 270 meetings on these proposals have taken place, 75 per cent of them with industry lobbyists, most of them targeted at Commissioners Vestager and Breton, who are responsible for the new rules. This lobby battle has now moved to the European Parliament and Council. In spite of the lack of transparency, we are starting to see Big Tech’s lobbying footprint in the EU capitals.

Insurance

'Workers’ Compensation and Injury Insurance in Australian Sport: The Status May Not Be Quo' by Eric Windholz in (2022) 32 Insurance Law Journal 106-126 comments 

This article examines the exemption from workers’ compensation of professional sportspersons. This examination reveals the exemption is complex, with numerous jurisdictional differences, exceptions and qualifications. The examination also reveals that many of the arguments originally advanced in support of the exemption are redundant in a world in which sport has been corporatised and commercialised. This conclusion raises numerous questions. Should the status quo remain, or should professional sportspersons receive coverage under workers’ compensation legislation or some form of bespoke injury insurance scheme? What limitations (if any) should apply to professional sportspersons’ claims for compensation? What type and level of benefits should apply? And how and by whom should it be funded? These (and other) questions make this an important issue worthy of further investigation. 

Moral Patients

'Will intelligent machines become moral patients?' by Parisa Moosavi in (2023) Philosophy and Phenomenological Research comments 

Recent advances in Artificial Intelligence (AI) and machine learning have raised many ethical questions. A popular one concerns the moral status of artificially intelligent machines (AIs). AIs are increasingly capable of emulating intelligent human behaviour. From speech recognition and natural-language processing to moral reasoning, they are continually improving at performing tasks we once thought only humans could do. Their powerful self-learning capabilities give them an important sense of autonomy and independence: they can act in ways that are not directly determined by us. Their problem-solving abilities sometimes even surpass ours. What's more, they are taking on social roles such as caregiving and companionship, and thereby seem to merit a social and emotional response on our part. All this has led many philosophers and technologists to seriously consider the possibility that we will someday have to grant moral protections to AIs. In other words, we would have to “expand the moral circle” and include AIs among moral patients, i.e., entities that are owed moral consideration. 

This question of moral patiency is the focus of my paper. Roughly speaking, the question is whether future AIs will be the kind of entities that can be morally wronged and need moral protection. My position, in a nutshell, is that concerns about the moral status of AI are unjustified. Contrary to the claims of many authors (e.g., Coeckelbergh 2010; 2014; and Gunkel 2018; 2019; Danaher 2020), I argue that we have no reason to believe we will have to provide moral protections to future AIs. I consider this to be a commonsense view regarding the moral status of AI, albeit one that has not been successfully defended in the philosophical literature. This is the kind of defence that I plan to provide in this paper. 

... We may start by clarifying the concept of moral patiency further. Frances Kamm's account of this moral status is particularly helpful. According to Kamm (2007, pp. 227–229), an entity has moral status in the relevant sense if it counts morally in its own right and for its own sake. Let's consider each of these conditions in turn: (i) what it is for an entity to count morally, (ii) what it is for it to count morally in its own right, and (iii) what it is for it to count morally for its own sake. 

To say that an entity counts morally is to say that it is in some way morally significant. More specifically, it means that there are ways of behaving toward the entity that would be morally problematic or impermissible. An entity that counts morally gives us moral reasons to do certain things and act in certain ways toward it, such as to treat it well and not harm it. We typically consider humans as entities that count morally in this sense, and ordinary rocks as entities that do not. But almost anything can count morally in the right context. If an ordinary rock, for instance, is used as a murder weapon and becomes a piece of evidence, it may be morally impermissible to tamper with it. 

There are, however, different ways to count morally, not all of which amount to being a moral patient. An entity can count morally, but merely instrumentally so—i.e., because our treatment of it has a morally significant effect on others. To have the relevant moral status, the entity in question must count morally in its own right, i.e., non-instrumentally. In other words, it must be valued as an end, and not merely as a means. The above-mentioned rock does not meet this condition, but our fellow humans do: no further end needs to be served by the way we treat them for us to have a reason to treat them well. 

Moreover, an entity with moral patiency counts morally for its own sake, which is to say that we have reason to treat it in a certain way for the sake of the entity itself. Note that an entity might be valued as an end but not for the sake of itself. For instance, the aesthetic value of Mona Lisa can give us reason to preserve it independently of the pleasure or enlightenment it can bring. This, however, does not mean that we have reason to preserve Mona Lisa for the sake of the painting itself. We do not think of preservation as something that is good for the painting. We rather think we have reason to preserve it because the painting has value for us. We value the painting as an end, but — to borrow Korsgaard's term — this non-instrumental value is still “tethered” to us: we are the beneficiaries of this value. In contrast, our moral reasons to save a human being from drowning are reasons to do something for their sake. They get something out of being saved that The Mona Lisa does not. 

Thus, on Kamm's account, an entity has moral patiency when it can give us reason to treat it well, independently of any further ends that such a treatment might serve, and precisely because being treated well is good for the entity itself. 

This is the conception of moral patiency that I will adopt going forward. The question I am interested in is, therefore, whether future AIs will be the kinds of entities that can give us moral reasons of this specific kind: reasons that have to do with what is good for the entity itself. I am not concerned with whether there will be other sorts of moral reasons to treat them in a certain way. I am only asking whether they will qualify for moral patiency proper.

11 September 2023

Agreements

'The Rise of Nonbinding International Agreements: An Empirical, Comparative, and Normative Analysis' by Curtis A. Bradley, Jack Goldsmith and Oona A. Hathaway in (2023) 90(5) University of Chicago Law Review comments 

The treaty process specified in Article II of the Constitution has been dying a slow death for decades, replaced by various forms of “executive agreements.” What is only beginning to be appreciated is the extent to which both treaties and executive agreements are increasingly being overshadowed by another form of international cooperation: nonbinding international agreements. Not only have nonbinding agreements become more prevalent, but many of the most consequential (and often controversial) U.S. international agreements in recent years have been concluded in whole or in significant part as nonbinding agreements. Despite their prevalence and importance, nonbinding agreements have not traditionally been subject to any of the domestic statutory or regulatory requirements that apply to binding agreements. As a result, they have not been centrally monitored or collected within the executive branch, and they have not been systematically reported to Congress or disclosed to the public. Recent legislation addresses this transparency gap to a degree, but substantial gaps remain. 

This Article focuses on the two most significant forms of nonbinding agreements between U.S. government representatives and their foreign counterparts: (1) joint statements and communiques; and (2) formal nonbinding agreements. After describing these categories and the history of nonbinding agreements and their domestic legal basis, the Article presents the first empirical study of U.S. nonbinding agreements, drawing on two new databases that together include more than three thousand of these agreements. Based on this study, and on a comparative assessment of the practices and reform discussions taking place in other countries, the Article considers the case for additional legal reforms.

Modding

'From Community Governance to Customer Service and Back Again: Re-Examining Pre-Web Models of Online Governance to Address Platforms’ Crisis of Legitimacy' by Ethan Zuckerman and Chand Rajendra-Nicolucci in (2023) Social Media and Society comments 

As online platforms grow, they find themselves increasingly trying to balance two competing priorities: individual rights and public health. This has coincided with the professionalization of platforms’ trust and safety operations—what we call the “customer service” model of online governance. As professional trust and safety teams attempt to balance individual rights and public health, platforms face a crisis of legitimacy, with decisions in the name of individual rights or public health scrutinized and criticized as corrupt, arbitrary, and irresponsible by stakeholders of all stripes. We review early accounts of online governance to consider whether the customer service model has obscured a promising earlier model where members of the affected community were significant, if not always primary, decision-makers. This community governance approach has deep roots in the academic computing community and has re-emerged in spaces like Reddit and special purpose social networks and in novel platform initiatives such as the Oversight Board and Community Notes. We argue that community governance could address persistent challenges of online governance, particularly online platforms’ crisis of legitimacy. In addition, we think community governance may offer valuable training in democratic participation for users. 

Since the earliest days of computing, people have used information technology to converse with one another. Four years before the internet, Noel Morris and Tom Van Vleck wrote both an electronic mail system and a real-time chat system for MIT’s Compatible Time-Sharing System (CTSS), allowing users who logged onto the single shared computer to leave messages for one another or send messages to another user’s terminal (Van Vleck, 2012). Within 3 years of the introduction of the internet, email became the primary use of a network initially established to let computer scientists run programs on remote machines (Sterling, 1993). France’s Minitel service, designed to give users access to an electronic telephone directory and the ability to make travel reservations online, quickly became dominated by chat services, particularly erotic chat (Tempest, 1989). People want to talk to one another and will find ways to do so as soon as they are technically capable of connecting to one another. Unfortunately, as soon as people are able to talk to one another, they are also able to harm each other. Spam has undermined the utility of email and largely destroyed Usenet, the dominant community platform of the academic internet in the 1980s and early 1990s. Harassment and hate speech have become facts of life for users of many online systems, particularly for women, people of color, and LGBTQIA+ people. People often behave differently online than they would offline (Suler, 2004) and the impetus for humans to harass each other via digital tools is at least as strong as the impulse to connect. 

The emergence of trust and safety as a professional discipline reflects the centrality of issues like content moderation, spam and fraud prevention, and efforts to combat child sexual abuse material (CSAM) to the operation of platforms that enable user-generated content and conversation. As Tarleton Gillespie (2018) notes in Custodians of the Internet, “Platforms are not platforms without moderation.” Recent efforts to recognize trust and safety as a profession, with the establishment of the Trust & Safety Professional Association in 2020 and the emergence of a Journal of Online Trust and Safety in 2021, are overdue, as the work of policing online spaces traces back at least to the 1980s, if not earlier. 

One danger of losing the early history of online governance is a narrowing of possible futures, making it seem as if the contemporary model for governing online spaces, where professionals make decisions about what behavior is acceptable, with little input from members of the community, is the way it’s always been done. We refer to this model as the “customer service” model and contrast it to earlier models of online governance in which community members were significant, if not always primary, decision-makers about the online spaces they were a part of. This article examines three paradigms of online governance that preceded the contemporary customer service model and suggests that varying degrees of community governance may be a viable and socially beneficial option for many online spaces. 

This article is far from an exhaustive history of early online governance or of the emergence of the customer service model, though both histories are needed. While there has been excellent work calling attention to the complexities of trust and safety (Gillespie, 2018; Gray & Suri, 2019), it has focused primarily on the “web 2.0” social media platforms that emerged in the mid-2000s—the shift toward the customer service model begins in the late 1980s and is cemented in place by the mid-1990s. This is also an opinionated and personal history, as one of the authors (Zuckerman) built the early content moderation department for Tripod.com, one of the web’s first user-generated content sites, from 1995 to 1999.

Minds

'Moral Uncertainty and Our Relationships with Unknown Minds' by John Danaher in (2023) 32(4) Cambridge Quarterly of Healthcare Ethics 482-495 comments 

In June 2022, Blake Lemoine, a Google-based AI scientist and ethicist, achieved brief notoriety for claiming, apparently in earnest, that a Google AI program called LaMDA may have attained sentience. Lemoine quickly faced ridicule and ostracization. He was suspended from work and, ultimately, fired. What had convinced him that LaMDA might be sentient? In support of his case, Lemoine released snippets of conversations he had with LaMDA. Its verbal fluency and dexterity were impressive. It appeared to understand the questions it was being asked. It claimed to have a sense of self and personhood, to have fears, hopes, and desires, just like a human. Critics were quick to point out that Lemoine was being tricked. LaMDA was just a very sophisticated predictive text engine, trained on human language databases. It was good at faking human responses; there was no underlying mind or sentience behind it. 

Whatever the merits of Lemoine’s claims about LaMDA, his story illustrates an ethical-epistemic challenge we all face: How should we understand our relationships with uncertain or contested minds? In other words, if we have an entity that appears to display mind-like properties or behaviors but we are unsure whether it truly possesses a mind, how should we engage with it? Should we treat it “as if” it has a mind? Could we pursue deeper relationships with it, perhaps friendship or love? This is an epistemic challenge because in these cases, we have some difficulty accessing evidence or information that can confirm, definitively, whether the entity has a mind. It is an ethical challenge because our classification of the other entity—our decision as to whether or not it has a mind—has ethical consequences. At a minimum, it can be used to determine whether the entity has basic moral standing or status. It can also be used to determine the kinds of value we can realize in our interactions with it. 

Our relationships with AI and robots are but one example of a situation in which we face this challenge. We also face it with humans whose minds are fading (e.g., those undergoing some cognitive decline) or difficult to access (those with “locked-in” syndrome). And, we face it with animals, both wild and domestic. Our default assumptions vary across each of these cases. Many people are willing to presume that humans, whatever the evidence might suggest, have minds and that their basic moral status is unaffected by our epistemic difficulties in accessing those minds. They might be less willing to presume that the value of the relationships they have are unaffected by these epistemic difficulties. Some people are willing to presume that animals have minds, at least to some degree, and that they deserve some moral consideration as a result. Many of them are also willing to pursue meaningful relationships with animals, particularly pets. Finally, most people, as of right now, tend to be skeptical about the claims that AI or robots (what I will call “artificial beings” for the remainder of this article) could have minds. This is clear from the reaction to Blake Lemoine’s suggestions about LaMDA. 

In this article, I want to consider, systematically, what our normative response to uncertain minds should be. For illustrative purposes, I will focus on the case study of artificial beings, but what I have to say should have broader significance. I will make three main arguments. First, the correct way to approach our moral relationships with uncertain minds is to use a “risk asymmetry” framework of analysis. This is a style of analysis that is popular in the debate about moral uncertainty and has obvious applications here. Second, deploying that argumentative framework, I will suggest that we may have good reason to be skeptical of claims about the moral status of artificial beings. More precisely, I will argue that the risks of over-inclusivity when it comes to moral status may outweigh the risk of under-inclusivity. Third, and somewhat contrary to the previous argument, I will suggest that we should, perhaps, be more open to the idea of pursuing meaningful relationships with artificial beings—that the risks of relationship exclusion, at least marginally, outweigh those of inclusion. 

In deploying the risk asymmetry framework to resolve the ethical-epistemic challenge, I do not claim any novelty. Other authors have applied it to debates about uncertain minds, before. In the remainder of this article, I will reference the work of four authors in particular, namely Erica Neely, Jeff Sebo, Nicholas Agar, and Eric Schwitzgebel — each of whom has employed a variation of this argument when trying to determine the moral status of unknown minds. The novelty in my analysis, such as it is, comes from the attempt to use empirical data and psychological research to determine the risks of discounting or overcounting uncertain minds. My assessment of this evidence leads me to endorse conclusions that are different from those usually endorsed in this debate (though, I should say, similar to the conclusions reached by Nicholas Agar). The other contribution I hope to make is to be more systematic and formal in my presentation of the risk asymmetry framework. In other words, irrespective of the conclusions I reach, I hope to demonstrate a useful method of analysis that can be applied to other debates about uncertain minds.
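
Danaher's risk asymmetry comparison can be given a schematic gloss (ours, not the article's own formalisation): writing p for the probability that an artificial being has a mind of the morally relevant kind, the framework weighs the expected cost of wrongly excluding it against the expected cost of wrongly including it.

    % Schematic gloss only; not the article's formalisation. The cost terms are
    % placeholders for the harms the article assesses with empirical evidence.
    \[
      \underbrace{p \cdot C_{\text{wrongful exclusion}}}_{\text{expected cost of withholding status}}
      \quad \text{versus} \quad
      \underbrace{(1 - p) \cdot C_{\text{wrongful inclusion}}}_{\text{expected cost of granting status}}
    \]

On this gloss, Danaher's two conclusions amount to claims about the relative sizes of the cost terms: for moral status he judges the inclusion-side costs to dominate, while for meaningful relationships he judges the exclusion-side costs to (at least marginally) dominate.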

Architecture

'Law as Architecture: Mapping Contingency and Autonomy in Twentieth-Century Legal Historiography' by Dan Rohde and Nicolas Parra-Herrera in (2023) 3(3) Journal of Law and Political Economy argues for seeing law as an “architecture”: a set of tools with which we build our society, with law’s autonomy lying in the way that it facilitates specific forms of societal ordering at the expense of others. 

Four aspects of the “law as architecture” approach warrant discussion. First, this approach does not simply focus on law as enabling new modes of social coordination, but also as simultaneously disenabling other modes. It thereby distinguishes itself from more functionalist approaches and Whig history that sees legal development as, more or less, a simple story of progress. Second, this approach emphasizes the extent to which law is inherited with every generation, leaving legal actors (for the most part) to iterate on the architecture they were born into rather than designing legal arrangements ab initio. Third, law as architecture evokes a material and spatial quality in social collaboration and conflict, where the configuration of these spaces’ modularity touches lives in their bare materiality; it affects them, sometimes, profoundly. Fourth, and lastly, we describe the “law as architecture” as “existential” — meaning that, while it is somewhat determinate at any given moment, we can never fully predict the long-term uses to which changes in a given piece of legal architecture will be put, nor the long-term social consequences that will result.