04 November 2023

Pseudologia

Serial identity offender Samantha Azzopardi is in the news again, with The Age reporting that she has been banned from stalking or attempting to locate a 'protected person', in addition to multiple identity charges (including obtaining financial advantage by deception). 

She is alleged to have dishonestly obtained $18,837 by portraying herself as being from Europe and in need of support services for family violence and child sexual abuse. Through that portrayal she is reported as having deceptively obtained accommodation, vouchers and medical costs from specialist family violence support services. 

Last week it was revealed that she had been arrested in Northcote following allegations of deceptive conduct in Dandenong, Brighton and Reservoir. Unsurprisingly, bail was refused.

As noted elsewhere in this blog she has previously pretended to be more than 70 different people, including a child orphan and a Russian gymnast. A year ago she was sentenced in Downing Local Court (Sydney) to 17 months behind bars - that time regarding claims she was a 14-year-old child abuse victim from France - but was out after three months.

03 November 2023

Ephemera

'The Cinderella Stamps and Philatelic Practices of Micronations: The Materiality of Claims to Statehood' by Harry Hobbs and Jessie Hohmann in (2024) London Review of International Law (forthcoming) comments 

When non-state entities design, issue and use their own postage stamps, they visibly harness the symbolism and trappings of statehood that stamps carry. Drawing on the international legal regulation of postage stamps, this article makes a close examination of micronations’ stamps as objects of international law, to consider how the philatelic practices of micronations illuminate material practices of sovereignty and statehood. 
 
As a ‘small but mighty symbolic emissary from one particular nation to the rest of the world...the postage stamp is an excellent vehicle for spurious, tenuous, or completely fictitious states to declare their existence’. 
 
In May 2022, the first author of this article received a parcel. The brown A4 envelope (see figure 1) was laden with a diverse collection of stamps labelled with the mysterious country of the Sultanate of Occussi-Ambeno. The collection of stamps was a window into the Sultanate, its relations with its neighbours and its values. Stamps celebrated forty-five years of independence of a political community called Mevu, fifty years of Occussi-Ambeno’s own independence, the 60th anniversary of the first human space flight, the centenary of Tibet’s independence, and ‘Ten Fun Years of Peppa Pig’. The envelope was postmarked 15 March 2022 by the Sultanate’s Chadwick Post Office. It also bore two New Zealand stamps. 
 
Occussi-Ambeno is not a recognised state. It is a ‘micronation’ – a self-declared state. Nominally located on the territory of East Timor, the Sultanate is in fact led by Bruce Grenville, an imaginative, Auckland-based anarchist. Micronations perform and mimic acts of sovereignty and adopt many protocols associated with statehood (including issuing stamps), but micronations lack a foundation in law for their claims. Occussi-Ambeno may appear to have a post office and a wonderfully creative set of postage stamps, but it has no independent existence in international law. And while the stamps issued by Occussi-Ambeno appear compellingly stamp-like, they are better understood as ‘cinderellas’. This is the term used in philatelic circles to refer to objects that resemble postage stamps but are ‘not issued for postal purposes by a government postal administration’. These little fragments of adhesive paper may have an ‘extraordinary ability to conjure an entire nation on a tiny piece of paper’, but the postage stamps of micronations remain formally stickers, not stamps. 
 
Cinderella stamps are to stamps what micronations are to states. They are parasitic on the accepted contours of an existing entity but have not undergone the alchemical process that turns, in one case, an adhesive sticker into a stamp, and in the other, a constellation of power into a state. In this paper, we explore the stamps and philatelic practices of micronations as a parallel process that seeks to harness (at small and large scale) the constitutive power of international law to make the objects around us into recognisable forms and confer on them legal status and power. 
 
We situate this paper within the unfolding international legal scholarship that recognises both images and objects as an important focus of study. More specifically we explore the relationship between international law and the material culture that has shaped, vested and recorded international law and international legal practice. Stamps, as objects of international law, are both structured by international law and help to consolidate its doctrines (especially, from our perspective, sovereignty and statehood) through their circulation and design. At the same time, however, they demonstrate the resistant potential of objects, in that they can be read and viewed, sent and received, in subversive and destabilising ways that demonstrate the absurdity of international law’s central precepts and preoccupations. 
 
Our paper is divided into five parts. After further situating our approach within the material turn in international legal scholarship (B), we provide a brief explanation of how micronations and stamps are regulated in international law (C). Then, to probe how stamps (and micronations) both complicate and illuminate international legal doctrines of sovereignty and statehood, we map out several ways in which they appear as international law’s objects (D). Mindful that the stamps and philatelic practices of micronations defy easy categorisation, we nevertheless distil four distinct practices. First, in section (a) we outline how stamps are constituted by international law as a manifestation of sovereignty under the Universal Postal Union (UPU). This sets the background and context against which the stamps and philatelic practices of micronations appear. In a series of vignettes that highlight the philatelic practices of micronations, we look at how micronations issue and employ stamps as (b) a presentation of sovereignty, (c) a means to bolster claims or to advertise sovereignty; and finally, (d) a critique or protest of sovereignty and statehood.

01 November 2023

EU health AI

'The Council of Europe's AI Convention (2023-2024): Promises and Pitfalls for Health Protection' by Hannah van Kolfschooten and Carmel Shachar in (2023) Health Policy comments

The Council of Europe, Europe's most important human rights organization, is developing a legally binding instrument for the development, design, and application of AI systems. This “Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law” (AI Convention) aims to protect fundamental rights against the harms of AI. The AI Convention may become the first legally-binding international treaty on AI. In this article, we highlight the implications of the proposed AI Convention for the health and human rights protection of patients. We praise the following characteristics: global regulation for technology that easily crosses jurisdictions; the human rights-based approach with human rights assessment; the actor-neutral, full-lifecycle approach; and the creation of enforceable rights through the European Court of Human Rights. We signal the following challenges: the sector-neutral approach; the lack of reflection on new human rights; definitional issues; and the process of global negotiations. We conclude that it is important for the Council of Europe not to compromise on the wide scope of application and the rights-based character of the proposed AI Convention. 

In the medical field, physicians, patients, and tech developers are calling for regulation of Artificial Intelligence (AI). There are questions and concerns about doctors using ChatGPT,  the liability of increasingly autonomous surgical systems, and the persistent racial biases exhibited by medical AI systems.  Worldwide, legislators are rushing to regulate AI, while new applications keep popping up at an unprecedented speed. On the European continent, multiple regional legislative instruments are being negotiated in parallel. In June 2023, the European Parliament finally agreed on the content of the EU Artificial Intelligence Act (AI Act). Concurrently, the Council of Europe (CoE), Europe's most important human rights organization, is developing a legally binding instrument for the development, design, and application of AI systems: the “Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law” (AI Convention). The Convention will apply to the medical context. 

Unlike the AI Act – which applies only to the 27 EU Member States – the AI Convention has the potential to become the first legally-binding international treaty on AI. We argue that, with its clear focus on fundamental rights protection, the AI Convention has the potential to fill the currently existing regulatory gaps in the protection of patients against the harms of medical AI. We first briefly outline the challenges posed by medical AI. Then we explain how the AI Convention is different from the AI Act and provide an overview of the current text of the AI Convention. Subsequently, we highlight the most important implications for the health and fundamental rights protection of patients. We conclude with recommendations on how to strengthen the protection of health in the ongoing legislative drafting of the AI Convention.

31 October 2023

Biden AI Executive Order

The Biden Executive Order regarding AI directs the following actions: 

 New Standards for AI Safety and Security 

As AI’s capabilities grow, so do its implications for Americans’ safety and security. With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems: 

Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public. 

Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety. 

Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI. 

Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world. 

Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure. 

Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI. 

Protecting Americans’ Privacy 

Without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems. To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids, and directs the following actions: 

Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques—including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data. 

Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies. 

Evaluate how agencies collect and use commercially available information—including information they procure from data brokers—and strengthen privacy guidance for federal agencies to account for AI risks. This work will focus in particular on commercially available information containing personally identifiable data. 

Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems. These guidelines will advance agency efforts to protect Americans’ data. 

Advancing Equity and Civil Rights 

Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. The Biden-Harris Administration has already taken action by publishing the Blueprint for an AI Bill of Rights and issuing an Executive Order directing agencies to combat algorithmic discrimination, while enforcing existing authorities to protect people’s rights and safety. To ensure that AI advances equity and civil rights, the President directs the following additional actions: 

Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination. 

Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI. 

Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis. 

Standing Up for Consumers, Patients, and Students 

AI can bring real benefits to consumers—for example, by making products better, cheaper, and more widely available. But AI also raises the risk of injuring, misleading, or otherwise harming Americans. To protect consumers while ensuring that AI can make Americans better off, the President directs the following actions: 

Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs. The Department of Health and Human Services will also establish a safety program to receive reports of – and act to remedy – harms or unsafe healthcare practices involving AI. 

Shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools. 

Supporting Workers 

AI is changing America’s jobs and workplaces, offering both the promise of improved productivity but also the dangers of increased workplace surveillance, bias, and job displacement. To mitigate these risks, support workers’ ability to bargain collectively, and invest in workforce training and development that is accessible to all, the President directs the following actions: 

Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and best practices will benefit workers by providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize. 

Produce a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI. 

Promoting Innovation and Competition 

America already leads in AI innovation—more AI startups raised first-time capital in the United States last year than in the next seven countries combined. The Executive Order ensures that we continue to lead the way in innovation and competition through the following actions: 

Catalyze AI research across the United States through a pilot of the National AI Research Resource—a tool that will provide AI researchers and students access to key AI resources and data—and expanded grants for AI research in vital areas like healthcare and climate change. 

Promote a fair, open, and competitive AI ecosystem by providing small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities. 

Use existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews. 

Advancing American Leadership Abroad 

AI’s challenges and opportunities are global. The Biden-Harris Administration will continue working with other nations to support safe, secure, and trustworthy deployment and use of AI worldwide. To that end, the President directs the following actions: 

Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI. The State Department, in collaboration with the Commerce Department, will lead an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety. In addition, this week, Vice President Harris will speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak. 

Accelerate development and implementation of vital AI standards with international partners and in standards organizations, ensuring that the technology is safe, secure, trustworthy, and interoperable. 

Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, such as advancing sustainable development and mitigating dangers to critical infrastructure. 

Ensuring Responsible and Effective Government Use of AI 

AI can help government deliver better results for the American people. It can expand agencies’ capacity to regulate, govern, and disburse benefits, and it can cut costs and enhance the security of government systems. However, use of AI can pose risks, such as discrimination and unsafe decisions. To ensure the responsible government deployment of AI and modernize federal AI infrastructure, the President directs the following actions: 

Issue guidance for agencies’ use of AI, including clear standards to protect rights and safety, improve AI procurement, and strengthen AI deployment. 

Help agencies acquire specified AI products and services faster, more cheaply, and more effectively through more rapid and efficient contracting. 

Accelerate the rapid hiring of AI professionals as part of a government-wide AI talent surge led by the Office of Personnel Management, U.S. Digital Service, U.S. Digital Corps, and Presidential Innovation Fellowship. Agencies will provide AI training for employees at all levels in relevant fields.