Showing posts with label Identity. Show all posts

28 August 2024

Moral Economy

 'The Moral Economy of High-Tech Modernism' by Henry Farrell and Marion Fourcade in (2023) 152(1) Daedalus 225–235 comments 

Algorithms, especially machine learning algorithms, have become major social institutions. To paraphrase anthropologist Mary Douglas, algorithms “do the classifying.” They assemble and they sort: people, events, things. They distribute material opportunities and social prestige. But do they, like all artifacts, have a particular politics? Technologists defend themselves against the very notion, but a lively literature in philosophy, computer science, and law belies this naive view. Arcane technical debates rage around the translation of concepts such as fairness and democracy into code. For some, it is a matter of legal exposure. For others, it is about designing regulatory rules and verifying compliance. For a third group, it is about crafting hopeful political futures. 

The questions from the social sciences are often different: How do algorithms concretely govern? How do they compare to other modes of governance, like bureaucracy or the market? How does their mediation shape moral intuitions, cultural representations, and political action? In other words, the social sciences worry not only about specific algorithmic outcomes, but also about the broad, society-wide consequences of the deployment of algorithmic regimes: systems of decision-making that rely heavily on computational processes running on large databases. These consequences are not easy to study or apprehend. This is not just because, like bureaucracies, algorithms are simultaneously rule-bound and secretive. Nor is it because, like markets, they are simultaneously empowering and manipulative. It is because they are a bit of both. Algorithms extend both the logic of hierarchy and the logic of competition. They are machines for making categories and applying them, much like traditional bureaucracy. And they are self-adjusting allocative machines, much like canonical markets. 

Understanding this helps highlight both similarities and differences between the historical regime that political scientist James Scott calls “high modernism” and what we dub high-tech modernism. We show that bureaucracy, the typical high modernist institution, and machine learning algorithms, the quintessential high-tech modernist one, share common roots as technologies of hierarchical classification and intervention. But whereas bureaucracy reinforces human sameness and tends toward large, monopolistic (and often state-based) organizations, algorithms encourage human competition, in a process spearheaded by large, near-monopolistic (and often market-based) organizations. High-tech modernism and high modernism are born from the same impulse to exert control, but are articulated in fundamentally different ways, with quite different consequences for the construction of the social and economic order. The contradictions between these two moral economies, and their supporting institutions, generate many of the key struggles of our times. 

Both bureaucracy and computation enable an important form of social power: the power to classify. Bureaucracy deploys filing cabinets and memorandums to organize the world and make it “legible,” in Scott's terminology. Legibility is, in the first instance, a matter of classification. Scott explains how “high modernist” bureaucracies crafted categories and standardized processes, turning rich but ambiguous social relationships into thin but tractable information. The bureaucratic capacity to categorize, organize, and exploit this information revolutionized the state's ability to get things done. It also led the state to reorder society in ways that reflected its categorizations and acted them out. Social, political, and even physical geographies were simplified to make them legible to public officials. Surnames were imposed to tax individuals; the streets of Paris were redesigned to facilitate control. 

Yet high modernism was not just about the state. Markets, too, were standardized, as concrete goods like grain, lumber, and meat were converted into abstract qualities to be traded at scale. The power to categorize made and shaped markets, allowing grain buyers, for example, to create categories that advantaged them at the expense of the farmers they bought from. Businesses created their own bureaucracies to order the world, deciding who could participate in markets and how goods ought to be categorized. 

We use the term high-tech modernism to refer to the body of classifying technologies based on quantitative techniques and digitized information that partly displaces, and partly is layered over, the analog processes used by high modernist organizations. Computational algorithms-especially machine learning algorithms-perform similar functions to the bureaucratic technologies that Scott describes. Both supervised machine learning (which classifies data using a labeled training set) and unsupervised machine learning (which organizes data into self-discovered clusters) make it easier to categorize unstructured data at scale. But unlike their paper-pushing predecessors in bureaucratic institutions, the humans of high-tech modernism disappear behind an algorithmic curtain. The workings of algorithms are much less visible, even though they penetrate deeper into the social fabric than the workings of bureaucracies. The development of smart environments and the Internet of Things has made the collection and processing of information about people too comprehensive, minutely geared, inescapable, and fast-growing for considered consent and resistance. 
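The distinction the authors draw between the two modes of machine learning classification can be sketched in a few lines of toy Python. Everything here (the data, the nearest-centroid classifier, the k-means loop) is an invented illustration, not any real platform's system: supervised learning assigns new cases to categories fixed in advance by a labeled training set, while unsupervised learning discovers its own clusters from unlabeled data.

```python
def centroid(points):
    """Mean position of a group of points."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Supervised: a labeled training set defines the categories in advance.
train = {"low": [[1.0, 1.1], [0.9, 1.0]], "high": [[5.0, 5.2], [5.1, 4.9]]}
cents = {label: centroid(pts) for label, pts in train.items()}

def classify(point):
    """Assign a new point to the nearest pre-defined category."""
    return min(cents, key=lambda label: dist(point, cents[label]))

# Unsupervised: k-means discovers its own clusters from unlabeled data.
def kmeans(points, k=2, iters=10):
    centres = points[:k]  # crude initialisation: first k points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist(p, centres[i]))].append(p)
        centres = [centroid(g) if g else centres[i] for i, g in enumerate(groups)]
    return [min(range(k), key=lambda i: dist(p, centres[i])) for p in points]
```

The asymmetry matters for the argument: in the supervised case the category labels ("low", "high") are human-made and human-legible, whereas the clusters k-means emits are just indices, groupings that need not correspond to any socially comprehensible category.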

In a basic sense, machine learning does not strip away nearly as much information as traditional high modernism. It potentially fits people into categories (“classifiers”) that are narrower, even bespoke. The movie streaming platform Netflix will slot you into one of its two thousand-plus “microcommunities” and match you to a subset of its thousands of subgenres. Your movie choices alter your position in this scheme and might in principle even alter the classificatory grid itself, creating a new category of viewer reflecting your idiosyncratic viewing practices. Many of the crude, broad categories of nineteenth-century bureaucracies have been replaced by new, multidimensional classifications, powered by machine learning, that are often hard for human minds to grasp. People can find themselves grouped around particular behaviors or experiences, sometimes ephemeral, such as followers of a particular YouTuber, subprime borrowers, or fans of action movies with strong female characters. Unlike clunky high modernist categories, high-tech modernist ones can be emergent and technically dynamic, adapting to new behaviors and information as they come in. They incorporate tacit information in ways that are sometimes spookily right, and sometimes disturbing and misguided: music-producing algorithms that imitate a particular artist's style, language models that mimic social context, or empathic AI that supposedly grasps one's state of mind. Generative AI technologies can take a prompt and generate an original picture, video, poem, or essay that seems to casual observers as though it were produced by a human being. 

Taken together, these changes foster a new politics. Traditional high modernism did not just rely on standard issue bureaucrats. It empowered a wide variety of experts to make decisions in the area of their particular specialist knowledge and authority. Now, many of these experts are embattled, as their authority is nibbled away by algorithms that their advocates claim are more accurate, more reliable, and less partial than their human predecessors. 

One key difference between the moral economies of high modernism and high-tech modernism involves feedback. It is tempting to see high modernism as something imposed entirely from above. However, in his earlier book Weapons of the Weak, Scott suggests that those at the receiving end of categorical violence are not passive and powerless. They can sometimes throw sand into the gears of the great machinery. 

As philosopher Ian Hacking explains, certain kinds of classifications, typically those applying to human or social collectives, are “interactive” in that, when known by people or those around them, and put to work in institutions, [they] change the ways in which individuals experience themselves, and may even lead people to evolve their feelings and behavior in part because they are so classified. 

People, in short, have agency. They are not submissive dupes of the categories that objectify them. They may respond to being put in a box by conforming to or growing into those descriptions. Or they may contest the definition of the category, its boundaries, or their assignment to it. This creates a feedback loop in which the authors of classifications (state officials, market actors, experts from the professions) may adjust the categories in response. Human society, then, is forever being destructured and restructured by the continuous interactions between classifying institutions and the people and groups they sort. 

But conscious agency is only possible when people know about the classifications: the politics of systems in which classifications are visible to the public, and hence potentially actionable, will differ from the politics of systems in which they are not. 

So how does the change from high modernism to high-tech modernism affect people's relationships with their classifications? At its worst, high modernism stripped out tacit knowledge, ignored public wishes and public complaints, and dislocated messy lived communities with sweeping reforms and grand categorizations, making people more visible and hence more readily acted on. The problem was not that the public did not notice the failures, but that their views were largely ignored. Authoritarian regimes constricted the range of ways in which people could respond to their classification: anything more than passive resistance was liable to meet brutal countermeasures. Democratic regimes were, at least theoretically, more open to feedback, but often ignored it when it was inconvenient and especially when it came from marginalized groups. 

The pathologies of computational algorithms are often more subtle. The shift to high-tech modernism allows the means of ensuring legibility to fade into the background of the ordinary patterns of our life. Information gathering is woven into the warp and woof of our existence, as entities gather ever finer data from our phones, computers, doorbell cameras, purchases, and cars. There is no need for a new Haussmann to transform cramped alleyways into open boulevards, exposing citizens to view. Urban architectures of visibility have been rendered nearly redundant by the invisible torrents of data that move through the air, conveying information about our movements, our tastes, and our actions to be sieved through racks of servers in anonymous, chilled industrial buildings. 

The feedback loops of high-tech modernism are also structurally different. Some kinds of human feedback are now much less common. Digital classification systems may group people in ways that are not always socially comprehensible (in contrast to traditional categories such as female, married, Irish, or Christian). Human feedback, therefore, typically requires the mediation of specialists with significant computing expertise, but even they are often mystified by the operation of systems they have themselves designed.

21 May 2024

Identity

Mellor J in COPA v Wright [2024] EWHC 1198 (Ch) comments 

1. Dr Craig Steven Wright (‘Dr Wright’) claims to be Satoshi Nakamoto i.e. he claims to be the person who adopted that pseudonym, who wrote and published the first version of the Bitcoin White Paper on 31 October 2008, who wrote and released the first version of the Bitcoin Source Code and who created the Bitcoin system. Dr Wright also claims to be a person with a unique intellect, with numerous degrees and PhDs in a wide range of subjects, the unique combination of which led him (so it is said) to devise the Bitcoin system. 

2. Thus, Dr Wright presents himself as an extremely clever person. However, in my judgment, he is not nearly as clever as he thinks he is. In both his written evidence and in days of oral evidence under cross-examination, I am entirely satisfied that Dr Wright lied to the Court extensively and repeatedly. Most of his lies related to the documents he had forged which purported to support his claim. All his lies and forged documents were in support of his biggest lie: his claim to be Satoshi Nakamoto. 

3. Many of Dr Wright’s lies contained a grain of truth (which is sometimes said to be the mark of an accomplished liar), but there were many which did not and were outright lies. As soon as one lie was exposed, Dr Wright resorted to further lies and evasions. The final destination frequently turned out to be either Dr Wright blaming some other (often unidentified) person for his predicament or what can only be described as technobabble delivered by him in the witness box. Although as a person with expertise in IT security, Dr Wright must have thought his forgeries would provide convincing evidence to support his claim to be Satoshi or some other point of detail and would go undetected, the evidence shows, as I explain below and in the Appendix, that most of his forgeries turned out to be clumsy. Indeed, certain of Dr Wright’s responses in cross-examination effectively acknowledged that point: from my recollection at least twice he indicated if he had wanted to forge a document, he would have done a much better job. 

4. If Dr Wright’s evidence was true, he would be a uniquely unfortunate individual, the victim of a very large number of unfortunate coincidences, all of which went against him, and/or the victim of a number of conspiracies against him. 

5. The true position is far simpler. It is, however, far from simple because Dr Wright has lied so much over so many years that, on certain points, it can be difficult to pinpoint what actually happened. Those difficulties do not detract from the fact that there is a very considerable body of evidence against Dr Wright being Satoshi. To the extent that it is said there is evidence supporting his claim, it is at best questionable or of very dubious relevance or entirely circumstantial and at worst, it is fabricated and/or based on documents I am satisfied have been forged on a grand scale by Dr Wright. These fabrications and forgeries were exposed in the evidence which I received during the Trial. For that reason, this Judgment contains considerable technical and other detail which is required to expose the true scale of his mendacious campaign to prove he was/is Satoshi Nakamoto. This detail was set out in the extensive Written Closing Submissions prepared by COPA and the Developers and further points drawn out in their oral closing arguments.

28 September 2023

Digital Identity

The Explanatory Memo for the Identity Verification Services Bill 2023 (Cth), ahead of the foreshadowed privacy reforms noted in the preceding post, states 

1. Secure and efficient identity verification is critical to minimising the risk of identity fraud and theft, and protecting the privacy of Australians when seeking to access government and industry services and engage with the digital economy. The identity verification services are the only national capability that can be used by industry and government agencies to securely verify the identity of their customers. 

2. Identity verification services are a series of automated national services offered by the Commonwealth to allow government agencies and industry to efficiently compare or verify personal information on identity documents against existing government records, such as passports, driver licences and birth certificates. 

3. 1:1 matching services (the Document Verification Service and the Face Verification Service) are now used every day by Commonwealth, State and Territory government agencies and industry to securely verify identities. In 2022, the DVS was used over 140 million times by approximately 2700 government and industry sector organisations, and there were approximately 2.6 million FVS transactions in the 2022-23 financial year. 

4. Examples of the current uses of the DVS and FVS include:

• verifying the identity of an individual when establishing a myGovID to access online services, including services provided by the Australian Taxation Office 

• financial service providers, such as banks, when seeking to verify the identity of their customers and to meet the 'know your customer' obligation under the Anti-Money Laundering and Counter-Terrorism Financing Act 2006 (Cth) 

• Government agencies when providing services, disaster relief and welfare payments, and 

• Commonwealth, state and territory government agencies verifying identity in order to provide or change credentials. 

5. The Identity Verification Services Bill 2023 establishes new primary legislation that provides a legislative framework to support the operation of the identity verification services. The Bill will support the efficient and secure operation of the services without compromising the privacy of the Australian community. 

6. The IVS Bill will:

• authorise 1:1 matching of identity through the identity verification services, with consent of the relevant individual, by public and private sector entities. This will be enabled by:

o the Document Verification Service which provides 1:1 matching to verify biographic information (such as a name or date of birth), with consent, against government issued identification documents;

o the Face Verification Service which provides 1:1 matching to verify biometric information (in this case a photograph or facial image of an individual), with consent, against a Commonwealth, state or territory issued identification document (for example, passports and driver licences); and

o the National Driver Licence Facial Recognition Solution which enables the FVS to conduct 1:1 matching against State and Territory identification documents such as driver licences. 

• authorise 1:many matching services through the Face Identification Service only for the purpose of protecting the identity of persons with a legally assumed identity, such as undercover officers and protected witnesses. The protection of legally assumed identities will also be supported by the use of the FVS. All other uses of 1:many matching through the identity verification services will not be authorised, and will therefore be prohibited. 

• authorise the responsible Commonwealth department - in this case the Attorney-General's Department - to develop, operate and maintain the identity verification facilities (the DVS hub, the Face Matching Service Hub and the NDLFRS). These approved identity verification facilities will be used to provide the identity verification services. These facilities will relay electronic communications between persons and bodies for the purposes of requesting and providing identity verification services. 
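The Bill's distinction between 1:1 verification (DVS/FVS) and 1:many identification (FIS) can be illustrated with a toy sketch. To be clear, the similarity function, threshold, and records below are wholly invented for illustration; they bear no relation to how the actual services compare documents or biometrics. The structural point is that verification compares a probe against one claimed record, while identification searches the probe against the whole database:

```python
def similarity(a, b):
    """Stand-in for a biographic/biometric comparison score in [0, 1]."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

# Hypothetical record store keyed by document number.
records = {"P123": "alice example 1990-01-01", "P456": "bob sample 1985-06-15"}

def verify_1_to_1(claimed_id, probe, threshold=0.9):
    """1:1 (DVS/FVS-style): does the probe match the single claimed record?"""
    return similarity(records[claimed_id], probe) >= threshold

def identify_1_to_many(probe, threshold=0.9):
    """1:many (FIS-style): which records, if any, match the probe?
    The Bill confines this mode to protecting legally assumed identities."""
    return [rid for rid, rec in records.items()
            if similarity(rec, probe) >= threshold]
```

The privacy rationale for the Bill's asymmetric treatment falls out of the shape of the two functions: 1:1 matching touches only the record the individual has put forward, whereas 1:many matching necessarily trawls every record in the database, which is why the Bill prohibits it outside the narrow legally-assumed-identity case.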

7. Subject to robust privacy safeguards, the Department will be authorised to collect, use and disclose identification information through the approved identity verification facilities for the purpose of providing identity verification services and developing, operating and maintaining the NDLFRS. Offences will apply to certain entrusted persons for the unauthorised recording, disclosure or accessing of protected information. 

8. The Bill ensures that the operation of the identity verification services and requests for the use of those services are subject to privacy protections and safeguards. These include consent and notice requirements, privacy impact assessments, requirements to report security breaches and data breaches, complaints handling, annual compliance reporting and transparency about how information will be collected, used and disclosed. Furthermore, privacy law and/or the Australian Privacy Principles will apply to almost all entities that seek to make a request for identity verification services. These privacy protections and safeguards will be set out in participation agreements. 

9. Government authorities that supply identification information that is used for the purpose of identity verification services will also be subject to the privacy protections and safeguards captured in the participation agreement. Breaches of participation agreements can lead to suspension or termination of the agreement, meaning that the entity would no longer be able to request identity verification services. 

10. States or territories seeking to contribute to the NDLFRS will be subject to privacy obligations and safeguards, which are required by the Bill and will be set out in the NDLFRS hosting agreement. 

11. The Bill requires parties to the agreement to agree to be bound by the Privacy Act or a state or territory equivalent, or agree to be subject to the Australian Privacy Principles. The Bill requires state or territory authorities to inform individuals if their information is stored on the NDLFRS (and provide for a mechanism by which those persons can correct any errors), inform the Department and individuals whose information is stored on the NDLFRS of any data breaches, establish a complaints mechanism, and report annually to the Department on the party's compliance with the agreement. The Bill enables states and territories to limit the use of identity information stored on the NDLFRS, and requires the Department to maintain the security of the NDLFRS. The Department may suspend or terminate access to the NDLFRS in the event of a party's non-compliance with legislative obligations. 

12. To protect the privacy of Australians, the Department will be required to maintain the security of electronic communications to and from the approved identity verification facilities, and the information held in the NDLFRS. This information and communications must be encrypted and data breaches reported. 

13. There will be transparency about the operation of the approved identity verification facilities, including through extensive annual reporting requirements and annual assessments by the Information Commissioner on the operation and management of the facilities. 

14. The Bill reflects and seeks to implement aspects of the Commonwealth's commitments under the Intergovernmental Agreement on Identity Matching Services (Intergovernmental Agreement). The Intergovernmental Agreement provides that jurisdictions would share and match biographic and biometric information, with robust privacy safeguards, through the identity verification services. 

15. The Bill will be supported by the Identity Verification Services (Consequential Amendments) Bill which amends the Australian Passports Act 2005 to provide a clear legal basis for the Minister to disclose personal information for the purpose of participating in one of the following services to share or match information relating to the identity of a person: • the DVS or the FVS, or • any other service, specified or of a kind specified in the Minister's determination. 

16. The Consequential Amendments Bill will also allow for automated disclosures of personal information to a specified person via the DVS or the FVS. In combination, this comprehensively authorises the operation of the DVS and FVS in relation to Australian travel documents regulated by the Australian Passports Act.

13 September 2023

Profiling and Matching

The Explanatory Memo for the Identity Verification Services Bill 2023 (Cth) states 

 Identity verification services are a series of automated national services offered by the Commonwealth to allow government agencies and industry to efficiently compare or verify personal information on identity documents against existing government records, such as passports, driver licences and birth certificates. 

1:1 matching services (the Document Verification Service and the Face Verification Service) are now used every day by Commonwealth, State and Territory government agencies and industry to securely verify identities. In 2022, the DVS was used over 140 million times by approximately 2700 government and industry sector organisations, and there were approximately 2.6 million FVS transactions in the 2022-23 financial year. 

Examples of the current uses of the DVS and FVS include:

• verifying the identity of an individual when establishing a myGovID to access online services, including services provided by the Australian Taxation Office 

• financial service providers, such as banks, when seeking to verify the identity of their customers and to meet the ‘know your customer’ obligation under the Anti-Money Laundering and Counter Terrorism Financing Act 2006 (Cth) 

• Government agencies when providing services, disaster relief and welfare payments, and 

• Commonwealth, state and territory government agencies verifying identity in order to provide or change credentials. 

The Identity Verification Services Bill 2023 establishes new primary legislation that provides a legislative framework to support the operation of the identity verification services. The Bill will support the efficient and secure operation of the services without compromising the privacy of the Australian community. 

The IVS Bill will:

• authorise 1:1 matching of identity through the identity verification services, with consent of the relevant individual, by public and private sector entities. This will be enabled by:

o the Document Verification Service which provides 1:1 matching to verify biographic information (such as a name or date of birth), with consent, against government issued identification documents; 

o the Face Verification Service which provides 1:1 matching to verify biometric information (in this case a photograph or facial image of an individual), with consent, against a Commonwealth, state or territory issued identification document (for example, passports and driver licences); and 

o the National Driver Licence Facial Recognition Solution which enables the FVS to conduct 1:1 matching against State and Territory identification documents such as driver licences. 

• authorise 1:many matching services through the Face Identification Service only for the purpose of protecting the identity of persons with a legally assumed identity, such as undercover officers and protected witnesses. The protection of legally assumed identities will also be supported by the use of the FVS. All other uses of 1:many matching through the identity verification services will not be authorised, and will therefore be prohibited. 

• authorise the responsible Commonwealth department – in this case the Attorney General’s Department – to develop, operate and maintain the identity verification facilities (the DVS hub, the Face Matching Service Hub and the NDLFRS). These approved identity verification facilities will be used to provide the identity verification services. These facilities will relay electronic communications between persons and bodies for the purposes of requesting and providing identity verification services. 

Subject to robust privacy safeguards, the Department will be authorised to collect, use and disclose identification information through the approved identity verification facilities for the purpose of providing identity verification services and developing, operating and maintaining the NDLFRS. Offences will apply to certain entrusted persons for the unauthorised recording, disclosure or accessing of protected information. 

The Bill ensures that the operation of the identity verification services and requests for the use of those services are subject to privacy protections and safeguards. These include consent and notice requirements, privacy impact assessments, requirements to report security breaches and data breaches, complaints handling, annual compliance reporting and transparency about how information will be collected, used and disclosed. Furthermore, privacy law and/or the Australian Privacy Principles will apply to almost all entities that seek to make a request for identity verification services. These privacy protections and safeguards will be set out in participation agreements. 

Government authorities that supply identification information that is used for the purpose of identity verification services will also be subject to the privacy protections and safeguards captured in the participation agreement. Breaches of participation agreements can lead to suspension or termination of the agreement, meaning that the entity would no longer be able to request identity verification services. 

States or territories seeking to contribute to the NDLFRS will be subject to privacy obligations and safeguards, which are required by the Bill and will be set out in the NDLFRS hosting agreement. 

The Bill requires parties to the agreement to agree to be bound by the Privacy Act or a state or territory equivalent, or agree to be subject to the Australian Privacy Principles. The Bill requires state or territory authorities to inform individuals if their information is stored on the NDLFRS (and provide for a mechanism by which those persons can correct any errors), inform the Department and individuals whose information is stored on the NDLFRS of any data breaches, establish a complaints mechanism, and report annually to the Department on the party’s compliance with the agreement. The Bill enables states and territories to limit the use of identity information stored on the NDLFRS, and requires the Department to maintain the security of the NDLFRS. The Department may suspend or terminate access to the NDLFRS in the event of a party’s non-compliance with legislative obligations. 

To protect the privacy of Australians, the Department will be required to maintain the security of electronic communications to and from the approved identity verification facilities, and the information held in the NDLFRS. This information and communications must be encrypted and data breaches reported. 

There will be transparency about the operation of the approved identity verification facilities, including through extensive annual reporting requirements and annual assessments by the Information Commissioner on the operation and management of the facilities. 

The Bill reflects and seeks to implement aspects of the Commonwealth’s commitments under the Intergovernmental Agreement on Identity Matching Services (Intergovernmental Agreement). The Intergovernmental Agreement provides that jurisdictions would share and match biographic and biometric information, with robust privacy safeguards, through the identity verification services. 

The Bill will be supported by the Identity Verification Services (Consequential Amendments) Bill which amends the Australian Passports Act 2005 to provide a clear legal basis for the Minister to disclose personal information for the purpose of participating in one of the following services to share or match information relating to the identity of a person:

- the DVS or the FVS, or 

- any other service, specified or of a kind specified in the Minister’s determination. 

The Consequential Amendments Bill will also allow for automated disclosures of personal information to a specified person via the DVS or the FVS. In combination, this comprehensively authorises the operation of the DVS and FVS in relation to Australian travel documents regulated by the Australian Passports Act.

The Memo also states

... subclause 6(4) of the Bill ensures certain types of information are excluded and cannot be sought or requested through the identity verification services. This information is: 

  •  information or an opinion about an individual’s racial or ethnic origin, political opinions, membership of a political association, religious beliefs or affiliations, philosophical beliefs, membership of a trade union, sexual orientation or practices, or criminal record (paragraph (a)) 

  • health information about an individual (as defined in section 6FA of the Privacy Act) (paragraph (b)), and 

  • genetic information about an individual (paragraph (c))

02 September 2023

Age Verification

The national Government response to the 'Roadmap for Age Verification' developed by the eSafety Commissioner (eSafety) states 

The Roadmap acquits a key recommendation in the February 2020 House of Representatives Standing Committee on Social Policy and Legal Affairs (the Committee) report, Protecting the Age of Innocence (the report), which recommended that the Australian Government direct and adequately resource the eSafety Commissioner to expeditiously develop and publish a roadmap for the implementation of a regime of mandatory age verification for online pornographic material. The Government response to the report, released in June 2021, supported the recommendation and noted that the Roadmap would be based on ‘detailed research as to if and how a mandatory age verification mechanism or similar could practically be achieved in Australia’. 

The Roadmap makes a number of recommendations for Government, reflecting the multifaceted response needed to address the harms associated with Australian children accessing pornography. 

This Government response addresses these recommendations, sets out the Government’s response to this issue more broadly and outlines where work is already underway. This includes work being undertaken by eSafety under the Online Safety Act 2021, noting that since the Roadmap was first recommended in February 2020, the Australian Government has delivered major regulatory reform to our online safety framework with the passage of the Online Safety Bill on 23 July 2021 with bipartisan support, and the commencement of the Online Safety Act on 23 January 2022. The Online Safety Act sets out a world-leading framework comprising complaints-based schemes to respond to individual pieces of content, mechanisms to require increased transparency around industry’s efforts to support user safety, and mandatory and enforceable industry codes to establish a baseline for what the digital industry needs to do to address restricted and seriously harmful content and activity, including online pornography.   

The Roadmap highlights concerning evidence about children’s widespread access to online pornography 

Pornography is legal in Australia and is regulated under the Online Safety Act. Research shows that most Australian adults have accessed online pornography, with a 2020 survey by the CSIRO finding that 60 per cent of adults had viewed pornography. 

However, pornography is harmful to children, who are not equipped to understand its contents and context, and they should be protected from exposure to it online. Concerningly, a 2017 survey by the Australian Institute of Family Studies found that 44 per cent of children aged 9 to 16 had been exposed to sexual images within the previous month. 

The Roadmap highlights findings from eSafety’s research with 16-18-year-olds, revealing that of those who had seen online pornography (75% of participants), almost half had first encountered it when they were 13, 14, or 15 years old. Places where they encountered this content included pornography websites (70%), social media feeds (35%), ads on social media (28%), social media messages (22%), group chats (17%), and social media private groups/pages (17%). The Roadmap acknowledges that pornography is readily available through websites hosted offshore and also through a wide range of digital platforms accessed by children. 

The Roadmap finds an association between mainstream pornography and attitudes and behaviours which can contribute to gender-based violence. It identifies further potential harms including connections between online pornography and harmful sexual behaviours, and risky or unsafe sexual behaviours. 

The Roadmap finds age assurance technologies are immature, and present privacy, security, implementation and enforcement risks 

‘Age verification’ describes measures which could determine a person’s age to a high level of accuracy, such as by using official government identity documents. However, the Roadmap examines the use of broader ‘age assurance’ technologies which include measures that perform ‘age estimation’ functions. The Roadmap notes action already underway by industry to introduce and improve age assurance and finds that the market for age assurance products is immature, but developing. 

It is clear from the Roadmap that at present, each type of age verification or age assurance technology comes with its own privacy, security, effectiveness and implementation issues. 

For age assurance to be effective, it must: 

- work reliably without circumvention; 

- be comprehensively implemented, including where pornography is hosted outside of Australia’s jurisdiction; and 

- balance privacy and security, without introducing risks to the personal information of adults who choose to access legal pornography. 

Age assurance technologies cannot yet meet all these requirements. While industry is taking steps to further develop these technologies, the Roadmap finds that the age assurance market is, at this time, immature. 

The Roadmap makes clear that a decision to mandate age assurance is not ready to be taken. 

Until the technology to support mandatory age verification becomes available, the Government will require industry to do more and will hold it to account. The Australian Government has always made clear that industry holds primary responsibility for the safety of Australian users on their services. It is unacceptable for services used by children to lack appropriate safeguards to keep them safe. While many platforms are taking active steps to protect children, including through the adoption of age assurance mechanisms, more can and should be done. The Government is committed to ensuring industry delivers on its responsibility of keeping Australians, particularly children, safe on their platforms. 

Government will require new industry codes to protect children 

The effective implementation of the Online Safety Act is a priority of the Albanese Government, including the creation of new and strengthened industry codes to keep Australians safe online. The industry codes outline steps the online industry must take to limit access or exposure to, and distribution and storage of certain types of harmful online content. The eSafety Commissioner can move to an enforceable industry standard if the codes developed by industry do not provide appropriate community safeguards. 

The codes are being developed in two phases, the first phase addressing ‘class 1’ content, which is content that would likely be refused classification in Australia and includes terrorism and child sexual exploitation material. The second phase of the industry codes will address ‘class 2’ content, which is content that is legal but not appropriate for children, such as pornography. 

The codes and standards can apply to eight key sections of the online industry, which are set out in the Online Safety Act: 

- social media services (e.g. Facebook, Instagram and TikTok); 

- relevant electronic services (e.g. services used for messaging, email, video communications, and online gaming services, including Gmail and WhatsApp); 

- designated internet services (e.g. websites and end-user online storage and sharing services, including Dropbox and Google Drive); 

- internet search engine services (e.g. Google Search and Microsoft Bing); 

- app distribution services used to download apps (e.g. the Apple iOS and Google Play stores); 

- hosting services (e.g. Amazon Web Services and NetDC); 

- internet carriage services (e.g. Telstra, iiNet, Optus, TPG Telecom and Aussie Broadband); and 

- manufacturers and suppliers of any equipment that connects to the internet, and those who maintain and install it (e.g. modems, smart televisions, phones, tablets, smart home devices, e-readers etc). 

Phase 1 

Work on the first phase of codes commenced in early 2022, and on 11 April 2022 eSafety issued notices formally requesting the development of industry codes to address class 1 material. On 1 June 2023, the eSafety Commissioner agreed to register five of the eight codes that were drafted by industry. The eSafety Commissioner assessed these codes and found that they provide appropriate community safeguards in relation to creating and maintaining a safe online environment for end-users, empowering people to manage access and exposure to class 1 material, and strengthening the transparency of and accountability for class 1 material. 

The steps that industry must take under these codes include, for example: 

- a requirement for providers under the Social Media Services Code, including Meta, TikTok and Twitter, to remove child sexual exploitation material and pro-terror material within 24 hours of it being identified, and to take enforcement action against those distributing such material, including terminating accounts and preventing the creation of further accounts; and 

- a requirement for providers under the Internet Carriage Service Providers Code, including Telstra, iiNet and Optus, to ensure Australian end-users are advised on how to limit access to class 1 material by providing easily accessible information on filtering products, including through the Family Friendly Filter program, at or close to the time of sale. 

These registered codes will become enforceable by eSafety when they come into effect on 16 December 2023. 

The eSafety Commissioner requested that industry revise the code for Search Engine Services, to ensure it accounts for recent developments in the adoption of generative AI, and made the decision not to register the Relevant Electronic Services Code and Designated Internet Services Code. The eSafety Commissioner found that these two codes failed to provide appropriate community safeguards in relation to matters that are of substantial relevance to the community. For these sections of industry, eSafety will now move to develop mandatory and enforceable industry standards. The registered codes, including all of the steps industry are now required to take, are available at eSafety’s website: www.esafety.gov.au/industry/codes/register-online-industry-codes-standards. 

Phase 2 

The next phase of the industry codes process will address ‘class 2’ content, which is content that is legal, but not appropriate for children, such as pornography. 

In terms of the content of the code – which will be subject to a code development process – section 138(3) of the Online Safety Act 2021 outlines examples of matters that may be dealt with by industry codes and industry standards, and includes: 

- procedures directed towards the achievement of the objective of ensuring that online accounts are not provided to children without the consent of a parent or responsible adult; 

- procedures directed towards the achievement of the objective of ensuring that customers have the option of subscribing to a filtered internet carriage service; 

- giving end-users information about the availability, use and appropriate application of online content filtering software; 

- providing end-users with access to technological solutions to help them limit access to class 1 material and class 2 material; 

- providing end-users with advice on how to limit access to class 1 material and class 2 material; 

- action to be taken to assist in the development and implementation of online content filtering technologies; and 

- giving parents and responsible adults information about how to supervise and control children’s access to material. 

In light of the importance of this work, the Minister for Communications has written to the eSafety Commissioner asking that work on the second tranche of codes commence as soon as practicable, following the completion of the first tranche of codes. The Government notes the Roadmap recommends a pilot of age assurance technologies. Given the anticipated scope of the class 2 industry codes, this process will inform any future Government decisions related to a pilot of age assurance technologies. The Government will await the outcomes of the class 2 industry codes process before deciding on a potential trial of age assurance technologies. 

Government will lift industry transparency 

The Government also notes that the Online Safety Act 2021 sets out Basic Online Safety Expectations (BOSE) for the digital industry and empowers the eSafety Commissioner to require industry to report on what it is doing to address these expectations. A core expectation, set out in section 46(1)(d) of the Online Safety Act 2021, is that providers ‘…will take reasonable steps to ensure that technological and other measures are in effect to prevent access by children to class 2 material provided on the service’. The Online Safety (Basic Online Safety Expectations) Determination 2022 also provides examples of ‘reasonable steps’ that industry could take to meet this expectation, which includes ‘implementing age assurance mechanisms.’ 

The Commissioner is able to require online services to report on how they are meeting the BOSE. Noting the independence of the eSafety Commissioner’s regulatory decision-making processes, the Government would welcome the further use of these powers and the transparency that they bring to industry efforts to improve safety for Australians, and to measure the effectiveness of industry codes. 

Government will ensure regulatory frameworks remain fit-for-purpose 

The Government has committed to bring forward the independent statutory review of the Online Safety Act, which will be completed in this term of government. With the online environment constantly changing, an early review will ensure Australia’s legislative framework remains responsive to online harms and that the eSafety Commissioner can continue to keep Australians safe from harm. The review of the Privacy Act 1988 (Privacy Act Review) also considered children’s particular vulnerability to online harms, and the Privacy Act Review Report made several proposals to increase privacy protections for children online. The Government is developing the response to the Report, which will set out the pathway for reforms. 

The Privacy Act Review Report proposes enshrining a principle that recognises the best interests of the child and recommends the introduction of a Children’s Online Privacy Code modelled on the United Kingdom’s Age Appropriate Design Code. It is recommended that a Children’s Online Privacy Code apply to online services that are likely to be accessed by children. The requirements of the code would assist entities by clarifying the principles-based requirements of the Privacy Act in more prescriptive terms and provide guidance on how the best interests of the child should be upheld in the design of online services. For example, the code could address assessing a child’s capacity to consent, limiting certain collections, uses and disclosures of children’s personal information, setting default privacy settings, enabling children to exercise privacy rights, and balancing parental controls with a child’s right to autonomy and privacy. 

The requirements of the Code could also address whether entities need to take reasonable steps to establish an individual’s age with a level of certainty that is appropriate to the risks, for example by implementing age assurance mechanisms. 

More support and resources for families 

While the Government and our online safety regulator will continue working with industry on this challenge, tools are already available to prevent children accessing pornography online. 

The Government supports the eSafety Commissioner’s work in developing practical advice for parents, carers, educators and the community about safety technologies. These products include online resources such as fact sheets, advice and referral information, and regular interactive webinars. These resources are freely available through the eSafety Commissioner’s website at: www.eSafety.gov.au. The Roadmap proposes the establishment of an Online Safety Tech Centre to support parents, carers and others to understand and apply safety technologies that work best for them. The Government has sought further advice from the eSafety Commissioner about this proposal to inform further consideration. 

The Roadmap also recommends that the Government: 

- fund eSafety to develop new, evidence-based resources about online pornography for educators, parents and children; and 

- develop industry guidance products and further work to identify barriers to the uptake of safety technologies such as internet filters and parental controls. 

The Government supports these recommendations. In the 2023-24 Budget the Government provided eSafety with an additional $132.1 million over four years to improve online safety, increasing base funding from $10.3 million to $42.5 million per year. This ongoing and indexed funding provides Australia’s online safety regulator with funding certainty, allowing long-term operational planning, more resourcing for its regulatory processes, and increased education and outreach. 

The eSafety Commissioner works closely with Communications Alliance – an industry body representing the communications sector – to provide the Family Friendly Filter program. Under this program, internet filtering products undergo rigorous independent testing for effectiveness, ease of use, configurability and availability of support prior to certification as a Family Friendly Filter. Filter providers must also agree to update their products as required by eSafety, for example where eSafety determines, following a complaint, that a specified site is prohibited under Australian law.

30 August 2023

Capacity

'A Congressional Incapacity Amendment to the United States Constitution' by John J. Martin in 76 Stanford Law Review Online comments 

Aside from waiting for the next election, the U.S. Constitution provides no proper recourse for the incapacitation of a member of Congress. Indeed, there presently exist no practical means of ensuring that representation continues undisrupted for affected constituents. This is problematic and antithetical to our democracy. And with Congress’s average age on the rise, the problem may only get worse. 

There has been little discussion within legal scholarship about congressional incapacity, despite its democratic implications. The few existing pieces that have discussed the topic have largely adopted an institutional perspective, focusing on the issue of mass, Congress-wide incapacitation rather than a constituency-minded perspective that focuses on individual instances of incapacitation. This Essay distinguishes itself from existing literature by focusing on the latter. Using the Twenty-Fifth Amendment’s strengths and weaknesses as a blueprint, this Essay provides one of the first attempts to lay out how precisely one could resolve individual congressional incapacity — simply referred to as “congressional incapacity” for the remainder of the Essay — by constitutional amendment. 

Specifically, the Essay imagines such an amendment having two sections. The first section would permit members of Congress to temporarily transfer their powers to an interim appointee in times of short-term incapacity, with limitations regarding the residency and party affiliation of said appointee. The second section would create a process for involuntary transfers of powers whenever a member of Congress has a long-term incapacitation but is unable or unwilling to use the voluntary transfer process or resign. This process would be multi-layered, involving the will of the affected constituents, either by direct vote or proxy via state legislatures, an independent board of medical experts appointed and regulated by Congress, and potentially Congress itself. Through this design, the process would ensure that decisions of involuntary transfer would still maintain democratic legitimacy all while minimizing the effects of personal or partisan biases. 

20 August 2023

Thaler

Another unsurprising loss in the latest Thaler judgment, with Howell J in Stephen Thaler v Shira Perlmutter (Register of Copyrights and Director of the United States Copyright Office, et al) endorsing the Register's rejection of Thaler's attempt to register a computer-generated (as distinct from computer-assisted) work with the computer as author. 

The judgment states

Plaintiff Stephen Thaler owns a computer system he calls the “Creativity Machine,” which he claims generated a piece of visual art of its own accord. He sought to register the work for a copyright, listing the computer system as the author and explaining that the copyright should transfer to him as the owner of the machine. The Copyright Office denied the application on the grounds that the work lacked human authorship, a prerequisite for a valid copyright to issue, in the view of the Register of Copyrights. Plaintiff challenged that denial, culminating in this lawsuit against the United States Copyright Office and Shira Perlmutter, in her official capacity as the Register of Copyrights and the Director of the United States Copyright Office (“defendants”). Both parties have now moved for summary judgment, which motions present the sole issue of whether a work generated entirely by an artificial system absent human involvement should be eligible for copyright. .... 

For the reasons explained below, defendants are correct that human authorship is an essential part of a valid copyright claim, and therefore plaintiff’s pending motion for summary judgment is denied and defendants’ pending cross-motion for summary judgment is granted. 

I. BACKGROUND 

Plaintiff develops and owns computer programs he describes as having “artificial intelligence” (“AI”) capable of generating original pieces of visual art, akin to the output of a human artist. .... One such AI system—the so-called “Creativity Machine”—produced the work at issue here, titled “A Recent Entrance to Paradise:” ... 

After its creation, plaintiff attempted to register this work with the Copyright Office. In his application, he identified the author as the Creativity Machine, and explained the work had been “autonomously created by a computer algorithm running on a machine,” but that plaintiff sought to claim the copyright of the “computer-generated work” himself “as a work-for-hire to the owner of the Creativity Machine.” ... see also id. at 2 (listing “Author” as “Creativity Machine,” the work as “[c]reated autonomously by machine,” and the “Copyright Claimant” as “Steven [sic] Thaler” with the transfer statement, “Ownership of the machine”). 

The Copyright Office denied the application on the basis that the work “lack[ed] the human authorship necessary to support a copyright claim,” noting that copyright law only extends to works created by human beings. ... 

Plaintiff requested reconsideration of his application, confirming that the work “was autonomously generated by an AI” and “lack[ed] traditional human authorship,” but contesting the Copyright Office’s human authorship requirement and urging that AI should be “acknowledge[d] . . . as an author where it otherwise meets authorship criteria, with any copyright ownership vesting in the AI’s owner.” 

Again, the Copyright Office refused to register the work, reiterating its original rationale that “[b]ecause copyright law is limited to ‘original intellectual conceptions of the author,’ the Office will refuse to register a claim if it determines that a human being did not create the work.”... Plaintiff made a second request for reconsideration along the same lines as his first, see id., Ex. G, Second Request for Reconsideration at 2, ECF No. 13-7, and the Copyright Office Review Board affirmed the denial of registration, agreeing that copyright protection does not extend to the creations of non-human entities, Final Refusal Letter at 4, 7. 

Plaintiff timely challenged that decision in this Court, claiming that defendants’ denial of copyright registration to the work titled “A Recent Entrance to Paradise,” was “arbitrary, capricious, an abuse of discretion and not in accordance with the law, unsupported by substantial evidence, and in excess of Defendants’ statutory authority,” in violation of the Administrative Procedure Act (“APA”), 5 U.S.C. § 706(2). See Compl. ¶¶ 62–66, ECF No. 1. 

The parties agree upon the key facts narrated above to focus, in the pending cross-motions for summary judgment, on the sole legal issue of whether a work autonomously generated by an AI system is copyrightable. ... Those motions are now ripe for resolution. ... 

DISCUSSION 

Under the Copyright Act of 1976, copyright protection attaches “immediately” upon the creation of “original works of authorship fixed in any tangible medium of expression,” provided those works meet certain requirements. Fourth Estate Public Benefit Corp. v. Wall-Street.com, LLC, 139 S. Ct. 881, 887 (2019); 17 U.S.C. § 102(a). A copyright claimant can also register the work with the Register of Copyrights. Upon concluding that the work is indeed copyrightable, the Register will issue a certificate of registration, which, among other advantages, allows the claimant to pursue infringement claims in court. 17 U.S.C. §§ 410(a), 411(a); Unicolors, Inc. v. H&M Hennes & Mauritz, L.P., 142 S. Ct. 941, 944–45 (2022). A valid copyright exists upon a qualifying work’s creation and “apart” from registration, however; a certificate of registration merely confirms that the copyright has existed all along. See Fourth Estate, 139 S. Ct. at 887. Conversely, if the Register denies an application for registration for lack of copyrightable subject matter—and did not err in doing so—then the work at issue was never subject to copyright protection at all. 

In considering plaintiff’s copyright registration application as to “A Recent Entrance to Paradise,” the Register concluded that “this particular work will not support a claim to copyright” because the work lacked human authorship and thus no copyright existed in the first instance. First Refusal Letter at 1; see also Final Refusal Letter at 3 (providing the same rationale in the final reconsideration decision). By design in plaintiff’s framing of the registration application, then, the single legal question presented here is whether a work generated autonomously by a computer falls under the protection of copyright law upon its creation. 

Plaintiff attempts to complicate the issues presented by devoting a substantial portion of his briefing to the viability of various legal theories under which a copyright in the computer’s work would transfer to him, as the computer’s owner; for example, by operation of common law property principles or the work-for-hire doctrine. ... These arguments concern to whom a valid copyright should have been registered, and in so doing put the cart before the horse. [In pursuing these arguments, plaintiff elaborates on his development, use, ownership, and prompting of the AI generating software in the so-called “Creativity Machine,” implying a level of human involvement in this case entirely absent in the administrative record. As detailed, supra, in Part I, plaintiff consistently represented to the Register that the AI system generated the work “autonomously” and that he played no role in its creation, see Application at 2, and judicial review of the Register’s final decision must be based on those same facts. ]

By denying registration, the Register concluded that no valid copyright had ever existed in a work generated absent human involvement, leaving nothing at all to register and thus no question as to whom that registration belonged. The only question properly presented, then, is whether the Register acted arbitrarily or capriciously or otherwise in violation of the APA in reaching that conclusion. 

The Register did not err in denying the copyright registration application presented by plaintiff. United States copyright law protects only works of human creation. Plaintiff correctly observes that throughout its long history, copyright law has proven malleable enough to cover works created with or involving technologies developed long after traditional media of writings memorialized on paper. See, e.g., Goldstein v. California, 412 U.S. 546, 561 (1973) (explaining that the constitutional scope of Congress’s power to “protect the ‘Writings’ of ‘Authors’” is “broad,” such that “writings” is not “limited to script or printed material,” but rather encompasses “any physical rendering of the fruits of creative intellectual or aesthetic labor”); Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884) (upholding the constitutionality of an amendment to the Copyright Act to cover photographs).  

In fact, that malleability is explicitly baked into the modern incarnation of the Copyright Act, which provides that copyright attaches to “original works of authorship fixed in any tangible medium of expression, now known or later developed.” 17 U.S.C. § 102(a) (emphasis added). Copyright is designed to adapt with the times. Underlying that adaptability, however, has been a consistent understanding that human creativity is the sine qua non at the core of copyrightability, even as that human creativity is channeled through new tools or into new media. In Sarony, for example, the Supreme Court reasoned that photographs amounted to copyrightable creations of “authors,” despite issuing from a mechanical device that merely reproduced an image of what is in front of the device, because the photographic result nonetheless “represent[ed]” the “original intellectual conceptions of the author.” Sarony, 111 U.S. at 59. 

A camera may generate only a “mechanical reproduction” of a scene, but does so only after the photographer develops a “mental conception” of the photograph, which is given its final form by that photographer’s decisions like “posing the [subject] in front of the camera, selecting and arranging the costume, draperies, and other various accessories in said photograph, arranging the subject so as to present graceful outlines, arranging and disposing the light and shade, suggesting and evoking the desired expression, and from such disposition, arrangement, or representation” crafting the overall image. Id. at 59–60. Human involvement in, and ultimate creative control over, the work at issue was key to the conclusion that the new type of work fell within the bounds of copyright. Copyright has never stretched so far, however, as to protect works generated by new forms of technology operating absent any guiding human hand, as plaintiff urges here. Human authorship is a bedrock requirement of copyright. 

That principle follows from the plain text of the Copyright Act. The current incarnation of the copyright law, the Copyright Act of 1976, provides copyright protection to “original works of authorship fixed in any tangible medium of expression, now known or later developed, from which they can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.” 17 U.S.C. § 102(a). The “fixing” of the work in the tangible medium must be done “by or under the authority of the author.” Id. § 101. In order to be eligible for copyright, then, a work must have an “author.” To be sure, as plaintiff points out, the critical word “author” is not defined in the Copyright Act. See Pl.’s Mem. at 24. “Author,” in its relevant sense, means “one that is the source of some form of intellectual or creative work,” “[t]he creator of an artistic work; a painter, photographer, filmmaker, etc.” ... 

By its plain text, the 1976 Act thus requires a copyrightable work to have an originator with the capacity for intellectual, creative, or artistic labor. Must that originator be a human being to claim copyright protection? The answer is yes. 

The 1976 Act’s “authorship” requirement as presumptively being human rests on centuries of settled understanding. The Constitution enables the enactment of copyright and patent law by granting Congress the authority to “promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” U.S. Const. art. I, § 8, cl. 8. The issue of whether non-human sentient beings may be covered by “person” in the Copyright Act is only “fun conjecture for academics,” Justin Hughes, 'Restating Copyright Law’s Originality Requirement', 44 Columbia J L & Arts 383, 408–09 (2021), though useful in illuminating the purposes and limits of copyright protection as AI is increasingly employed. 

Nonetheless, delving into this debate is an unnecessary detour since “[t]he day sentient refugees from some intergalactic war arrive on Earth and are granted asylum in Iceland, copyright law will be the least of our problems.” 

As James Madison explained, “[t]he utility of this power will scarcely be questioned,” for “[t]he public good fully coincides in both cases [of copyright and patent] with the claims of individuals.” The Federalist No. 43 (James Madison). At the founding, both copyright and patent were conceived of as forms of property that the government was established to protect, and it was understood that recognizing exclusive rights in that property would further the public good by incentivizing individuals to create and invent. The act of human creation—and how to best encourage human individuals to engage in that creation, and thereby promote science and the useful arts—was thus central to American copyright from its very inception. Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them. 

The understanding that “authorship” is synonymous with human creation has persisted even as the copyright law has otherwise evolved. The immediate precursor to the modern copyright law—the Copyright Act of 1909—explicitly provided that only a “person” could “secure copyright for his work” under the Act. 

Copyright under the 1909 Act was thus unambiguously limited to the works of human creators. There is absolutely no indication that Congress intended to effect any change to this longstanding requirement with the modern incarnation of the copyright law. To the contrary, the relevant congressional report indicates that in enacting the 1976 Act, Congress intended to incorporate the “original work of authorship” standard “without change” from the previous 1909 Act. 

The human authorship requirement has also been consistently recognized by the Supreme Court when called upon to interpret the copyright law. As already noted, in Sarony, the Court’s recognition of the copyrightability of a photograph rested on the fact that the human creator, not the camera, conceived of and designed the image and then used the camera to capture the image. See Sarony, 111 U.S. at 60. The photograph was “the product of [the photographer’s] intellectual invention,” and given “the nature of authorship,” was deemed “an original work of art . . . of which [the photographer] is the author.” Id. at 60–61. Similarly, in Mazer v. Stein, the Court delineated a prerequisite for copyrightability to be that a work “must be original, that is, the author’s tangible expression of his ideas.” 347 U.S. 201, 214 (1954). Goldstein v. California, too, defines “author” as “an ‘originator,’ ‘he to whom anything owes its origin,’” 412 U.S. at 561 (quoting Sarony, 111 U.S. at 58). In all these cases, authorship centers on acts of human creativity. 

Accordingly, courts have uniformly declined to recognize copyright in works created absent any human involvement, even when, for example, the claimed author was divine. The Ninth Circuit, when confronted with a book “claimed to embody the words of celestial beings rather than human beings,” concluded that “some element of human creativity must have occurred in order for the Book to be copyrightable,” for “it is not creations of divine beings that the copyright laws were intended to protect.” Urantia Found. v. Kristen Maaherra, 114 F.3d 955, 958–59 (9th Cir. 1997) (finding that because the “members of the Contact Commission chose and formulated the specific questions asked” of the celestial beings, and then “select[ed] and arrange[d]” the resultant “revelations,” the Urantia Book was “at least partially the product of human creativity” and thus protected by copyright); see also Penguin Books U.S.A., Inc. v. New Christian Church of Full Endeavor, 96-cv-4126 (RWS), 2000 WL 1028634, at *2, 10–11 (S.D.N.Y. July 25, 2000) (finding a valid copyright where a woman had “filled nearly thirty stenographic notebooks with words she believed were dictated to her” by a “‘Voice’ which would speak to her whenever she was prepared to listen,” and who had worked with two human co-collaborators to revise and edit those notes into a book, a process which involved enough creativity to support human authorship); Oliver v. St. Germain Found., 41 F. Supp. 296, 297, 299 (S.D. Cal. 1941) (finding no copyright infringement where plaintiff claimed to have transcribed “letters” dictated to him by a spirit named Phylos the Thibetan, and defendant copied the same “spiritual world messages for recordation and use by the living” but was not charged with infringing plaintiff’s “style or arrangement” of those messages). Similarly, in Kelley v. 
Chicago Park District, the Seventh Circuit refused to “recognize[] copyright” in a cultivated garden, as doing so would “press[] too hard on the[] basic principle[]” that “[a]uthors of copyrightable works must be human.” 635 F.3d 290, 304–06 (7th Cir. 2011). The garden “ow[ed] [its] form to the forces of nature,” even if a human had originated the plan for the “initial arrangement of the plants,” and as such lay outside the bounds of copyright. Id. at 304. Finally, in Naruto v. Slater, the Ninth Circuit held that a crested macaque could not sue under the Copyright Act for the alleged infringement of photographs this monkey had taken of himself, for “all animals, since they are not human” lacked statutory standing under the Act. 888 F.3d 418, 420 (9th Cir. 2018). While resolving the case on standing grounds, rather than the copyrightability of the monkey’s work, the Naruto Court nonetheless had to consider whom the Copyright Act was designed to protect and, as with those courts confronted with the nature of authorship, concluded that only humans had standing, explaining that the terms used to describe who has rights under the Act, like “‘children,’ ‘grandchildren,’ ‘legitimate,’ ‘widow,’ and ‘widower[,]’ all imply humanity and necessarily exclude animals.” Id. at 426. Plaintiff can point to no case in which a court has recognized copyright in a work originating with a non-human. 

Undoubtedly, we are approaching new frontiers in copyright as artists put AI in their toolbox to be used in the generation of new visual and other artistic works. The increased attenuation of human creativity from the actual generation of the final work will prompt challenging questions regarding how much human input is necessary to qualify the user of an AI system as an “author” of a generated work, the scope of the protection obtained over the resultant image, how to assess the originality of AI-generated works where the systems may have been trained on unknown pre-existing works, how copyright might best be used to incentivize creative works involving AI, and more. See, e.g., Letter from Senators Thom Tillis and Chris Coons to Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the U.S. Patent and Trademark Office, and Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office (Oct. 27, 2022), https://www.copyright.gov/laws/hearings/Letter-to-USPTO-USCO-on-National-Commission-on-AI-1.pdf (requesting that the United States Patent and Trademark Office and the United States Copyright Office “jointly establish a national commission on AI” to assess, among other topics, how intellectual property law may best “incentivize future AI related innovations and creations”). 

This case, however, is not nearly so complex. While plaintiff attempts to transform the issue presented here, by asserting new facts that he “provided instructions and directed his AI to create the Work,” that “the AI is entirely controlled by [him],” and that “the AI only operates at [his] direction,” Pl.’s Mem. at 36–37—implying that he played a controlling role in generating the work—these statements directly contradict the administrative record. Judicial review of a final agency action under the APA is limited to the administrative record, because “[i]t is black-letter administrative law that in an [APA] case, a reviewing court should have before it neither more nor less information than did the agency when it made its decision.” ... 

Here, plaintiff informed the Register that the work was “[c]reated autonomously by machine,” and that his claim to the copyright was only based on the fact of his “[o]wnership of the machine.” Application at 2. The Register therefore made her decision based on the fact the application presented that plaintiff played no role in using the AI to generate the work, which plaintiff never attempted to correct. See First Request for Reconsideration at 2 (“It is correct that the present submission lacks traditional human authorship—it was autonomously generated by an AI.”); Second Request for Reconsideration at 2 (same). Plaintiff’s effort to update and modify the facts for judicial review on an APA claim is too late. 

On the record designed by plaintiff from the outset of his application for copyright registration, this case presents only the question of whether a work generated autonomously by a computer system is eligible for copyright. In the absence of any human involvement in the creation of the work, the clear and straightforward answer is the one given by the Register: No. 

Given that the work at issue did not give rise to a valid copyright upon its creation, plaintiff’s myriad theories for how ownership of such a copyright could have passed to him need not be further addressed. Common law doctrines of property transfer cannot be implicated where no property right exists to transfer in the first instance. The work-for-hire provisions of the Copyright Act, too, presuppose that an interest exists to be claimed. See 17 U.S.C. § 201(b) (“In the case of a work made for hire, the employer . . . owns all of the rights comprised in the copyright.”). Here, the image autonomously generated by plaintiff’s computer system was never eligible for copyright, so none of the doctrines invoked by plaintiff conjure up a copyright over which ownership may be claimed.

In any event, plaintiff’s attempts to cast the work as a work-for-hire must fail as both definitions of a “work made for hire” available under the Copyright Act require that the individual who prepares the work is a human being. The first definition provides that “a ‘work made for hire’ is . . . a work prepared by an employee within the scope of his or her employment,” while the second qualifies certain eligible works “if the parties expressly agree in a written instrument signed by them that the work shall be considered a work made for hire.” 17 U.S.C. § 101 (emphasis added). The use of personal pronouns in the first definition clearly contemplates only human beings as eligible “employees,” while the second necessitates a meeting of the minds and exchange of signatures in a valid contract not possible with a non-human entity. 


15 August 2023

Document Execution

The Attorney-General’s Department consultation on proposed reform to the execution of Commonwealth statutory declarations considers amendment of the Statutory Declarations Act 1959 (Cth) and the Statutory Declarations Regulations 2018. Temporary measures allowing e-execution of a Commonwealth statutory declaration are due to expire on 31 December 2023. 

The 2021 Modernising Document Execution Consultation undertaken by the Department of the Prime Minister and Cabinet Deregulation Taskforce found strong stakeholder support for the introduction of e-execution and digital execution pathways for statutory declarations. It found that the paper-based system did not meet the needs and expectations of individuals or small businesses, costing time and money. 

The proposed amendment of the Act and the Regulations would establish a framework to allow a Commonwealth statutory declaration to be executed in one of three ways: 

• traditional paper-based execution (requiring wet-ink signatures and in-person witnessing) 

• e-execution (allowing electronic signatures and witnessing via audio-visual link), and 

• digital execution (end-to-end online execution, with digital identity providers to verify identity and satisfy witnessing requirements). 

The proposal would involve minor amendments to the Act prescribing the execution options available to validly execute a Commonwealth statutory declaration, supported by regulations setting out the technical requirements for each prescribed execution option. The A-G's Department is "particularly interested in stakeholder views on the proposal to allow digital execution of a Commonwealth statutory declaration", integrated with the Australian Government Digital Identity System and supporting the expansion of services offered through the myGov platform. 

 The A-G's short discussion paper states 

A Commonwealth statutory declaration is a legal document that contains a written statement about something that the declarant is asserting to be true. It provides a mechanism for the declarant to vouch for the veracity of its contents, where it would be difficult to prove in another way. It is a criminal offence to intentionally make a false statement in a Commonwealth statutory declaration, carrying a penalty of four years' imprisonment. 

Currently, execution of Commonwealth statutory declarations requires three elements to be satisfied: the use of the prescribed form, the signing of the declaration by the declarant and the witnessing of the declarant’s signature by a prescribed person. 

The following sets out why each of the options within the execution framework is being considered and how it will work. 

1. Traditional, paper-based execution 

The proposal will maintain the traditional, paper-based execution option for those who do not have access to the required technology, or prefer not to engage with the other technology-based execution options. Under this option, a person would make their declaration on paper and sign it using wet-ink in the presence of a prescribed person. 

2. E-execution 

Electronic execution of Commonwealth statutory declarations will be provided for through the use of electronic signatures, witnessing via audio-visual link and the use of copies for the purpose of execution. This will make the arrangements provided for by the current temporary measures permanent. The proposal would also allow a declarant or witness to sign electronically. 

The department has received positive feedback about the functioning of e-execution under the current temporary measures. Continuing to make e-execution available will provide options for those who want to engage with electronic execution but are unable or unwilling to obtain a digital identity, as required for the digital execution option. 

3. Digital execution 

The department has developed a proposal for digital execution in response to the 2021 consultation, which found that a digital execution pathway could address many problems identified by stakeholders with the paper-based-only system. Particularly, the provision of a digital execution option would benefit those who face barriers engaging with a paper-based process, such as those in rural, remote or regional parts of Australia, and those Australians experiencing low mobility or sensory issues. 

The 2021 consultation recognised that paper-based systems are not ‘risk-free’ and digital solutions may be trusted more by the community due to ‘digital innovations…strengthening document security and credibility in other domains.’ 

The proposal for digital execution is designed to be simple and robust, and sit cohesively within the Commonwealth statutory declaration execution framework. The execution requirements will integrate the Australian Government Digital Identity System. This will allow existing digital infrastructure (e.g. myGov and myGovID) to be leveraged to provide a digital document execution service for Australians to execute Commonwealth statutory declarations. 

Reflecting on lessons learned throughout the COVID-19 pandemic, and responding to community views on how people wish to engage with legal documents, particularly those administered by government, the department considers providing a digital execution option to be appropriate and responsive. 

The Requirements 

‘Digital execution’ would involve the end-to-end execution of Commonwealth statutory declarations through an online platform that utilises a Digital Identity Provider approved to operate within a digital identity system maintained by the Australian Government. These requirements are intended to ensure that digital execution sits within the safeguards and frameworks set by the Australian Government.