19 November 2022

Pseudolegal

In Michelle Jenkins v Home@Scope Pty Ltd [2022] FWCFB 207 the Fair Work Commission has considered yet another pseudolegal claim regarding a vaccination-related dismissal, with the Full Bench stating

 [4] For the reasons that follow, permission to appeal is refused. 

Decision Under Appeal 

[5] The Commissioner began by considering s.396 of the Act and was satisfied by all the initial matters. The Commissioner then had regard to s.385, noting subsections (a), (c) and (d) were satisfied, leaving her to determine whether the dismissal was harsh, unjust or unreasonable pursuant to s.385(b) and considering the factors in s.387. 

Valid reason for dismissal – s.387(a) 

...   

[9] The Respondent submitted that the Appellant was terminated because she could no longer meet the inherent requirements of her role. The Appellant submitted that there was no valid reason for her dismissal related to capacity or conduct and made assertions as to the validity of the Directions and Orders. The Appellant also submitted that she was able to perform the administrative part of her role remotely and therefore she should not have been dismissed. Further, the Appellant submitted:

a) her contract of employment falls under federal law and cannot be overridden by a pandemic declaration; 

b) the vaccination mandate is unconstitutional; 

c) under the Nuremberg Code, it is a criminal act to pressure or coerce someone into having a vaccination; and 

d) her vaccination status is protected by the Privacy Act 1988 (Cth). 

[10] The Commissioner rejected the Appellant’s submissions, finding that they could not be sustained. Further, the Commissioner observed that the Directions and Orders were valid at the time of the Decision. The Commissioner noted there is no contest that the Appellant did not provide vaccination information to the Respondent and has at no stage claimed to be an excepted person to the Directions or Orders. 

[11] The Commissioner was therefore satisfied that the effect of the Direction prohibited the Respondent from allowing the Appellant to provide direct care to clients in their homes. The Respondent would have been in breach of its legal obligations if it allowed the Appellant to work outside of her primary residence from 15 October 2021. Further, the Commissioner was satisfied that the direction given by the Respondent was reasonable. ... 

[17] In conclusion, the Commissioner was satisfied that the Appellant’s dismissal was not harsh, unjust or unreasonable and therefore not unfair. The Commissioner dismissed the Appellant’s application. 

Grounds of Appeal 

[18] The Appellant provided lengthy submissions on her sovereignty and the COVID-19 vaccination mandate. While we have not set out these submissions we have read and considered them for this appeal. The Appellant’s grounds of appeal as set out in her F7 – Notice of Appeal are as follows:

“Legislated reason I was dismissed for still has not been given to me by either Home@Scope or in the response to my application. 

As per (50) of the decision, you state I did not respond after the 13th when quiet clearly YOU have an email sent to Home@Scope dated after the 13th in which they did NOT respond. 

Evidence in a new case shows that YOU are unlawfully acting under a false coat of arms. 

You have NOT taken into any account the evidence I have provided nor have you provided me with any evidence of the authority of the CHO, PHO or State Premier. 

“Vaccination” there is NO authorised “vaccine” there is an experimental gene therapy that Greg Hunt spoke of, and as for that reason I will be seeking legal advice for an attempt on my life.”

[19] In terms of why the appeal is in the public interest the Appellant submitted:

“I was dismissed because I followed a Constitutional Act and Act of the 1900 Constitution. This is in the Interest of the public as this is the constitution we the people follow and is lawful. The seal of the crown being removed in 1953 makes any laws altered without a referendum UNLAWFUL. 

Section 51 xxiii9(A) medical and dental services – as to not authorise civil conscription. Clause 5 of the constitution states “operation of the constitution, shall be binding on the courts, judges and the people of every state and of every part of the Commonwealth.” I believe you have breached this in your decision as I was adhering to my rights of my privacy under the Biosecurity Act 2015 and the Privacy Act. Every Law whether Federal or state is SUBJECT to the Constitution, and if a law is inconsistant (sic) with the Constitution it is INVALID. 

INFORMED CONSENT was NOT given to any living person on this earth. 

It is now being made public that this gene therapy is a form of genocide in which Home@scope are being complicit and not only my decision under a LAWFUL act of not disclosing my medical information which falls under the Nuremberg Code as this is an International LAW. It is now my understanding that millions have died from this experiment hence the reason I believe this to be in the best interest of everyone in the public. 

So I was dismissed for trying to stay alive. How is that not UNFAIR?”

In Ms Michelle Jenkins v Home@Scope Pty Ltd [2022] FWC 2003 the Commission had noted Jenkins' statement:

“... Take note: 

I protest the interference of a medical service upon me of unknown consequences, and I protest the inspection that violates my medical privacy. 

I request production of the written law that requires of me to undergo a forced vaccination as a prerequisite of my employment. 

I request the production of the written data that proves the vaccine has undergone the clinical trials required of vaccines to prove its safety. 

I request that the law for mandated vaccinations be made pursuant to the constitutional guarantee. 

I request that the health directions and mandates be proved, for enforcement, that it has been made in the fulfilment of the law that governs this Commonwealth, for which unites and protects us. 

Failure to produce the written law mandating this forced vaccination, within three days of this notice, shall be taken to be unwarranted coercion and workplace harassment for which substantial compensation may be due...”

Unlike the recent Miroch dispute, Jenkins does not appear to have sought compensation in the form of bars of silver.

18 November 2022

Algorithmics

'The flaws of policies requiring human oversight of government algorithms' by Ben Green in (2022) 45 Computer Law & Security Review comments 

 As algorithms become an influential component of government decision-making around the world, policymakers have debated how governments can attain the benefits of algorithms while preventing the harms of algorithms. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. Despite the widespread turn to human oversight, these policies rest on an uninterrogated assumption: that people are able to effectively oversee algorithmic decision-making. In this article, I survey 41 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a shift from human oversight to institutional oversight as the central mechanism for regulating government algorithms. This institutional approach operates in two stages. First, agencies must justify that it is appropriate to incorporate an algorithm into decision-making and that any proposed forms of human oversight are supported by empirical evidence. Second, these justifications must receive democratic review and approval before the agency can adopt the algorithm. 

 Green argues 

In recent years, governments across the world have turned to automated decision-making systems, often described as algorithms, to make or inform consequential decisions (Calo & Citron, 2021; Eubanks, 2018; Green, 2019; Henley & Booth, 2020). These developments have raised significant debate about when and how governments should adopt algorithms. On the one hand, algorithms bring the promise of making decisions more accurately, fairly, and consistently than public servants (Kleinberg et al., 2018; Kleinberg et al., 2015). On the other hand, the use of algorithms by governments has been a source of numerous injustices (Angwin et al., 2016; Calo & Citron, 2021; Eubanks, 2018; Green, 2019; Henley & Booth, 2020). The algorithms used in practice tend to be rife with errors and biases, leading to decisions that are based on incorrect information and that exacerbate inequities. Furthermore, making decisions via the rigid, rule-based logic of algorithms violates the principle that government decisions should respond to the circumstances of individual people. 

In the face of these competing hopes and fears about algorithmic decision-making, policymakers have explored regulatory approaches that could enable governments to attain the benefits of algorithms while avoiding the risks of algorithms. An emerging centerpiece of this global regulatory effort is to require human oversight of the decisions rendered by algorithms. Human oversight policies enable governments to use algorithms—but only if a human has some form of oversight or control over the final decision.1 In other words, algorithms may assist human decision-makers but may not make final judgments on their own. These policies are intended to ensure that a human plays a role of quality control, protecting against mistaken or biased algorithmic predictions. These policies aim to protect human rights and dignity by keeping a “human in the loop” of automated decision-making (Jones, 2017; Wagner, 2019). In theory, adopting algorithms while ensuring human oversight could enable governments to obtain the best of both worlds: the accuracy, objectivity, and consistency of algorithmic decision-making paired with the individualized and contextual discretion of human decision-making. 

Recent legislation assumes that human oversight protects against the harms of government algorithms. For instance, in its proposed Artificial Intelligence Act, the European Commission asserted that human oversight (along with other mechanisms) is “strictly necessary to mitigate the risks to fundamental rights and safety posed by AI” (European Commission, 2021). Following this logic, many policies position human oversight as a distinguishing factor that makes government use of algorithms permissible. The European Union's General Data Protection Regulation (GDPR) restricts significant decisions “based solely on automated processing” (European Parliament & Council of the European Union, 2016b). The Government of Canada requires federal agencies using high-risk AI systems to ensure that there is human intervention during the decision-making process and that a human makes the final decisions (Government of Canada, 2021). Washington State allows state and local government agencies to use facial recognition in certain instances, but only if high-impact decisions “are subject to meaningful human review” (Washington State Legislature, 2020). 

Despite the emphasis that legislators have placed on human oversight as a mechanism to mitigate the risks of government algorithms, the functional quality of these policies has not been thoroughly interrogated. Policymakers calling for human oversight invoke values such as human rights and dignity as a motivation for these policies, but rarely reference empirical evidence demonstrating that human oversight actually advances those values. In fact, when policies and policy guidance do reference empirical evidence about human-algorithm interactions, they usually express reservations about the limits of human oversight, particularly related to people over-relying on algorithmic advice (Engstrom et al., 2020; European Commission, 2021; UK Information Commissioner's Office, 2020). 

This lack of empirical grounding raises an important question: does human oversight provide reliable protection against algorithmic harms? Although inserting a “human in the loop” may appear to satisfy legal and philosophical principles, research into sociotechnical systems demonstrates that people and technologies often do not interact as expected (Suchman et al., 1999). Hybrid systems that require collaboration between humans and automated technologies are notoriously difficult to design, implement, and regulate effectively (Bainbridge, 1983; Gray & Suri, 2019; Jones, 2015; Pasquale, 2020; Perrow, 1999). Thus, given that human oversight is being enacted into policies across the world as a central safeguard against the risks of government algorithms, it is vital to ensure that human oversight actually provides the desired protections. If people cannot oversee algorithms as intended, human oversight policies would have the perverse effect of alleviating scrutiny of government algorithms without actually addressing the underlying concerns. 

This article interrogates the efficacy and impacts of human oversight policies. It proceeds in four parts. The first two parts lay out the context of my analysis. Section 2 provides background on the tensions and challenges raised by the use of algorithms in government decision-making. Section 3 describes the current landscape of human oversight policies. I survey 41 policy documents from across the world that provide some form of official mandate or guidance regarding human oversight of public sector algorithms. I find that these policies prescribe three approaches to human oversight. 

Section 4 evaluates the three forms of human oversight described in Section 3. Drawing on empirical evidence about how people interact with algorithms, I find that human oversight policies suffer from two significant flaws. First, human oversight policies are not supported by empirical evidence: the vast majority of research suggests that people cannot reliably perform any of the desired oversight functions. This first flaw leads to a second flaw: human oversight policies legitimize the use of flawed and unaccountable algorithms in government. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies create a regulatory loophole: it provides a false sense of security in adopting algorithms and enables vendors and agencies to foist accountability for algorithmic harms onto lower-level human operators. 

Section 5 suggests how to adapt regulation of government algorithms in light of the two flaws to human oversight policies described in Section 4. It is clear that policymakers must stop relying on human oversight as a remedy for the potential harms of algorithms. However, the correct response is not to simply abandon human oversight, leaving governments to depend on autonomous algorithmic judgments. Nor should regulators prohibit governments from ever using algorithms. Instead, legislators must develop alternative governance approaches that more rigorously address the concerns which motivate the (misguided) turn to human oversight. 

I propose a shift from human oversight to institutional oversight for regulating government algorithms. This involves a two-stage process. First, rather than assume that human oversight can address fundamental concerns about algorithmic decision-making, agencies must provide written justification of their decisions to adopt an algorithm in high-stakes decisions. If an algorithm violates fundamental rights, is badly suited to a decision-making process, or is untrustworthy, then governments should not use the algorithm, even with human oversight. As part of this justification, agencies should provide evidence that any proposed forms of human oversight are supported by empirical evidence. If there is not sufficient evidence demonstrating that human oversight is effective and that the algorithm improves human decision-making, then governments should not incorporate the algorithm into human decision-making processes. Second, agencies must make these written justifications publicly available. An agency is not allowed to use the algorithm until its report receives public review and approval. 

Compared to the status quo of blanket rules that enable governments to use algorithms as long as a human provides oversight, this institutional oversight approach will help to prevent human oversight from operating as a superficial salve for the injustices associated with algorithmic decision-making.

17 November 2022

Marks

'Albrecht Dürer's Enforcement Actions: A Trademark Origin Story' by Peter Karol in 25 Vanderbilt Journal of Entertainment and Technology Law (forthcoming) comments 

This article offers a reappraisal of a pair of remarkably contemporary enforcement actions brought by the Northern Renaissance artist Albrecht Dürer (1471-1528) against copyists of his work. These cases have long been debated by art, cultural and copyright historians insofar as they appear to reject Dürer’s demand for proto-copyright protection for his prints. But surprisingly little attention has been paid by trademark scholars to the companion holdings—in the same cases—that affirm Dürer’s right to prevent use of his monogram on unauthorized reproductions. 

This article seeks to fill that gap by analyzing Dürer’s cases through the lens of contemporary trademark theory. It argues that, properly contextualized and understood, these cases provide the first complete record we have of tribunals enjoining the unsanctioned use of a famous mark in commerce both to protect consumers from purchasing mislabeled goods and preserve the source-associative power of that sign. In so doing, they show us a path towards recentralizing the role of artists and authors as a core aspect of trademark law’s otherwise industrial legal history.