19 November 2022

Pseudolegal

In Michelle Jenkins v Home@Scope Pty Ltd [2022] FWCFB 207 the Fair Work Commission has considered yet another pseudolegal claim regarding a dismissal related to vaccination. The Full Bench stated -

 [4] For the reasons that follow, permission to appeal is refused. 

Decision Under Appeal 

[5] The Commissioner began by considering s.396 of the Act and was satisfied by all the initial matters. The Commissioner then had regard to s.385, noting subsections (a), (c) and (d) were satisfied, leaving her to determine whether the dismissal was harsh, unjust or unreasonable pursuant to s.385(b) and considering the factors in s.387. 

Valid reason for dismissal – s.387(a) 

...   

[9] The Respondent submitted that the Appellant was terminated because she could no longer meet the inherent requirements of her role. The Appellant submitted that there was no valid reason for her dismissal related to capacity or conduct and made assertions as to the validity of the Directions and Orders. The Appellant also submitted that she was able to perform the administrative part of her role remotely and therefore she should not have been dismissed. Further, the Appellant submitted:

a) her contract of employment falls under federal law and cannot be overridden by a pandemic declaration; 

b) the vaccination mandate is unconstitutional; 

c) under the Nuremberg Code, it is a criminal act to pressure or coerce someone into having a vaccination; and 

d) her vaccination status is protected by the Privacy Act 1988 (Cth). 

[10] The Commissioner rejected the Appellant’s submissions, finding that they could not be sustained. Further, the Commissioner observed that the Directions and Orders were valid at the time of the Decision. The Commissioner noted there is no contest that the Appellant did not provide vaccination information to the Respondent and has at no stage claimed to be an excepted person to the Directions or Orders. 

[11] The Commissioner was therefore satisfied that the effect of the Direction prohibited the Respondent from allowing the Appellant to provide direct care to clients in their homes. The Respondent would have been in breach of its legal obligations if it allowed the Appellant to work outside of her primary residence from 15 October 2021. Further, the Commissioner was satisfied that the direction given by the Respondent was reasonable. ... 

[17] In conclusion, the Commissioner was satisfied that the Appellant’s dismissal was not harsh, unjust or unreasonable and therefore not unfair. The Commissioner dismissed the Appellant’s application. 

Grounds of Appeal 

[18] The Appellant provided lengthy submissions on her sovereignty and the COVID-19 vaccination mandate. While we have not set out these submissions we have read and considered them for this appeal. The Appellant’s grounds of appeal as set out in her F7 – Notice of Appeal are as follows:

“Legislated reason I was dismissed for still has not been given to me by either Home@Scope or in the response to my application. 

As per (50) of the decision, you state I did not respond after the 13th when quiet clearly YOU have an email sent to Home@Scope dated after the 13th in which they did NOT respond. 

Evidence in a new case shows that YOU are unlawfully acting under a false coat of arms. 

You have NOT taken into any account the evidence I have provided nor have you provided me with any evidence of the authority of the CHO, PHO or State Premier. 

“Vaccination” there is NO authorised “vaccine” there is an experimental gene therapy that Greg Hunt spoke of, and as for that reason I will be seeking legal advice for an attempt on my life.”

[19] In terms of why the appeal is in the public interest the Appellant submitted:

“I was dismissed because I followed a Constitutional Act and Act of the 1900 Constitution. This is in the Interest of the public as this is the constitution we the people follow and is lawful. The seal of the crown being removed in 1953 makes any laws altered without a referendum UNLAWFUL. 

Section 51 xxiii9(A) medical and dental services – as to not authorise civil conscription. Clause 5 of the constitution states “operation of the constitution, shall be binding on the courts, judges and the people of every state and of every part of the Commonwealth.” I believe you have breached this in your decision as I was adhering to my rights of my privacy under the Biosecurity Act 2015 and the Privacy Act. Every Law whether Federal or state is SUBJECT to the Constitution, and if a law is inconsistant (sic) with the Constitution it is INVALID. 

INFORMED CONSENT was NOT given to any living person on this earth. 

It is now being made public that this gene therapy is a form of genocide in which Home@scope are being complicit and not only my decision under a LAWFUL act of not disclosing my medical information which falls under the Nuremberg Code as this is an International LAW. It is now my understanding that millions have died from this experiment hence the reason I believe this to be in the best interest of everyone in the public. 

So I was dismissed for trying to stay alive. How is that not UNFAIR?”

In Ms Michelle Jenkins v Home@Scope Pty Ltd [2022] FWC 2003 the Commission had noted Jenkins' statement -

“... Take note: 

I protest the interference of a medical service upon me of unknown consequences, and I protest the inspection that violates my medical privacy. 

I request production of the written law that requires of me to undergo a forced vaccination as a prerequisite of my employment. 

I request the production of the written data that proves the vaccine has undergone the clinical trials required of vaccines to prove its safety. 

I request that the law for mandated vaccinations be made pursuant to the constitutional guarantee. 

I request that the health directions and mandates be proved, for enforcement, that it has been made in the fulfilment of the law that governs this Commonwealth, for which unites and protects us. 

Failure to produce the written law mandating this forced vaccination, within three days of this notice, shall be taken to be unwarranted coercion and workplace harassment for which substantial compensation may be due...”

Unlike the recent Miroch dispute, Jenkins does not appear to have sought compensation in the form of bars of silver.

18 November 2022

Algorithmics

'The flaws of policies requiring human oversight of government algorithms' by Ben Green in (2022) 45 Computer Law & Security Review comments 

 As algorithms become an influential component of government decision-making around the world, policymakers have debated how governments can attain the benefits of algorithms while preventing the harms of algorithms. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. Despite the widespread turn to human oversight, these policies rest on an uninterrogated assumption: that people are able to effectively oversee algorithmic decision-making. In this article, I survey 41 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a shift from human oversight to institutional oversight as the central mechanism for regulating government algorithms. This institutional approach operates in two stages. First, agencies must justify that it is appropriate to incorporate an algorithm into decision-making and that any proposed forms of human oversight are supported by empirical evidence. Second, these justifications must receive democratic review and approval before the agency can adopt the algorithm. 

 Green argues 

In recent years, governments across the world have turned to automated decision-making systems, often described as algorithms, to make or inform consequential decisions (Calo & Citron, 2021; Eubanks, 2018; Green, 2019; Henley & Booth, 2020). These developments have raised significant debate about when and how governments should adopt algorithms. On the one hand, algorithms bring the promise of making decisions more accurately, fairly, and consistently than public servants (Kleinberg et al., 2018; Kleinberg et al., 2015). On the other hand, the use of algorithms by governments has been a source of numerous injustices (Angwin et al., 2016; Calo & Citron, 2021; Eubanks, 2018; Green, 2019; Henley & Booth, 2020). The algorithms used in practice tend to be rife with errors and biases, leading to decisions that are based on incorrect information and that exacerbate inequities. Furthermore, making decisions via the rigid, rule-based logic of algorithms violates the principle that government decisions should respond to the circumstances of individual people. 

In the face of these competing hopes and fears about algorithmic decision-making, policymakers have explored regulatory approaches that could enable governments to attain the benefits of algorithms while avoiding the risks of algorithms. An emerging centerpiece of this global regulatory effort is to require human oversight of the decisions rendered by algorithms. Human oversight policies enable governments to use algorithms—but only if a human has some form of oversight or control over the final decision. In other words, algorithms may assist human decision-makers but may not make final judgments on their own. These policies are intended to ensure that a human plays a role of quality control, protecting against mistaken or biased algorithmic predictions. These policies aim to protect human rights and dignity by keeping a “human in the loop” of automated decision-making (Jones, 2017; Wagner, 2019). In theory, adopting algorithms while ensuring human oversight could enable governments to obtain the best of both worlds: the accuracy, objectivity, and consistency of algorithmic decision-making paired with the individualized and contextual discretion of human decision-making. 
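The "human in the loop" requirement these policies impose can be pictured as a simple gating pattern: the algorithm recommends, a person decides. The Python sketch below is illustrative only and does not come from Green's article; names such as Recommendation and human_review are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    subject_id: str
    risk_score: float  # output of the algorithmic model (placeholder value)
    rationale: str     # features the model relied on

def algorithmic_recommendation(subject_id: str) -> Recommendation:
    # Stand-in for a real model; the score and rationale are invented.
    return Recommendation(subject_id, risk_score=0.72,
                          rationale="prior contacts; missed appointments")

def human_review(rec: Recommendation) -> bool:
    # The policy-mandated step: a person must confirm or override.
    # Green's point is that this step is where over-reliance on the
    # algorithmic score tends to creep in.
    answer = input(f"Approve adverse action for {rec.subject_id} "
                   f"(score {rec.risk_score:.2f})? [y/n] ")
    return answer.strip().lower() == "y"

def decide(subject_id: str) -> str:
    rec = algorithmic_recommendation(subject_id)
    # The algorithm assists, but no final decision issues without
    # explicit human sign-off: the "human in the loop".
    return "adverse action" if human_review(rec) else "no action"
```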

Recent legislation assumes that human oversight protects against the harms of government algorithms. For instance, in its proposed Artificial Intelligence Act, the European Commission asserted that human oversight (along with other mechanisms) is “strictly necessary to mitigate the risks to fundamental rights and safety posed by AI” (European Commission, 2021). Following this logic, many policies position human oversight as a distinguishing factor that makes government use of algorithms permissible. The European Union's General Data Protection Regulation (GDPR) restricts significant decisions “based solely on automated processing” (European Parliament & Council of the European Union, 2016b). The Government of Canada requires federal agencies using high-risk AI systems to ensure that there is human intervention during the decision-making process and that a human makes the final decisions (Government of Canada, 2021). Washington State allows state and local government agencies to use facial recognition in certain instances, but only if high-impact decisions “are subject to meaningful human review” (Washington State Legislature, 2020). 

Despite the emphasis that legislators have placed on human oversight as a mechanism to mitigate the risks of government algorithms, the functional quality of these policies has not been thoroughly interrogated. Policymakers calling for human oversight invoke values such as human rights and dignity as a motivation for these policies, but rarely reference empirical evidence demonstrating that human oversight actually advances those values. In fact, when policies and policy guidance do reference empirical evidence about human-algorithm interactions, they usually express reservations about the limits of human oversight, particularly related to people over-relying on algorithmic advice (Engstrom et al., 2020; European Commission, 2021; UK Information Commissioner's Office, 2020). 

This lack of empirical grounding raises an important question: does human oversight provide reliable protection against algorithmic harms? Although inserting a “human in the loop” may appear to satisfy legal and philosophical principles, research into sociotechnical systems demonstrates that people and technologies often do not interact as expected (Suchman et al., 1999). Hybrid systems that require collaboration between humans and automated technologies are notoriously difficult to design, implement, and regulate effectively (Bainbridge, 1983; Gray & Suri, 2019; Jones, 2015; Pasquale, 2020; Perrow, 1999). Thus, given that human oversight is being enacted into policies across the world as a central safeguard against the risks of government algorithms, it is vital to ensure that human oversight actually provides the desired protections. If people cannot oversee algorithms as intended, human oversight policies would have the perverse effect of alleviating scrutiny of government algorithms without actually addressing the underlying concerns. 

This article interrogates the efficacy and impacts of human oversight policies. It proceeds in four parts. The first two parts lay out the context of my analysis. Section 2 provides background on the tensions and challenges raised by the use of algorithms in government decision-making. Section 3 describes the current landscape of human oversight policies. I survey 41 policy documents from across the world that provide some form of official mandate or guidance regarding human oversight of public sector algorithms. I find that these policies prescribe three approaches to human oversight. 

Section 4 evaluates the three forms of human oversight described in Section 3. Drawing on empirical evidence about how people interact with algorithms, I find that human oversight policies suffer from two significant flaws. First, human oversight policies are not supported by empirical evidence: the vast majority of research suggests that people cannot reliably perform any of the desired oversight functions. This first flaw leads to a second flaw: human oversight policies legitimize the use of flawed and unaccountable algorithms in government. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies create a regulatory loophole: they provide a false sense of security in adopting algorithms and enable vendors and agencies to foist accountability for algorithmic harms onto lower-level human operators. 

Section 5 suggests how to adapt regulation of government algorithms in light of the two flaws to human oversight policies described in Section 4. It is clear that policymakers must stop relying on human oversight as a remedy for the potential harms of algorithms. However, the correct response is not to simply abandon human oversight, leaving governments to depend on autonomous algorithmic judgments. Nor should regulators prohibit governments from ever using algorithms. Instead, legislators must develop alternative governance approaches that more rigorously address the concerns which motivate the (misguided) turn to human oversight. 

I propose a shift from human oversight to institutional oversight for regulating government algorithms. This involves a two-stage process. First, rather than assume that human oversight can address fundamental concerns about algorithmic decision-making, agencies must provide a written justification of their decision to adopt an algorithm in high-stakes decisions. If an algorithm violates fundamental rights, is badly suited to a decision-making process, or is untrustworthy, then governments should not use the algorithm, even with human oversight. As part of this justification, agencies should provide evidence that any proposed forms of human oversight are supported by empirical evidence. If there is not sufficient evidence demonstrating that human oversight is effective and that the algorithm improves human decision-making, then governments should not incorporate the algorithm into human decision-making processes. Second, agencies must make these written justifications publicly available. An agency is not allowed to use the algorithm until its report receives public review and approval. 
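For readers who find the sequencing easier to follow in code, the two-stage process can be pictured as a small state machine. The Python sketch below is purely illustrative and is not drawn from Green's article; the class, stage and method names are invented for this note.

```python
from enum import Enum, auto

class Stage(Enum):
    JUSTIFICATION = auto()  # stage 1: written justification being drafted
    REVIEW = auto()         # stage 2: public/democratic review under way
    APPROVED = auto()
    REJECTED = auto()

class AlgorithmProposal:
    """Hypothetical record of a proposal moving through the two stages."""

    def __init__(self, name: str):
        self.name = name
        self.stage = Stage.JUSTIFICATION
        self.justification = None

    def submit_justification(self, text: str) -> None:
        # Stage 1: the agency justifies adoption and cites empirical
        # evidence for any claimed forms of human oversight.
        self.justification = text
        self.stage = Stage.REVIEW

    def record_review(self, approved: bool) -> None:
        # Stage 2: democratic review and approval gate deployment.
        if self.stage is not Stage.REVIEW:
            raise RuntimeError("a justification must precede review")
        self.stage = Stage.APPROVED if approved else Stage.REJECTED

    def may_deploy(self) -> bool:
        # Deployment is permitted only after approval at stage 2.
        return self.stage is Stage.APPROVED
```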

Compared to the status quo of blanket rules that enable governments to use algorithms as long as a human provides oversight, this institutional oversight approach will help to prevent human oversight from operating as a superficial salve for the injustices associated with algorithmic decision-making.

17 November 2022

Marks and Judicial Signifiers

'Albrecht Dürer's Enforcement Actions: A Trademark Origin Story' by Peter Karol in 25 Vanderbilt Journal of Entertainment and Technology Law (forthcoming) comments 

This article offers a reappraisal of a pair of remarkably contemporary enforcement actions brought by the Northern Renaissance artist Albrecht Dürer (1471-1528) against copyists of his work. These cases have long been debated by art, cultural and copyright historians insofar as they appear to reject Dürer’s demand for proto-copyright protection for his prints. But surprisingly little attention has been paid by trademark scholars to the companion holdings—in the same cases—that affirm Dürer’s right to prevent use of his monogram on unauthorized reproductions. 

This article seeks to fill that gap by analyzing Dürer’s cases through the lens of contemporary trademark theory. It argues that, properly contextualized and understood, these cases provide the first complete record we have of tribunals enjoining the unsanctioned use of a famous mark in commerce both to protect consumers from purchasing mislabeled goods and preserve the source-associative power of that sign. In so doing, they show us a path towards recentralizing the role of artists and authors as a core aspect of trademark law’s otherwise industrial legal history.

In Choi v Secretary, Department of Communities and Justice [2022] NSWCA 170 - one of many judgments and tribunal decisions involving Ms Choi - the Court considered claims regarding denial of natural justice, defective procedure, conspiracy and invalidity, including a supposed requirement that judges wear wigs.

The judgment states -

Appeal Ground 2 – Natural justice was not offered 

129. Ms Choi has raised various complaints as to the events on 21 July 2021: in summary, that the primary judge did not robe and did not wear a wig (which she contends was a breach of the Court Attire Policy); that there was a private hearing held in her absence (or that the primary judge had already started the hearing with the respondent in her absence); that the primary judge allowed his associate to email the respondent, without copying Ms Choi into the correspondence during the hearing; that the primary judge was making orders “while being instructed” by a third party (Ms Choi says the third party can only be Ms Kaban, who is alleged to have instructed the primary judge through Bluetooth); that there was a refusal to give audio recording files and written reasons for the orders; and that one ground on which she had consented to determine the matter on the papers conditionally was that his Honour said he had to go on leave and needed to hand the decision down by the end of 2021 but did not hand the judgment down until far later on 9 March 2022; and that the judgment “is not accountable”. As noted, Ms Choi has contended that there was a difference between the official transcript of the oral hearing and her transcript (taken from the unauthorised sound recording). 

130. Ms Choi says that: ... at 10:15 AM, I entered the Virtual courtroom. Two men wearing a jacket with no tie had bee [sic] talking about a tortured history of the proceedings since 2018 and the CourtBook [sic]. The courtroom was filmed in a zoom-in and I could see only an [sic] portrait of Justice Bellew who did not robe with no wig. No opening remark. In short, there was a private hearing in my absence. Also, the Associate emailed the Respondent during the hearing. I requested the Associate to send me that email. However, the Chamber ignored my request. A huge difference between the official Transcript and my Transcript. (the big font and cross-out in the Transcript are different). His Honour was being instructed when giving the oral Judgment. I am curious if one man wearing a brown jacket with no tie was his Honour. 

131. Ms Choi argues that wigs and robes are symbols of tradition and justice; and complains that the conduct of this matter “did not reach the Court’s expectations and cheap enough to reject my request to provide a sound recording and a full-version transcript including reasons for orders”. 

Respondent’s submissions as to the above complaints 

132. The respondent accepts that there was a brief conversation (recorded on the transcript) prior to Ms Choi becoming connected to the virtual courtroom but disputes that this amounts to a denial of natural justice. Insofar as Ms Choi raises issues as to the primary judge not being robed or wearing a wig, the respondent says that the customs relating to the wearing of judicial dress in New South Wales probably originated from the Judges’ Rules of 1635 promulgated by the Judges of King’s Bench; and that the Judges’ Rules did not have force of law and so were not received from England as part of the law of New South Wales. The respondent says that there has been no subsequent legislative intervention and the Supreme Court’s Court Attire Policy, insofar as it relates to robes and wigs, is directed at barristers.  … 

134. As to the suggestion that there was a denial of procedural fairness by reference to the fact that the primary judge was not robed or wearing a wig (or the contention that the primary judge was wearing a brown jacket), even apart from the fact that the Court Attire Policy governs the manner in which barristers are to appear before the Court, it cannot seriously be suggested that the wearing of robes or wigs is a requirement of natural justice or that a failure to do so is contrary to the rule of law. There are indeed many Courts in this country in which wigs are not worn in court hearings (including in recent years the High Court).  … 

Respondent’s submissions as to the above complaints 

149. As to the principal reasons articulated by Bellew J for declining to grant an extension of time in which to file the summons seeking leave, the respondent contends that Ms Choi is an experienced litigant (pointing to the fact that she has, since 2017, commenced approximately 30 sets of proceedings against (among others) the NSW Ombudsman, the Legal Aid Commission of NSW, the Commissioner of NSW Police and the University of Technology, Sydney before the Tribunal, in the Supreme Court and in the High Court) (annexing a list of those proceedings to the respondent’s submissions) and reiterates its position that this was simply a proposed re-litigation of matters previously disputed. ... 

213. What is clear, however, is that one looks to the nature and character of the function that is exercised in order to characterise it as either judicial or administrative: it is undoubtedly a matter of substance rather than form (or, as here suggested by Ms Choi, attire). Thus, whether or not the primary judge was wearing a wig says nothing about whether he was exercising judicial power. (Indeed, as may become apparent if Ms Choi pursues her foreshadowed application for special leave to the High Court, the judges of our ultimate appellate court do not wear wigs; but it would surely not be suggested that in hearing appeals and determining litigious controversies in the High Court without wearing wigs their Honours were exercising administrative rather than judicial functions.) Similarly, what colour suit the primary judge may have worn (Ms Choi expressing the opinion that judges wear black) or whether his Honour was wearing a tie, says nothing about the functions there being exercised. ... 

223. The features which make this a clear case for making [a Teoh] order are the quantity of proceedings commenced by Ms Choi, the disproportionality between the number of those proceedings and the matters in issue, the thousands of pages of material which regularly accompanies them (to none of which was this Court taken on the present applications) and the seriousness of the allegations made in circumstances where, if they were made by a legal practitioner, there would be a clear breach of the applicable professional rules. Among other things, Ms Choi has made allegations that: the Supreme Court issued a “fraudulent official transcript” of the hearing before Bellew J on 21 July 2021; there has been manipulation and/or removal by the Registry of part of the White Folder; the primary judge is in contempt; there has been a breach of the Privacy and Personal Information Protection Act 1998 (NSW) by the Registrar who provided assistance to Ms Choi by drafting a notice of change of address for service; and the primary judge has made orders under instructions from a third party. It is clear that Ms Choi has no compunction in making very serious allegations of fraud and corruption against any number of persons involved in the proceedings (without the necessary detail required to make such serious allegations). 

224. Moreover, following the hearing, Ms Choi has continued to inundate chambers with email correspondence making serious allegations against parties to the proceedings and inappropriate requests of the Court and the other parties. This has the inevitable result that Court time has been occupied in dealing with the matters and the respondents’ time and costs have been expended in responding to them. 

225. In addition, there is the absence of any place of address in New South Wales at which Ms Choi may be served, and against which, if necessary, execution can be levied. We do not express a view as to whether there has been conduct which amounts to either or both of a serious contempt of court and a serious breach of the Court Security Act by what has been published on YouTube, but the potential criminality as well as the practical difficulties in enforcing the costs orders that regularly accompany Ms Choi’s unsuccessful applications make the absence of the local address required by UCPR r 4.5 more than a merely technical breach. 

226. In short, all persons enjoy an important right to invoke the jurisdiction of this Court. However, that right comes with concomitant responsibilities, and it must not be thought that this Court is powerless to prevent its processes from being abused. It is appropriate in those circumstances and having regard to case management principles and the overriding purpose mandated by s 56 of the Civil Procedure Act to make such a direction in the present case.