08 October 2019

Parody

'Fame, Parody, and Policing in Trademark Law' by Mark A. Lemley in (2019) Michigan State Law Review comments
 Trademark owners regularly overreach. They often threaten or sue people they have no business suing, including satirists, parodists, non-commercial users, and gripe sites. When they do, they often justify their aggressive legal conduct by pointing to the need to protect their trademarks by policing infringement. Courts have in fact indicated at various points that policing is, if not strictly necessary, at least a way to strengthen a mark. But courts have never held that efforts to block speech-related uses are necessary or even helpful in obtaining a strong mark. Several scholars have accordingly argued that overzealous policing is unnecessary, that it has harmful effects, and that we ought to punish it. But trademark owners continue to do it, in part because it is largely (though not completely) costless and because, if there is even a chance that a failure to police will cost you your trademark, you won’t want to take the chance. So trademark law currently not only doesn’t prevent overreaching; it affirmatively encourages it.
In this article, I suggest a way that we can align trademark owner enforcement incentives with good public policy. The presence of unauthorized parodies, satires, and complaint sites involving a mark should be evidence of the fame of the mark, and perhaps even a requirement for status as a famous mark. Taking this approach would be consistent with what we know about how society interacts with trademarks. Famous marks become part of a social conversation in a way that ordinary marks don’t. My approach has empirical support: the best-known brands draw more parodies and criticism sites than non-famous marks, and those parodies don’t interfere with the fame of the mark. And it would give trademark owners an affirmative reason to leave critics, satirists, and parodists alone.

07 October 2019

Infernal Engines

'The Immoral Machine' by John Harris in (2019) Cambridge Quarterly of Healthcare Ethics comments
In a recent paper in Nature entitled ‘The Moral Machine Experiment’, Edmond Awad et al. make a number of breathtakingly reckless assumptions, both about the decision-making capacities of current so-called ‘autonomous vehicles’ and about the nature of morality and the law. Accepting their bizarre premise that the holy grail is to find out how to obtain cognizance of public morality and then program driverless vehicles accordingly, the following are the four steps to the Moral Machinists' argument:
(1) Find out what ‘public morality’ will prefer to see happen; 
(2) On the basis of this discovery, claim both popular acceptance of the preferences and persuade would-be owners and manufacturers that the vehicles are programmed with the best solutions to any survival dilemmas they might face; 
(3) Citizen agreement thus characterized is then presumed to deliver moral license for the chosen preferences; 
(4) This yields ‘permission’ to program vehicles to spare or condemn those outside the vehicles when their deaths will preserve vehicle and occupants.
This paper argues that the Moral Machine Experiment fails dramatically on all four counts.
Harris' critique concerns 'The Moral Machine experiment' by Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon and Iyad Rahwan in (2018) 563 Nature 59–64, in which the authors comment
With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics.
They argue
We are entering an age in which machines are tasked not only to promote well-being and minimize harm, but also to distribute the well-being they create, and the harm they cannot eliminate. Distribution of well-being and harm inevitably creates tradeoffs, whose resolution falls in the moral domain. Think of an autonomous vehicle that is about to crash, and cannot find a trajectory that would save everyone. Should it swerve onto one jaywalking teenager to spare its three elderly passengers? Even in the more common instances in which harm is not inevitable, but just possible, autonomous vehicles will need to decide how to divide up the risk of harm between the different stakeholders on the road. Car manufacturers and policymakers are currently struggling with these moral dilemmas, in large part because they cannot be solved by any simple normative ethical principles such as Asimov’s laws of robotics.
Asimov’s laws were not designed to solve the problem of universal machine ethics, and they were not even designed to let machines distribute harm between humans. They were a narrative device whose goal was to generate good stories, by showcasing how challenging it is to create moral machines with a dozen lines of code. And yet, we do not have the luxury of giving up on creating moral machines. Autonomous vehicles will cruise our roads soon, necessitating agreement on the principles that should apply when, inevitably, life-threatening dilemmas emerge. The frequency at which these dilemmas will emerge is extremely hard to estimate, just as it is extremely hard to estimate the rate at which human drivers find themselves in comparable situations. Human drivers who die in crashes cannot report whether they were faced with a dilemma; and human drivers who survive a crash may not have realized that they were in a dilemma situation. Note, though, that ethical guidelines for autonomous vehicle choices in dilemma situations do not depend on the frequency of these situations. Regardless of how rare these cases are, we need to agree beforehand how they should be solved. 
The key word here is ‘we’. As emphasized by former US president Barack Obama, consensus in this matter is going to be important. Decisions about the ethical principles that will guide autonomous vehicles cannot be left solely to either the engineers or the ethicists. For consumers to switch from traditional human-driven cars to autonomous vehicles, and for the wider public to accept the proliferation of artificial intelligence-driven vehicles on their roads, both groups will need to understand the origins of the ethical principles that are programmed into these vehicles. In other words, even if ethicists were to agree on how autonomous vehicles should solve moral dilemmas, their work would be useless if citizens were to disagree with their solution, and thus opt out of the future that autonomous vehicles promise in lieu of the status quo. Any attempt to devise artificial intelligence ethics must be at least cognizant of public morality. 
Accordingly, we need to gauge social expectations about how autonomous vehicles should solve moral dilemmas. This enterprise, however, is not without challenges. The first challenge comes from the high dimensionality of the problem. In a typical survey, one may test whether people prefer to spare many lives rather than few; or whether people prefer to spare the young rather than the elderly; or whether people prefer to spare pedestrians who cross legally, rather than pedestrians who jaywalk; or yet some other preference, or a simple combination of two or three of these preferences. But combining a dozen such preferences leads to millions of possible scenarios, requiring a sample size that defies any conventional method of data collection. 
The second challenge makes sample size requirements even more daunting: if we are to make progress towards universal machine ethics (or at least to identify the obstacles thereto), we need a fine-grained understanding of how different individuals and countries may differ in their ethical preferences. As a result, data must be collected worldwide, in order to assess demographic and cultural moderators of ethical preferences. 
As a response to these challenges, we designed the Moral Machine, a multilingual online ‘serious game’ for collecting large-scale data on how citizens would want autonomous vehicles to solve moral dilemmas in the context of unavoidable accidents. The Moral Machine attracted worldwide attention, and allowed us to collect 39.61 million decisions from 233 countries, dependencies, or territories (Fig. 1a). In the main interface of the Moral Machine, users are shown unavoidable accident scenarios with two possible outcomes, depending on whether the autonomous vehicle swerves or stays on course (Fig. 1b). They then click on the outcome that they find preferable. Accident scenarios are generated by the Moral Machine following an exploration strategy that focuses on nine factors: sparing humans (versus pets), staying on course (versus swerving), sparing passengers (versus pedestrians), sparing more lives (versus fewer lives), sparing men (versus women), sparing the young (versus the elderly), sparing pedestrians who cross legally (versus jaywalking), sparing the fit (versus the less fit), and sparing those with higher social status (versus lower social status). Additional characters were included in some scenarios (for example, criminals, pregnant women or doctors), who were not linked to any of these nine factors. These characters mostly served to make scenarios less repetitive for the users. After completing a 13-accident session, participants could complete a survey that collected, among other variables, demographic information such as gender, age, income, and education, as well as religious and political attitudes. Participants were geolocated so that their coordinates could be used in a clustering analysis that sought to identify groups of countries or territories with homogeneous vectors of moral preferences. 
Here we report the findings of the Moral Machine experiment, focusing on four levels of analysis, and considering for each level of analysis how the Moral Machine results can trace our path to universal machine ethics. First, what are the relative importances of the nine preferences we explored on the platform, when data are aggregated worldwide? Second, does the intensity of each preference depend on the individual characteristics of respondents? Third, can we identify clusters of countries with homogeneous vectors of moral preferences? And fourth, do cultural and economic variations between countries predict variations in their vectors of moral preferences?
Harris had earlier considered 'Who owns my autonomous vehicle: Ethics and responsibility in artificial and human intelligence' in (2018) 27(4) Cambridge Quarterly of Healthcare Ethics 500–509.

'Law and Technology: Two Modes of Disruption, Three Legal Mindsets and the Big Picture of Regulatory Responsibilities' by Roger Brownsword in (2018) 14 Indian Journal of Law and Technology comments
This article introduces three ideas that are central to understanding the ways in which law and legal thinking are disrupted by emerging technologies and to maintaining a clear focus on the responsibilities of regulators. The first idea is that of a double disruption that technological innovation brings to the law. While the first disruption tells us that the old rules are no longer fit for purpose and need to be revised and renewed, the second tells us that, even if the rules have been changed, regulators might now be able to dispense with the use of rules (the rules are redundant) and rely instead on technological instruments.
The second idea is that the double disruption leads to a three-way legal and regulatory mind-set that is divided between: (i) traditional concerns for coherence in the law; (ii) modern concerns with instrumental effectiveness; and (iii) a continuing concern with instrumental effectiveness and risk management but now focused on the possibility of employing technocratic solutions. The third idea is one of a hierarchy of regulatory responsibilities. Most importantly, regulators have a 'stewardship' responsibility for maintaining the 'commons'; then they have a responsibility to respect the fundamental values of a particular human social community; and, finally, they have a responsibility to seek out an acceptable balance of legitimate interests within their community. 
Such disruptions notwithstanding, it is argued that those who have regulatory responsibilities need to be able to think through the regulatory noise to frame questions in the right way and to respond in ways that are rationally defensible and reasonable. In an age of smart machines and new possibilities for technological fixes, traditional institutional designs might need to be reviewed.
Brownsword states
In a series of articles, I have argued that lawyers need to engage more urgently with the regulatory effects of new technologies; and, while I have argued this in relation to the full spectrum of technological interventions, whether they are 'soft' and 'assistive' or 'hard' and fully 'managerial', my concerns have been primarily with the employment of hard technologies. For, whereas assistive technologies (such as those surveillance and identification technologies that are employed in criminal justice systems) reinforce the prohibitions and requirements that are prescribed by legal rules, full-scale technological management introduces a radically different regulatory approach by redefining the practical options that are available to regulatees. Instead of seeking to channel the conduct of regulatees by prescribing what they 'ought' or 'ought not' to do, regulators focus on controlling what regulatees actually can or cannot do in particular situations. Instead of finding themselves reminded of their legal obligations, regulatees find themselves obliged or 'forced' to act in certain ways.
If lawyers are to get to grips with these new articulations of regulatory power, I have suggested that they frame their inquiries by employing a broad concept of the 'regulatory environment' (one that recognises both normative rule-based and non-normative technology-based regulatory mechanisms); I have identified the 'complexion' of the regulatory environment as an important focus for inquiry (because the use of technological management can compromise the context for the possibility of both autonomous and moral human action); and I have argued that it is imperative that the use of regulatory technologies is authorised in accordance with the ideal of the Rule of Law. 
I have also posed a number of questions about the future of traditional rules of law where 'regulators' (broadly conceived) turn away from rules in favour of technological solutions or where historic regulatory objectives are simply taken care of by automation, as will be the case, for example, when it is the design of autonomous vehicles that takes care of concerns about human health and safety that have hitherto been addressed by legal rules directed at human drivers of vehicles. Hence, if we look ahead, what does the increasing use of technological management signify for traditional rules of criminal law, torts, and contracts? Will these rules be rendered redundant, will they be directed at different human addressees, or will they simply be revised? In short, how are traditional laws disrupted by technological innovation and, in an age of technological management, how are rule-based regulatory strategies disrupted? It is questions of this kind that I want to begin to address in the present article.
Yet, why linger over such questions? After all, the prospect of technological management implies that rules of any kind have a limited future. To the extent that technological management takes on the regulatory roles traditionally performed by legal rules, those rules seem to be redundant; and, to the extent that technological management does not supersede but co-exists with legal rules, while some rules will be redirected, others will need to be refined and revised (imagine, for example, a legal framework that covers both autonomous and driven vehicles sharing the same roads). Accordingly, the short answer to these questions is that the destiny of legal rules is to be found somewhere in the range of redundancy, replacement, redirection, revision and refinement. Precisely which rules are replaced, which refined, which revised and so on, will depend on both technological development and the way in which particular communities respond to the idea that technologies, as much as rules, are available as regulatory instruments - indeed, that legal rules are just one species of regulatory technologies.
This short answer, however, does not do justice to the deeper and distinctive disruptive effects of technological development on both legal rules and the regulatory mind-set. Accordingly, in this article, I want to sketch a back-story that features two overarching ideas: one is the idea of a double technological disruption and the other is the idea of a regulatory mind-set that is divided in three ways. With regard to the first of these overarching ideas, the double disruption has an impact on: (i) the substance of traditional legal rules; and then (ii) on the use (or, rather, non-use) of legal rules as the regulatory modality. With regard to the second overarching idea, the ensuing three-way legal and regulatory mind-set is divided between: (i) traditional concerns for coherence in the law; (ii) modern concerns with instrumental effectiveness (relative to specified regulatory purposes) and particularly with seeking an acceptable balance of the interests in beneficial innovation and management of risk; and (iii) a continuing concern with instrumental effectiveness and risk management but now focused on the possibility of employing technocratic solutions.
If what the first disruption tells us is that the old rules are no longer fit for purpose and need to be revised and renewed, then the second disruption tells us that, even if the rules have been changed, regulators might now be able to dispense with the use of rules (the rules are redundant) and rely instead on technological instruments. Moreover, what the disruptions further tell us is that we can expect to find a plurality of competing mind-sets seeking to guide the regulatory enterprise. However, what none of this tells us is how regulators should engage with these disruptions. When there is pressure on regulators to think like coherentists (focusing on the internal consistency and integrity of legal doctrine), when regulators are expected to think in a way that is sensitive to risk and to make instrumentally effective responses, and when there is now pressure to think beyond rules to technological fixes, what exactly are the responsibilities of, and priorities for, regulators? Without some critical distance and a sense of the bigger picture, how are regulators to plot a rational and reasonable course through a conflicted and confusing regulatory discourse? Although these are large questions, they are ones that I also want to begin to address in this article. 
Accordingly, the shape of the article, which is in four principal Parts, is as follows. We start (in Parts II and III) with some questions about the future of traditional legal rules, the backstory to which is one of a double disruption that technological innovation brings to the law and, in consequence, a three-way re-configuration of the legal and regulatory mind-set. While the double disruption is elaborated in Part II of the article, the three elements of the re-configured legal and regulatory mind-set (namely, the coherentist, regulatory-instrumentalist, and technocratic elements) are elaborated in Part III. Given this re-configuration, we need to think about how regulators should engage with new technologies, whether viewing them as regulatory targets or as regulatory tools; and this invites thoughts about the bigger picture of regulatory responsibilities as well as regulatory roles and institutional competence. Some reflections on the bigger picture are presented in Part IV of the article; and, in Part V, I offer some initial thoughts on the competence of, respectively, the Courts and the Legislature to adopt the appropriate mind-set. While this discussion will not enable us to predict precisely what the future of legal rules will be, it will enable us to appreciate the significance of the disruption to traditional legal mind-sets, to understand the confusing plurality of voices that will be heard in our regulatory discourses, and to have a sense of the priorities for regulators.

NDIS Vetting

The National Disability Insurance Scheme Amendment (Worker Screening Database) Act 2019 (Cth) amends the National Disability Insurance Scheme Act 2013 (Cth) to "establish a database for nationally consistent worker screening, for the purpose of minimising the risk of harm to people with disability from those who work closely with them".

The Act reflects the February 2017 Council of Australian Governments' Disability Reform Council National Disability Insurance Scheme (NDIS) Quality and Safeguarding Framework that features a "nationally recognised approach to worker screening" and the Intergovernmental Agreement on Nationally Consistent Worker Screening for the NDIS, with States and Territories remaining responsible for conducting NDIS worker screening checks, including the application process and risk assessment. Under the Act a centralised database is to be hosted and administered by the NDIS Quality and Safeguards Commissioner, assisted by the NDIS Quality and Safeguards Commission (Commission), to provide and maintain current and accurate information relating to these checks. The database will be accessible to persons or bodies for the purposes of the NDIS. The collection, use and disclosure of information for the purposes of the database will be managed through pre-existing information provisions at Chapter 4 of the 2013 Act.

The total cost of developing and maintaining the database is $13.6 million over the forward estimates, of which the States and Territories are expected to contribute $6.8 million.

 The 2013 Act currently provides for the screening of workers of registered NDIS providers, through the registration requirements in sections 73B, 73E, 73F, 73J and 73T. The National Disability Insurance Scheme (Practice Standards - Worker Screening) Rules 2018 (Worker Screening Rules) provide for worker screening requirements that form part of the NDIS Practice Standards. Compliance with the NDIS Practice Standards (defined at section 73T of the Act) is a condition of registration under section 73F. The Worker Screening Rules require that workers engaged in certain work must have a clearance under the NDIS worker screening legislation of a State or Territory.

The 2019 amendment provides for the Commonwealth Minister to make a determination, via legislative instrument, that a law of a State or Territory is an 'NDIS worker screening law' for the purposes of the definition of that term in section 9. It is intended that, via new section 10B, laws establishing a scheme for the screening of workers in connection with the NDIS can be specified as NDIS worker screening laws as they are made or amended by each State and Territory. A law will only be an NDIS worker screening law once it has been specified in a determination by the Minister under section 10B.

In order to make a determination under new section 10B, the Minister must have the agreement of the State or Territory which has passed the law being specified. The Minister is also limited, in making this determination, to specifying only laws that, to the Minister's satisfaction, establish a scheme for the screening of workers for purposes including the NDIS.

A legislative note at subsection 10B(1) provides that a legislative instrument made under new section 10B is not subject to disallowance by way of subsection 44(1) of the Legislation Act 2003. This recognises that it is undesirable for Parliament to disallow instruments that have been made for the purposes of a multi-jurisdictional body or scheme, as disallowance would affect jurisdictions other than the Commonwealth. If a determination under new section 10B is disallowed, the Commission would be stifled in its ability to perform this function, or be unable to perform it at all.

The power of the Minister to make a determination under new section 10B cannot be delegated.

The Explanatory Memorandum states the database
is intended to be a centralised repository of information about persons who have had decisions made about them, or who have applied to have decisions made about them, under NDIS worker screening law. It is intended to be current and up to date, reflecting an accurate picture of whether a person, in working or seeking to work with people with disability, does or does not pose a risk to such people. 
As a necessary precursor to these purposes, another purpose of the database is to share information for the purposes of the NDIS to ensure that the database is current and accurate. It is intended that information in the database may be shared with the following parties at varying levels of detail:
  • State and Territory worker screening units conducting worker screening checks; 
  • registered NDIS providers and their subcontractors; 
  • the National Disability Insurance Agency and its contractors; 
  • persons and bodies providing services under Chapter 2 of the Act; 
  • NDIS providers who are not registered, and their subcontractors; 
  • self-managed participants and plan nominees; 
  • the NDIS Quality and Safeguards Commission; and 
  • the Department of Social Services.
 Paragraph 181Y(3)(d) provides that additional purposes of the database may be determined in an instrument under subsection 181Y(8).
 Information that may be included on the database includes:
  •  paragraph 181Y(5)(a) provides that the database may contain information about a person who has applied for a worker screening check, and information relating to that application. This may include but is not limited to: personal information about the person, the date their application was made and the State or Territory in which their application was made. 
  •  paragraph 181Y(5)(b) provides that the database may contain information about a person who has applied for a worker screening check whose application is no longer being considered, and the reasons for this. This may include but is not limited to: personal information about the person, the date from which their application is no longer being considered, and information to the effect that the applicant has withdrawn the application or the State or Territory worker screening unit has closed the application. 
  •  paragraph 181Y(5)(c) provides that the database may contain information about a person who has been cleared to work with people with disability, and information relating to a decision to that effect (a clearance decision, however described). It is intended that this paragraph cover all decisions to the effect that a person has been cleared to work with people with disability, however this may be described, under an NDIS worker screening law at any point in time. One such way this may be described, is that the person does not pose a risk in working or seeking to work with people with disability. This may include but is not limited to personal information about the person. Information relating to the decision may include but is not limited to: who made the decision, the date it was made, the place it was made, the reasons for that decision, and the time period during which the decision remains in force. 
  •  paragraph 181Y(5)(d) provides that the database may contain information about any interim decision made under NDIS worker screening law whilst a person's application is still being considered. This may, for example, be a decision that a person is prevented from working with people with disability while their application is pending. It is anticipated that a new decision (including another interim decision) may replace the initial interim decision once the application is determined. 
  •  paragraph 181Y(5)(e) provides that the database may contain information about a person who is prevented from working with people with disability, and information relating to a decision to that effect (an exclusion decision, however described). It is intended that this paragraph cover all decisions to the effect that a person has not been cleared to work with people with disability, however this may be described, within NDIS worker screening law at any point in time. One such way this may be described, is that the person does pose a risk in working or seeking to work with people with disability. This may include but is not limited to personal information about the person. Information relating to the decision may include but is not limited to: who made the decision, the date it was made, the place it was made, the reasons for that decision and the time period during which the decision remains in force. 
  •  paragraph 181Y(5)(f) provides that the database may contain information about how long a clearance decision or an exclusion decision, however described, is in force. 
  •  paragraph 181Y(5)(g) provides that the database may contain information about a person who, although previously cleared to work with people with disability (via a decision described in paragraph 181Y(5)(c)), has now had that decision suspended. The database may also include information about a suspension decision. This may include but is not limited to: who decided to suspend the clearance, the date that decision was made, the place it was made, and the time period during which the suspension remains or remained in force. 
  •  paragraph 181Y(5)(h) provides that the database may contain information about a person who, although previously cleared to work with people with disability (via a decision described in paragraph 181Y(5)(c)), has now had that decision revoked. It also provides that the database may contain information about a person who, although previously prevented from working with people with disability (via a decision described in paragraph 181Y(5)(e)), has now had that decision revoked. The database may also include information about a revocation decision. This may include but is not limited to: who decided to revoke the clearance decision or exclusion decision, the date that decision was made, the place it was made, and the time period during which the revocation remains in force. 
  •  paragraph 181Y(5)(i) provides that the database may contain information about employers or potential employers who may hire persons who have made screening applications. The term 'employers' used in this paragraph is intended to include self-managed participants who may hire their own workers. This may include information about the person's potential, current and previous employers, including contact details, period of employment, a description of the role the person was employed in and the period of time they were in that role. 
  •  paragraph 181Y(5)(j) enables the Minister to determine additional information to be contained within the database by way of legislative instrument under subsection 181Y(8). An example of additional determined content of the database may be a new type of decision contemplated by NDIS worker screening law not already covered by subsection 181Y(5). Flexibility in this area will benefit the overall database as States and Territories are yet to implement their NDIS worker screening laws. Additional content to be determined is necessarily limited by the Commissioner's functions and the provisions relating to the collection, use and disclosure of information under the Act.
The database is expected to include information about pending, current and previous decisions made under State or Territory NDIS worker screening law. 
Subsection 181Y(6) indicates that the range of information that may be contained in the database is intended to be broad and is not limited by the types of information and examples already discussed. The range of information that may be contained in the database is, however, limited to information necessary for the performance of the Commissioner's function in establishing and maintaining the database, and the purposes of the database as outlined in subsection 181Y(3). It will also be limited by the Commissioner's information collection, use and disclosure powers under the Act. 
Examples of personal information which may be contained in the database include information relating to the identity of persons who have made an application or had a decision about them made under an NDIS worker screening law. This may include: name, date of birth, age, place of birth, address, telephone number, email address and other contact details, employment details, education, Government-issued identification numbers and expiry dates, as well as a worker screening number allocated to that person. Examples of sensitive information which may be contained in the database include information relating to disability status, Aboriginal and Torres Strait Islander status and cultural and linguistic diversity status. This sensitive information is to be used for policy development, evaluation and research purposes. It is intended that decision information, as obtained from a State or Territory worker screening unit, will only relate to the outcome or result of the decision. The database will not contain information about a person's criminal history, including convictions and charges, or any other information relied on to support a decision that is made under NDIS worker screening law. It will also not contain information about a person's sexual identity or preferences. 
This information is intended to promote the accuracy, integrity and effectiveness of the database by ensuring that the information about decisions made under NDIS worker screening law relates to the correct person. It will also assist employers in verifying the employment of a person who has made an application and in considering a person's suitability for employment.
 In reference to human rights compatibility, the Explanatory Memorandum states that the amendments promote the rights of persons with disability to be free from exploitation, violence and abuse, consistent with Australia's obligations, by ensuring that the supports and services provided through the NDIS are delivered by a suitable workforce.
The Bill supports a nationally consistent approach to worker screening, with the aim of minimising the risk of harm to people with disability from the people who work closely with them. A nationally consistent and recognised worker screening regime promotes the rights of people with disability by:
  • sending a strong signal to the community as a whole about the priority placed on the rights of people with disability to be safe and protected; 
  • reducing the potential for providers to employ workers who pose an unacceptable risk of harm to people with disability; 
  • prohibiting those persons who have a history of harm against people with disability from having more than incidental contact with people with disability when working for a registered NDIS provider; and 
  • deterring individuals who pose an unacceptable risk of harm from seeking work in the sector.
 This recognises that some NDIS participants are amongst the most vulnerable people in the community and that people with disability have the right to be protected from exploitation, violence and abuse. 
The objective of the Bill is to ensure that there is a national repository of decisions made under State and Territory worker screening laws on whether persons who work, or seek to work, with people with disability pose a risk to such people, and to enable this information to be accessible to all States and Territories, and to employers (including self-managed participants) engaging workers in the NDIS. 
It will ensure that there is visibility of workers' screening outcomes across all States and Territories, and that a worker who has been excluded by one State or Territory is excluded nationally. A national worker screening database eliminates the opportunity for people to make multiple attempts in different jurisdictions at gaining a worker screening clearance. 
The paramount objective of this Bill is to protect people with disability from experiencing harm arising as a result of unsafe supports or services provided under the NDIS. 
Consistent with this objective, worker screening has been introduced for roles with registered NDIS providers that have been identified as requiring particular mitigation of the risk of harm to people with disability. Worker screening obligations are not imposed in relation to other roles; however, it will remain open to employers, including self-managed participants, to require worker screening for any worker that they engage in the delivery of NDIS supports and services. This reflects a targeted, measured approach to the risk. 
Criminal history checks and other forms of pre-employment screening are conducted as a matter of routine for a range of occupations to allow employers to make recruitment decisions which support a safe and secure workplace for workers and people with disability. 
However, governments recognise that some individuals, by virtue of their history, have valuable lived experiences to share with people with disability accessing NDIS supports and services. It is recognised that people with lived experience who have committed an offence or misconduct in the past can make significant changes in their lives. 
The Commission will work with State and Territory governments to put in place a nationally consistent, risk-based decision-making framework for considering a person's criminal history and patterns of behaviour over time to guard against the unreasonable exclusion of people who have committed an offence or misconduct from working in the disability sector, where this is not relevant to their potential future risk to people with disability. 
Under the national policy for NDIS worker screening, States and Territories will provide certain review and appeal rights to individual workers who may be subject to an adverse decision. Individuals can seek a review of an adverse decision, consistent with the principles of natural justice and procedural fairness. Where there is an intention to make an adverse decision, States and Territories will disclose reasons for this (except where NDIS worker screening units are required under Commonwealth, State or Territory law to refuse to disclose the information), allow the individual a reasonable opportunity to be heard, and consider the individual's response before finalising the decision. The Bill supports a proportionate approach to safeguards that does not unduly prevent a person from choosing to work in the NDIS market, but ensures the risk of harm to people with disability is minimised, by excluding workers whose behavioural history indicates they pose a risk in the provision of certain services and supports. 
Article 17 of the ICCPR provides that no one shall be subjected to arbitrary or unlawful interference with their privacy. The right to privacy includes respect for informational privacy, including in respect of storing, using and sharing private information and the right to control the dissemination of private information. For interference with privacy not to be arbitrary, it must be in accordance with the provisions, aims and objectives of the ICCPR and should be reasonable in the particular circumstances. Reasonableness in this context incorporates notions of proportionality to the end sought and necessity in the circumstances. 
The NDIS worker screening database is expected to hold information about the identity of persons who have made an application for an NDIS worker screening check, and any pending, current and previous decisions made by the State or Territory NDIS worker screening unit in relation to that application. This is consistent with the objective of ensuring that information about whether a person who works, or seeks to work, with people with disability poses a risk to such people is current, accurate, and available to all States and Territories, and to employers engaging workers in the NDIS. 
The range of information that may be contained on the database is limited to information necessary for the performance of the Commissioner's functions and for the purposes of the database set out in the Bill. The database will not contain information about a person's criminal history, including convictions and charges and any other information relied on to support a decision that is made under NDIS worker screening law, or information about a person's sexual identity or preferences. 
The range of information that will be shared with persons or bodies will be proportionate and necessary for the objective of minimising the risk of harm to people with disability from the people that work with them. 
States and Territories will have full access to the database as required to effectively implement the national policy, including the ongoing monitoring of people who hold clearances, and the identification of fraudulent or duplicate applications, such as where a person has made multiple attempts to gain a worker screening clearance in a different jurisdiction or under a different name. 
By comparison, employers will have access to a limited subset of information on the database. This is expected to include information about a worker's identity, so that an employer can verify that the person who holds the clearance is the same person that they have engaged or intend to engage. Employers will also have access to information related to whether or not a person they have engaged is cleared to work in certain roles. Employers will not have access to the details of a worker's other employers, or sensitive information relating to a worker's disability status, Aboriginal and Torres Strait Islander status or cultural and linguistic diversity status.
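The tiered access model described above can be sketched, purely illustratively, as a pair of views over a worker record. All field names, values and tier boundaries below are hypothetical, inferred from the Bill's general description; the Bill itself does not specify a schema.

```python
# Hypothetical sketch of the Bill's two access tiers; field names are invented.
SENSITIVE_FIELDS = {"disability_status", "indigenous_status", "cald_status"}
STATE_ONLY_FIELDS = SENSITIVE_FIELDS | {"other_employers", "application_history"}

def state_territory_view(record: dict) -> dict:
    """States and Territories have full access, supporting ongoing monitoring
    and the detection of duplicate or fraudulent applications."""
    return dict(record)

def employer_view(record: dict) -> dict:
    """Employers see only identity and clearance-status fields, never
    sensitive information or details of a worker's other employers."""
    return {k: v for k, v in record.items() if k not in STATE_ONLY_FIELDS}

record = {
    "name": "Example Worker",
    "screening_number": "WS-000000",
    "clearance_status": "cleared",
    "other_employers": ["Provider A"],
    "disability_status": "not disclosed",
    "indigenous_status": "not disclosed",
    "cald_status": "not disclosed",
    "application_history": ["2019-07-01: cleared"],
}

print(sorted(employer_view(record)))
```

The point of the sketch is only the asymmetry: the full record is visible to screening units, while the employer-facing subset is limited to verifying identity and clearance status.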