Showing posts with label Standards. Show all posts

08 April 2025

Standards

The draft standards identified in the preceding post regarding the LACC accreditation are –

2. DEFINITIONS AND INTERPRETATION 

2.1 Definitions 

In this document, unless the context requires otherwise –

active learning involves student engagement in critical analysis of the knowledge they acquire, application of that knowledge to factual situations or scenarios, producing solutions supported by legal arguments, and reflection on the process followed. 

Admission Rules means the LACC Model Admission Rules 2015. 

Admitting Authority means the body responsible for all or any of accrediting, monitoring, reviewing and re-accrediting a law course for the purpose of preparing students for admission to the legal profession. 

AQF means the Australian Qualifications Framework. 

assessment method means the manner by which a student’s learning may be tested and evaluated in order to award a grade. Examples of different assessment methods include examinations, research essays, reflective notes and vivas, class participation, mooting and mock trials, oral examinations, problem solving exercises and practical tests, submissions and advice. 

CALD means the Council of Australian Law Deans. 

CALD Standards means the CALD Standards for Australian Law Schools. 

communication means the imparting or exchanging of information by oral, visual or verbal (including written) means. 

delivery mode means the manner by which the content of the law course is communicated for teaching, learning and assessment purposes. Delivery may be fully in-person, fully online, a blended combination including in-person and online, or by other modes to facilitate distance education. 

direct interaction occurs when two or more persons are communicating and engaging with one another in real time and can hear and, where available, see each other. 

EFTSL means Equivalent Full Time Student Load. 

element means – (a) in the case of a law school that follows the topics listed for a prescribed area of knowledge set out in Schedule 1 of the Admission Rules, one of those topics; or (b) in the case of a law school that follows the topics set out in the guidelines provided for a prescribed area of knowledge set out in that Schedule, a topic included in the law school's curriculum for that area of knowledge. 

in-person means where two or more persons are face-to-face in the physical presence of the others whether on campus or at another location. 

invigilation means supervision, whether in-person, online, by technological or other means, or a combination of means, to ensure the academic integrity of the grade awarded to a student by the assessment method. For example, invigilation may be conducted using suitable automated supervision software or by an examiner observing or supervising a student, whether in-person or online. 

LACC means the Law Admissions Consultative Committee. 

law course means a tertiary academic course in law, whether or not it leads to a degree in law. 

law school includes – (a) an academic unit within a university responsible for conducting a law course in Australia that leads to a degree or other qualification in law; or (b) another institution conducting a law course that leads to a qualification in law, other than a university degree in law. 

online means participation in teaching and learning activities, or assessments, in a virtual or online environment that is connected to, served by, or available through the internet or other telecommunications network. An example is synchronous online learning. 

prescribed area of knowledge means an area of knowledge prescribed in Schedule 1 of the Admission Rules, the teaching of which may include statutory interpretation as set out in the LACC Statement on Statutory Interpretation. (Note:  Law Admissions Consultative Committee, Statement on Statutory Interpretation (2009).)

self-accrediting provider means a registered higher education provider that has been authorised under section 45 of the Tertiary Education Quality and Standards Agency Act 2011 (Cth) to self-accredit courses of study that lead to a higher education award that the provider offers or confers. 

synchronous online learning means direct interaction between a student, teacher and/or other students in a virtual or online environment. Examples include attending live-stream lectures (but not listening to a pre-recorded lecture), videoconference calls and interactive online chatroom discussions. 

teaching method means the way in which the law school communicates and teaches the content of the law course to students, which may depend on the delivery mode. Examples include lectures, workshops, seminars, tutorials, flipped classrooms, group discussions, group work, problem solving, moots, role-play, programmed sessions and simulations (but not student preparation or self-directed study). 

TEQSA means the Tertiary Education Quality and Standards Agency.

unit means a subject or unit of study that may be undertaken as part of a law course.

Interpretation 

Headings are for convenience only, and do not affect interpretation. 

The following rules also apply in interpreting this document, except where the context makes it clear that a rule is not intended to apply. 

(a) A reference to – 

(i) a legislative provision or legislation (including subordinate legislation) is to that provision or legislation as amended, re-enacted or replaced, and includes any subordinate legislation issued under it; (ii) a document (including this document) is to that document or provision as amended, supplemented or replaced; (iii) a person includes any type of entity or body of persons, whether or not it is incorporated or has a separate legal identity, and any executor, administrator or successor in law of that person; and (iv) anything (including a right, obligation or concept) includes each part of it. 

(b) A singular word includes the plural and vice versa. 

(c) If a word or phrase is defined, any other grammatical form of that word or phrase has a corresponding meaning. 

(d) If an example is given of anything (including a right, obligation or concept) such as by saying it includes something else, the example does not limit the scope of the thing. 

(e) In deciding whether a student will have acquired or demonstrated appropriate understanding and competence in relation to an element or area of knowledge, as the case requires, an Admitting Authority will have regard to –

(i) the Level 7 criteria specified in the AQF; (ii) the Threshold Learning Outcomes for the Bachelor of Laws/LLB or Juris Doctor/JD as the case requires; and (iii) any other matter that the Admitting Authority considers relevant. 

3. PURPOSES OF THE STANDARDS 

The purposes of these Standards are – 

(a) to assist an Admitting Authority, when accrediting, monitoring, reviewing or re-accrediting a law course, to determine whether that law course –

(i) will provide for a student to acquire and demonstrate appropriate understanding and competence in each element of a prescribed area of knowledge; and (ii) will provide a student with the knowledge and skills to meet the requirements of the LACC Statement on Statutory Interpretation; 

(b) to provide clear, tangible guidance about what evidence is required to satisfy each standard relating to – (i) the delivery of the law course; (ii) the nature of a law course; (iii) the duration of a law course; (iv) the content of a law course; (v) teaching a prescribed area of knowledge; and (vi) assessment of a student's understanding and competence; and 

(c) to provide greater certainty for law schools about the matters which an Admitting Authority will consider relevant when accrediting, monitoring or re-accrediting a law course. 

4. THE STANDARDS 

4.1 The delivery of the law course 

• The law course, or one or more of the units which comprise it, may be delivered fully or partially online. 

(a) Explanatory note 

The law school may select the appropriate delivery mode across teaching, learning and assessments, for one or more units or the whole law course. The Admitting Authority may seek information from the law school about the delivery mode offered. 

4.2 The nature of the law course 

• The law course is a tertiary academic course in law, whether or not it leads to a degree in law. 

(a) Explanatory note 

The law course must be "a coherent sequence of units of study leading to the award of a qualification" in law. This applies when a law course is a single degree and, when a law course is part of a combined or double degree, to the law component of that combined or double degree. The qualification must be a degree or another similar qualification in law, awarded upon successful completion of a tertiary academic course. A law course is "a tertiary academic course … accredited in Australia" for the purposes of these Standards if it is one of the following – (i) provided by a self-accrediting provider on the National Register of Higher Education Providers; (ii) currently accredited by TEQSA as leading to a regulated higher education award; or (iii) conducted by or on behalf of the New South Wales Legal Profession Admission Board. 

(b) How can a law school show that it has met this standard? 

A law school needs to provide the Admitting Authority with evidence that – (i) the law course leads to a degree or similar qualification in law; (ii) the law course comprises a coherent sequence of units of study which form a course designated as a law course; and (iii) the law course is – (A) provided by a self-accrediting provider on the National Register of Higher Education Providers; (B) accredited by TEQSA as a course of study leading to a higher education award; or (C) conducted by or on behalf of the New South Wales Legal Profession Admission Board. 

4.3 The duration of the law course 

• The law course includes the equivalent of at least three years' full-time study of law. 

• Intensive or block delivery should only be used for a prescribed area of knowledge where the law school satisfies the Admitting Authority that it is appropriate in all the circumstances. 

(a) Explanatory note 

The total credit points for the units in the law course must equal or exceed an EFTSL of 3.0. 

The course may be offered in a full-time, part-time or accelerated mode. 

An accelerated mode may include intensives, which are units taught during compressed timeframes outside the usual 12-week semester (i.e. two terms a year) or nine-week trimester (i.e. three terms a year) and might be taught over a winter or summer break, or through block learning models during shorter, but more frequent, terms. The Admitting Authority may seek further information and data from the law school, for example, in relation to student attendance requirements and whether the intensive or block delivery would enable students to acquire the appropriate level of understanding and competence in the prescribed area(s) of knowledge and statutory interpretation. 

The LACC Statement on Duration of Legal Studies provides that the requirement for at least three years’ full-time study refers to three calendar years and that – A law course that can be completed in fewer than three years may be accredited … if the relevant law school satisfies the Admitting Authority that the course is, indeed, the equivalent of a three calendar year full-time course undertaken at the relevant law school, in terms of the breadth and depth of its content, the teaching methods to be employed and the assessment criteria and methodology. 

(b) How can a law school show that it has met this standard? 

A law school needs to provide the Admitting Authority with evidence - (i) that the credit points allocated for the law course in total are equal to or exceed those required for an EFTSL of 3.0; and (ii) if the course can be completed in less than three calendar years, that the course is, indeed, the equivalent of a three calendar year full-time course undertaken at the relevant law school, in terms of the breadth and depth of its content, the teaching methods employed, and the applicable assessment criteria and methodology. 

A law school can give the Admitting Authority the same evidence about the duration of the course that it provided for the purpose of recently being reviewed externally or being accredited by either a self-accrediting provider or by TEQSA. If the law school chooses to do this, unless the Admitting Authority determines otherwise, it will need to – (i) show that the recent review or accreditation required the law school to satisfy a similar standard to that required by the Admitting Authority; (ii) set out the relevant standard against which it was recently reviewed or accredited; (iii) set out when the review or accreditation occurred and by whom it was conducted; and (iv) give the Admitting Authority copies of the principal documentary evidence that it provided for the purpose of that review or accreditation. 

4.4 The learning outcomes for the law course

• The statement of learning outcomes for the law course is directed to enabling students to acquire and demonstrate appropriate understanding and competence in the prescribed areas of knowledge and statutory interpretation. 

(a) Explanatory note 

TEQSA requires the specified learning outcomes for each course of study to "encompass discipline-related and generic outcomes, including … knowledge and skills required for employment and further study related to the course of study, including those required to be eligible to seek registration to practise where applicable" (emphasis added). 

(b) How can a law school establish that it has met this standard? 

A law school needs to – (i) set out any relevant learning outcomes for the law course; and (ii) show how achieving each of these outcomes will demonstrate that a student has acquired and demonstrated appropriate understanding and competence in each of the prescribed areas of knowledge. 

4.5 Content of the law course 

• The law course includes teaching or other instruction in each of the specified elements in each of the prescribed areas of knowledge set out in Schedule 1 of the Admission Rules. 

• The law course also meets the requirements of the LACC Statement on Statutory Interpretation. 

(a) Explanatory note 

A prescribed area of knowledge need not be taught in a unit bearing the same name as that used for the area in the Admission Rules. Similarly, the elements of an area of knowledge need not be taught in a single unit; they could be taught in several units. 

An Admitting Authority may consider that the number of hours allocated to teaching a prescribed area of knowledge is relevant when determining whether that area is adequately covered. 

(b) How can a law school show that it has met this standard? 

A law school needs to – (i) describe where each element of each prescribed area of knowledge and statutory interpretation is taught in the law course. This might be done by way of a matrix or by mapping. Evidence could include the course syllabus, unit descriptions or, by way of examples, lecture outlines or reading guides; (ii) estimate the total teaching hours allocated to the teaching of each prescribed area of knowledge, and describe the teaching methods having regard to the delivery modes for each prescribed area of knowledge, indicating the predominant teaching method and delivery mode and the use of other teaching methods and delivery modes; and (iii) show that the total teaching hours equate to at least 36 hours for each prescribed area of knowledge or, if the estimated number of teaching hours for any prescribed area of knowledge is less than 36 hours because of the teaching method used or student research, demonstrate how the learning outcomes will be achieved in that area. 

4.6 Teaching the law course and active learning 

• Each prescribed area of knowledge and any unit or subject relating to statutory interpretation is taught by people qualified to teach that area of knowledge. 

• The law school uses teaching methods which enable each student to acquire the appropriate understanding and competence in each element of every prescribed area of knowledge and statutory interpretation. 

• An Admitting Authority will consider the number of hours provided for active learning and/or direct interaction in a prescribed area of knowledge when considering whether a law course will enable a student to acquire an adequate level of understanding and competence. 

• Each student in the law course has ready access to legal information resources that are sufficient in quantity and quality to enable the student to acquire the appropriate understanding and competence in each element of every prescribed area of knowledge. 

(a) Explanatory note 

The quality of teaching directly affects a student's acquisition of understanding and competence. Three dominant influences upon the quality of teaching are – (i) the qualifications and experience of the teachers; (ii) the teaching methods employed; and (iii) access to legal information resources, particularly library resources. A student needs to acquire both understanding and competence in each element of each prescribed area of knowledge and statutory interpretation. Admitting Authorities consider that this will not occur unless the teaching methods demonstrably require active learning. 

Admitting Authorities consider that direct interaction between students and teachers whether in-person or through synchronous online learning remains the primary reliable means of achieving these results. 

(b) How can a law school show that it has met this standard? 

A law school needs to satisfy the Admitting Authority that - 

(i) teachers in the program – • meet the AQF requirement that a teacher should have a degree one level higher than that of the course in which the person teaches, or • have equivalent experience in practice or teaching (which may be demonstrated by reference, say, to a person's specialist practice, scholarship, or standing in the academic community or legal profession), or • if a teacher does not fully meet either of the preceding criteria, that person's teaching is guided and overseen by other staff who do meet one or more of those criteria. 

(A law school should provide a complete list of teaching staff (continuing, fixed-term and any casual staff employed at the date upon which accreditation or re-accreditation is sought) and their relevant academic qualifications. The Admitting Authority may request further information about the relevant practice or teaching experience of staff who do not have the requisite higher degree.); 

(ii) the methods generally employed in teaching prescribed areas of knowledge across all delivery modes, enable students to acquire appropriate understanding and competence in each element of that area of knowledge and statutory interpretation. (A law school will need to identify and explain any departures from those generally employed methods, in teaching any particular area of knowledge.); and 

(iii) the design of the law course and its program of instruction provides for at least 18 hours of either or both of – (A) face-to-face instruction and active learning; and (B) instruction and learning involving direct interaction between teacher and student, whether in-person or through synchronous online learning, and enables students to acquire and demonstrate appropriate understanding and competence in each element of each prescribed area of knowledge and statutory interpretation. 

(A law school will need to provide evidence of the extent to which the design of the law course and its program of instruction provides for active learning and/or direct interaction in each prescribed area of knowledge and statutory interpretation.); and (iv) the law school enables each student to have ready access to legal information resources, in paper or in electronic form; and (v) those resources are sufficient in quantity and quality to enable each student to acquire appropriate understanding and competence in each element of each prescribed area of knowledge. It would be relevant for an Admitting Authority to know whether the law school’s library has been independently assessed by the CALD Standards Committee and has been independently determined to have met, in this respect, the CALD Standards. 

A law school can give an Admitting Authority the same evidence about teaching each of the prescribed areas of knowledge and statutory interpretation, and about its legal information resources, that it provided for the purpose of recently being reviewed externally or accredited by either a self-accrediting provider or by TEQSA. Unless the Admitting Authority determines otherwise, the law school will need to – (i) show that the recent review or accreditation required the law school to satisfy a similar standard to that required by these Standards; (ii) set out the relevant standard against which it was reviewed or accredited; (iii) set out when the review or accreditation occurred and by whom it was conducted; and (iv) give the Admitting Authority copies of the principal documentary evidence that it provided for the purpose of that review or accreditation. 

4.7 Assessing understanding and competence 

• Assessment requirements verify that a student has – (i) acquired appropriate understanding and competence in every prescribed area of knowledge; and (ii) acquired the relevant knowledge and skills set out in the LACC Statement on Statutory Interpretation. 

• The law course requires a student to achieve at least a pass grade before satisfactorily completing any subject or unit in which a prescribed area of knowledge or statutory interpretation is taught or assessed. 

• An Admitting Authority may consider, for each unit that covers a prescribed area of knowledge and statutory interpretation, the allocation of assessments, the assessment methods and whether a sufficient proportion of assessments is conducted by invigilation, to ensure the law course provides an appropriate level of quality assurance that a student has been awarded a grade that accurately reflects their level of acquired understanding and competence. 

(a) Explanatory note 

An Admitting Authority must be able to rely on a law school’s minimum requirement for completion - a pass grade - as the conclusive indicator that a student has, in fact, acquired an appropriate level of understanding and competence in every element of a prescribed area of knowledge and has acquired the relevant knowledge and skills set out in the LACC's Statement on Statutory Interpretation. 

Invigilation of assessments provides an extra level of quality assurance that the grades awarded to students accurately reflect their level of acquired understanding and competence, particularly in an online learning environment. 

(b)  How can a law school establish that it has met this standard? 

A law school needs to – (i) provide evidence that it requires, and that students are made aware, that all elements of each prescribed area of knowledge and all of the law school's teaching or other instruction in statutory interpretation are assessable; (ii) provide evidence that its assessment methods in each unit in which a prescribed area of knowledge is taught confirm that a student has attained an appropriate understanding and competence in that area; (iii) provide evidence that its assessment methods confirm that a student has achieved all of the outcomes specified in the LACC's Statement on Statutory Interpretation; (iv) provide evidence that at least 50% of assessments for each unit that covers a prescribed area of knowledge and statutory interpretation are conducted by invigilation; (v) if grade descriptors apply to prescribed areas of knowledge, set out the descriptor for a pass grade; and (vi) explain the process it uses to satisfy itself that grades awarded accurately reflect the level of student attainment.

09 October 2024

AI Liability

'U.S. Tort Liability for Large-Scale Artificial Intelligence Damages: A Primer for Developers and Policymakers' (RAND, 2024) by Ketan Ramakrishnan, Gregory Smith and Conor Downey comments 

Leading artificial intelligence (AI) developers and researchers, as well as government officials and policymakers, are investigating the harms that advanced AI systems might cause. In this report, the authors describe the basic features of U.S. tort law and analyze their significance for the liability of AI developers whose models inflict, or are used to inflict, large-scale harm. 

Highly capable AI systems are a growing presence in widely used consumer products, industrial and military enterprise, and critical societal infrastructure. Such systems may soon become a significant presence in tort cases as well—especially if their ability to engage in autonomous or semi-autonomous behavior, or their potential for harmful misuse, grows over the coming years. 

The authors find that AI developers face considerable liability exposure under U.S. tort law for harms caused by their models, particularly if those models are developed or released without utilizing rigorous safety procedures and industry-leading safety practices. 

At the same time, however, developers can mitigate their exposure by taking rigorous precautions and heightened care in developing, storing, and releasing advanced AI systems. By taking due care, developers can reduce the risk that their activities will cause harm to other people and reduce the risk that they will be held liable if their activities do cause such harm. 

The report is intended to be useful to AI developers, policymakers, and other nonlegal audiences who wish to understand the liability exposure that AI development may entail and how this exposure might be mitigated.

Key Findings are – 

• Tort law is a significant source of legal risk for developers that do not take adequate precautions to guard against causing harm when developing, storing, testing, or deploying advanced AI systems. 

Under existing tort law, there is a general duty to take reasonable care not to cause harm to the person or property of others. This duty applies by default to AI developers—even if targeted liability rules to govern AI development are never enacted by legislatures or regulatory agencies. In Chapter 3, we discuss the requirements of the duty to take reasonable care, and how AI developers might comply (or fail to comply) with these requirements. 

• There is substantial uncertainty, in important respects, about how existing tort doctrine will be applied to AI development. 

Jurisdictional variation and uncertainty about how legal standards will be interpreted and applied may generate substantial liability risk and costly legal battles for AI developers. Courts in different states may reach different conclusions on important issues of tort doctrine, especially in novel fact situations. Tort law varies significantly across both domestic and international jurisdictions. In the United States, each state has a different body of tort law, which coexists alongside federal tort law. Which state’s tort law applies to a dispute depends on complex choice-of-law rules, which in turn depend on the location of the tortious harm at issue (among other things). Moreover, tort decisions often depend on highly context-specific applications of broad legal standards (such as the negligence standard of “reasonable care”) by lay juries. As a result, tort liability can be difficult to predict, particularly with respect to emergent technologies that pose novel legal questions. In the wake of large-scale harms with effects spread across many states, AI developers may face many costly suits across multiple jurisdictions, each with potentially different liability rules. The tort liability incurred by irresponsible AI development may be sufficiently onerous, in the case of sufficiently large-scale damage, to render an AI developer insolvent or force it to declare bankruptcy. Given the cost and risk of litigating a plausible tort suit, moreover, there will often be strong financial incentive for an AI developer (or its liability insurer) to agree to a costly settlement before a verdict is reached. 

• AI developers that do not employ industry-leading safety practices, such as rigorous red-teaming and safety testing or the installation of robust safeguards against misuse, among others, may substantially increase their liability exposure. 

Tort law gives significant credit to industry custom, standards, and practice when determining whether an agent has acted negligently (and is thus liable for the harms it has caused). If most or many industry actors take a certain sort of precaution, this fact will typically be regarded as strong evidence that failing to take this precaution is negligent. Developers who forgo common safety practices in the AI industry, without instituting comparably rigorous safety practices in their stead, may thus increase the likelihood that they will be found negligent should their models cause or enable harm. Therefore, AI developers may wish to consider employing state-of-the-art safety procedures by, for instance, evaluating models for dangerous capabilities, fine-tuning models to limit unsafe behavior, monitoring and moderating models hosted via an application programming interface (API) for dangerous behavior, investing in strong information security measures for model weights, installing reasonably robust safeguards against misuse in potentially dangerous AI systems, and releasing these systems in ways that minimize the chance that third parties will remove the safeguards installed in them. 

• While developers face significant liability exposure from the risk that third parties will misuse their models, there is considerable uncertainty about how this issue will be treated in the courts, and different states may take markedly different approaches. 

Most American courts today maintain that a defendant will be liable for negligently enabling a third party to cause harm, maliciously or inadvertently, if this possibility was reasonably foreseeable to the defendant. But “foreseeability” is a pliable concept, and in practice some courts will only hold a defendant liable for enabling third-party misbehavior if such behavior was readily or especially foreseeable. The risks of misuse of advanced AI systems are being actively discussed and debated, and several leading AI developers take significant precautions to guard against such risks; these facts will tend to support the determination that such misuse was foreseeable, in the event that it occurs. The fact that many of these risks are of a novel kind, and have not previously materialized, may cut in the opposite direction. In some cases, moreover, courts may decline to hold defendants liable for negligently enabling third parties to cause harm even when the possibility of such misuse is foreseeable. For these reasons, and others, there is a good deal of uncertainty about how liability for third-party misuse will be adjudicated in the courts. It would not be surprising if different states took different positions on this issue, just as different states have taken different positions on liability for enabling the misuse of other dangerous instrumentalities (such as guns). Thus, a careless AI developer could face a series of complex and costly legal battles if its model is misused to inflict harm across many jurisdictions. 

• Safety-focused policymakers, developers, and advocates can strengthen AI developers’ incentives to employ cutting-edge safety techniques by developing, implementing, and publicizing new safety procedures and by formally promulgating these standards and procedures through industry bodies. 

The popularization and proliferation of safe and secure AI development practices by safety-conscious developers and industry bodies can help set industry standards and “customs” that courts may consider when evaluating the liability of other developers, creating stronger incentives for safe and responsible AI development. 

• Policymakers may wish to clarify or modify liability standards for AI developers and/or develop complementary regulatory standards for AI development. 

Our analysis suggests that there remains significant uncertainty as to how existing liability law will be applied if harms are caused by advanced AI systems. This uncertainty could conceivably lead some developers to be too cautious, while pushing other developers to neglect the liability risks associated with unsafe development. Clarifying or modifying liability law might thus facilitate responsible innovation and increase the tort system’s ability to incentivize safe behavior. Legislation might also help to remedy the inherent limitations of the tort liability system. For example, tort liability cannot easily address the fact that certain AI developers might discount serious risks on the basis of idiosyncratic views, or that an AI company’s liability exposure—which is limited by its total assets—might fail to provide it with adequately strong incentives for taking due care. Carefully designed legislation might remedy these shortcomings through the creation of a well-tailored regulatory regime, the clarification or improvement of existing liability law to more clearly identify when a developer or another party is liable for harms, or the establishment of minimum safety requirements for forms of AI development that pose especially significant risks to national security or public welfare.

31 October 2023

Biden AI Executive Order

The Biden Executive Order regarding AI directs the following actions: 

New Standards for AI Safety and Security 

As AI’s capabilities grow, so do its implications for Americans’ safety and security. With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems: 

Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public. 

Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety. 

Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI. 

Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world. 

Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure. 

Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI. 

Protecting Americans’ Privacy 

Without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems. To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids, and directs the following actions: 

Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques—including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data. 

Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies. 

Evaluate how agencies collect and use commercially available information—including information they procure from data brokers—and strengthen privacy guidance for federal agencies to account for AI risks. This work will focus in particular on commercially available information containing personally identifiable data. 

Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems. These guidelines will advance agency efforts to protect Americans’ data. 

Advancing Equity and Civil Rights 

Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. The Biden-Harris Administration has already taken action by publishing the Blueprint for an AI Bill of Rights and issuing an Executive Order directing agencies to combat algorithmic discrimination, while enforcing existing authorities to protect people’s rights and safety. To ensure that AI advances equity and civil rights, the President directs the following additional actions: 

Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination. 

Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI. 

Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis. 

Standing Up for Consumers, Patients, and Students 

AI can bring real benefits to consumers—for example, by making products better, cheaper, and more widely available. But AI also raises the risk of injuring, misleading, or otherwise harming Americans. To protect consumers while ensuring that AI can make Americans better off, the President directs the following actions: 

Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs. The Department of Health and Human Services will also establish a safety program to receive reports of—and act to remedy—harms or unsafe healthcare practices involving AI. 

Shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools. 

Supporting Workers 

AI is changing America’s jobs and workplaces, offering both the promise of improved productivity but also the dangers of increased workplace surveillance, bias, and job displacement. To mitigate these risks, support workers’ ability to bargain collectively, and invest in workforce training and development that is accessible to all, the President directs the following actions: 

Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and best practices will benefit workers by providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize. 

Produce a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI. 

Promoting Innovation and Competition 

America already leads in AI innovation—more AI startups raised first-time capital in the United States last year than in the next seven countries combined. The Executive Order ensures that we continue to lead the way in innovation and competition through the following actions: 

Catalyze AI research across the United States through a pilot of the National AI Research Resource—a tool that will provide AI researchers and students access to key AI resources and data—and expanded grants for AI research in vital areas like healthcare and climate change. 

Promote a fair, open, and competitive AI ecosystem by providing small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities. 

Use existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews. 

Advancing American Leadership Abroad 

AI’s challenges and opportunities are global. The Biden-Harris Administration will continue working with other nations to support safe, secure, and trustworthy deployment and use of AI worldwide. To that end, the President directs the following actions: 

Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI. The State Department, in collaboration with the Commerce Department, will lead an effort to establish robust international frameworks for harnessing AI’s benefits, managing its risks, and ensuring safety. In addition, this week, Vice President Harris will speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak. 

Accelerate development and implementation of vital AI standards with international partners and in standards organizations, ensuring that the technology is safe, secure, trustworthy, and interoperable. 

Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, such as advancing sustainable development and mitigating dangers to critical infrastructure. 

Ensuring Responsible and Effective Government Use of AI 

AI can help government deliver better results for the American people. It can expand agencies’ capacity to regulate, govern, and disburse benefits, and it can cut costs and enhance the security of government systems. However, use of AI can pose risks, such as discrimination and unsafe decisions. To ensure the responsible government deployment of AI and modernize federal AI infrastructure, the President directs the following actions: 

Issue guidance for agencies’ use of AI, including clear standards to protect rights and safety, improve AI procurement, and strengthen AI deployment. 

Help agencies acquire specified AI products and services faster, more cheaply, and more effectively through more rapid and efficient contracting. 

Accelerate the rapid hiring of AI professionals as part of a government-wide AI talent surge led by the Office of Personnel Management, U.S. Digital Service, U.S. Digital Corps, and Presidential Innovation Fellowship. Agencies will provide AI training for employees at all levels in relevant fields.

31 May 2023

Reasonable Security

'Locking Down 'Reasonable' Cybersecurity Duty' by Charlotte Tschider in Yale Law & Policy Review comments 

Following a data breach or other cyberattack, the concept of “reasonable” duty, broadly construed, is essential to a plaintiff’s potential causes of action, such as negligence, negligence per se, breach of contract, breach of fiduciary duty, and any number of statutory claims. The impact of an organization’s discretionary choices, such as whether to take specific security steps for a system, may result in potential risk to an individual, another organization, or the organization itself. Although organizations regularly engage in cybersecurity risk analysis, they may not understand what practices will be considered reasonable in a court of law and are therefore unable to anticipate downstream legal issues. Attorneys are likewise unable to confidently advise their clients on how to best avoid liability. This Article examines, in detail, potential sources for reasonably defining duty, and how organizations and attorneys might consider legal duty through the lens of cybersecurity risk management. 

Specifically, I call for a two-part cybersecurity duty analytic model: static, or objective duty informed by industry practices, and dynamic, or subjective duty informed by situational risk. For some doctrinal areas, this may work primarily as an analytic model, while for others, such as negligence, this could be formalized as a test. By offering a model for analyzing what cybersecurity duty ought to be, organizations can adequately understand how potential legal risk might be evaluated in order to implement practices that protect would-be plaintiffs and avoid liability. Moreover, courts can use this model to determine whether organizations have made decisions that avoid real, foreseeable risk to the plaintiff. Indeed, amidst an increasing frequency and diversity of cyberliability claims, legal analysis informed by actual risk analysis ensures that reasonable, rather than perfect, cybersecurity practices can be developed precedentially over time. 

21 June 2019

AI Standards

Standards Australia has released a very upbeat discussion paper on Strengthening Trust: Hearing Australia’s voice on Artificial Intelligence.

It comments
Artificial Intelligence (AI) is not new, having evolved over time. But it promises to unleash many benefits, ranging from improved mobility, greater job opportunities for some, and more efficient use of resources. Many Australians already know AI through Google Home, Siri and Alexa. They know AI through Google Search, Uber and the algorithms that drive LinkedIn and Facebook. AI, for these reasons, presents economic and social opportunities, but it also presents issues we need to carefully consider and respond to in a manner that engages industry, academia, governments and the broader community. Standards, as an adaptive form of regulation, can play a pivotal role in responding to these issues and accelerate the adoption of trusted AI, not just locally, but globally. 
For a country like Australia, which is a net-importer of such technologies, this is a pivotal consideration. Standards have played a strong and vital role in ICT over recent history, ranging from information security, to data governance and other fundamental factors, such as terminology. We have seen similar developments in relation to the standardisation of AI, with the formation of a joint ISO and IEC Committee in 2017 (JTC 1/SC 42), of which Australia is now a member, through Standards Australia. 
But we need your insights and expertise to make these processes and structures work for industry and the broader Australian community. This is precisely why we want to start this discussion with you. This Discussion Paper presents Australia’s opportunity to shape a standards-based approach to AI, and one that we can channel to shape effective global, and not just local, responses. ... 
Standardisation in the area of AI, through the ISO and IEC, is still in the early stages of development. This presents an opportunity for Australia to work constructively both domestically with Australian stakeholders (through mirror committees) and internationally through the ISO and IEC, to ensure Australia is not just a taker of standards but also a maker of key standards in relation to AI. A recent report similarly argued that, “[i]t is in Australia’s economic interests to continue to work with partners and advocate for a balanced and transparent approach to rule-setting in the development of emerging technology and global digital trade.”  Such a role is envisaged through Australia’s Tech Future, which calls for a global regulatory environment where “[g]lobal rules and standards affecting digital technologies and digital trade support Australia’s interests.” 
Recognising the importance of international standards harmonisation in addressing, managing and regulating new areas of technology, the ISO and IEC Joint Technical Committee 1 (JTC 1) created Subcommittee 42 – Artificial Intelligence (SC 42) in 2017. 
SC 42’s primary objectives are to:
1. Serve as the focus and proponent for JTC 1’s standardisation program on Artificial Intelligence 
2. Provide guidance to JTC 1, IEC, and ISO committees developing Artificial Intelligence applications 
In late 2018, Standards Australia, at the request of stakeholders, formed a mirror committee to JTC 1/SC 42. The role of this mirror committee is essentially to provide an Australian voice and vote on matters concerning JTC 1/SC 42, enabling Australia to play a role in setting global standards concerning AI. It has representation from across the Australian Government, industry and academia. SC 42 currently has nine standards under development, focused variously on terminology, reference architecture and, more recently, trustworthiness. This committee is also driving work on the governance of AI within organisational settings, to ensure the responsible use of AI. 
... Other global standards and principles-based approaches: other standards-setting bodies, such as the International Telecommunication Union (ITU) and the Institute of Electrical and Electronics Engineers (IEEE), as well as many of the world’s leading technology companies, are also beginning to develop artificial intelligence technologies and frameworks, creating a complicated global landscape. 
For example, the IEEE has released a number of documents regarding the ethical development of AI through their Global Initiative on Ethics of Autonomous and Intelligent Systems, where they consulted across some areas of industry, academia, and government. The IEEE sets out five core principles to consider in the design and implementation of AI and ethics. These include adherence to existing human rights frameworks, improving human wellbeing, ostensibly to ensure accountable and responsible design, transparent technology and the ability to track misuse. 
More recently, the Organisation for Economic Co-operation and Development (OECD) released their own AI Principles, following extensive consultation.  These principles may be a useful input for developing standards to support AI in Australia, given that technical solutions will be required to ensure such principles are meaningful and have impact. ... 
In addition to the OECD, other international bodies have also developed AI ethics principles and guidelines regarding the development and use of AI:
• April 2019 – the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence 
• May 2019 – the OECD’s Principles on AI were endorsed by 42 countries, including Australia. 
• June 2019 – the G20 adopted human-centred AI Principles that draw from the OECD AI Principles
These nascent, but not necessarily connected, developments illustrate the importance of international standards coordination. This is vital to ensuring that AI products and software are safe and can function effectively across and within countries. Data61’s discussion paper Artificial Intelligence: Australia’s Ethics Framework highlights International Standards coordination, observing “[i]nternational coordination with partners overseas, including the International Standards Organisation (ISO), will be necessary to ensure AI products and software meet the required standards”.   
This is in part because many AI technologies used in Australia are created and developed in overseas markets. In order for Australian stakeholders to be standards makers instead of just standards takers in the area of AI it is important to strengthen our participation through international standards fora.  
The paper concludes -
We are seeking your assistance in addressing the following questions. Noting the definitions of artificial intelligence provided above, and drawing on your own experiences, please do address as many of the following questions as possible:
01 Where do you see the greatest examples, needs and opportunities for the adoption of AI? 
02 How could Australians use or apply AI now and in the future? (for example, at home and at work) 
03 How can Australia best lead on AI and what do you consider Australia’s competitive advantage to be? 
04 To what extent, if at all, should standards play a role in providing a practical solution for the implementation of AI? What do you think the anticipated benefits and costs will be? 
05 If standards are relevant, what should they focus on? a) a national focus based on Australian views (i.e. Australian Standards) b) an international focus where Australians provide input through a voice and a vote (i.e. ISO/IEC standards) c) any other approach 
06 What do you think the focus of these standards should be? a) Technical (interoperability, common terminology, security etc.) b) Management systems (assurance, safety, competency etc.) c) Governance (oversight, accountability etc.) 
07 Does your organisation currently apply any de facto ‘standards’ particular to your industry or sector? 
08 What are the consequences of no action in regards to AI standardisation? 
09 Do you have any further comments?

02 May 2019

UK Internet of Things Safety

The UK Government Department for Digital, Culture, Media and Sport (DCMS) has launched a consultation on 'Secure By Design' regulatory proposals regarding consumer Internet of Things (IoT) security, promoted as ensuring that 'millions of household items that are connected to the internet are better protected from cyber attacks'.

The Government states
Options that the Government will be consulting on include a mandatory new labelling scheme. The label would tell consumers how secure their products such as ‘smart’ TVs, toys and appliances are. The move means that retailers will only be able to sell products with an Internet of Things (IoT) security label.
The consultation focuses on mandating the top three security requirements that are set out in the current ‘Secure by Design’ code of practice. These include that:
  • IoT device passwords must be unique and not resettable to any universal factory setting. 
  • Manufacturers of IoT products provide a public point of contact as part of a vulnerability disclosure policy. 
  • Manufacturers explicitly state the minimum length of time for which the device will receive security updates through an end of life policy.
Following the consultation, the security label will initially be launched as a voluntary scheme to help consumers identify products that have basic security features and those that don’t.
The Consultation Document states
As the technological advances of the 21st century continue to accelerate, consumers are bringing more and more ‘smart’ devices (i.e. consumer IoT products) into their homes, such as smart TVs, internet connected toys, smart speakers and smart washing machines. The Internet of Things (IoT, also known as ‘internet-connected’ or ‘smart’ products) is already being used across a range of industries and it is delivering significant benefits to the lives of its users.
In the future, we expect an ever increasing number of more developed consumer Internet of Things products and services. These devices will be able to anticipate and meet their users’ needs and will be able to tailor information specifically to them across everything from home energy to security. This will offer users the opportunity to live more fulfilling lives; saving time, effort and money.
As with all new technologies, there are risks. Right now, there are a large number of consumer IoT devices sold to consumers that lack even basic cyber security provisions. This situation is untenable. Often these vulnerable devices become the weakest point in an individual’s network, and can undermine a user’s privacy and personal safety. Compromised devices at scale can also pose a risk for the wider economy through distributed denial of service (DDoS) attacks, such as the Mirai botnet attack in October 2016.
The UK Government takes the issue of consumer IoT security very seriously. We recognise the urgent need to move the expectation away from consumers securing their own devices and instead ensure that strong cyber security is built into these products by design.
We have previously stated our preference for an approach whereby industry self-regulates to address these issues, but noted that we would consider regulation where necessary. In October 2018 we published a Code of Practice for IoT Security, alongside accompanying guidance, to help industry implement good security practices for consumer IoT.
Despite providing industry with these tools to help address these issues, we continue to see significant shortcomings in many products on the market.
We recognise that security is an important consideration for consumers. A recent survey of 6,482 consumers has shown that when purchasing a new consumer IoT product, ‘security’ is the third most important information category (higher than privacy or design), and among those who didn’t rank ‘security’ as a top-four consideration, 72% said that they expected security to already be built into devices that were already on the market. It’s clear that there is currently a lack of transparency between what consumers think they are buying and what they are actually buying.
Our ambition is therefore to restore transparency within the market, and to ensure manufacturers are clear and transparent with consumers by sharing important information about the cyber security of a device, meaning users can make more informed purchasing decisions.
Having worked with stakeholders, experts and the National Cyber Security Centre (NCSC), we are now consulting on proposals for new mandatory industry requirements to ensure consumer smart devices adhere to a basic level of security. The proposals set out in this document seek to better protect consumers’ privacy and online security which can be put at risk by insecure devices.
We are mindful of the risk of dampening innovation and applying a strong burden on manufacturers of all shapes and sizes. This is why we have worked to define what baseline security looks like, in line with the ‘top three’ guidelines of the Code of Practice. Our ambition is for the following security requirements to be made mandatory in the UK. These are:

  • All IoT device passwords shall be unique and shall not be resettable to any universal factory default value 

  • The manufacturer shall provide a public point of contact as part of a vulnerability disclosure policy in order that security researchers and others are able to report issues 

  • Manufacturers will explicitly state the minimum length of time for which the product will receive security updates.
Meeting these practical and implementable measures would protect consumers from the most significant risks (such as the Mirai attack in 2016). This would also restore transparency in the sector and allow consumers to identify products that will meet their needs over the lifespan of the product. In addition, mandating vulnerability disclosure policies will enable an effective feedback mechanism to operate, between the security research community and manufacturers.
One of the core aims of the consultation is to listen to feedback on the various implementation options we have developed in partnership with industry and stakeholders. These include the following three options:
● Option A: Mandate retailers to only sell consumer IoT products that have the IoT security label, with manufacturers to self declare and implement a security label on their consumer IoT products 
● Option B: Mandate retailers to only sell consumer IoT products that adhere to the top three guidelines, with the burden on manufacturers to self declare that their consumer IoT products adhere to the top three guidelines of the Code of Practice for IoT Security and the ETSI TS 103 645 
● Option C: Mandate that retailers only sell consumer IoT products with a label that evidences compliance with all 13 guidelines of the Code of Practice, with manufacturers expected to self declare and to ensure that the label is on the appropriate packaging
Later this year, the security label will initially be run on a voluntary basis until regulation comes into force, and the government will make a decision on which measures to take forward into legislation following analysis of the responses received through this consultation. We recognise that any regulation will need to mature over time; additional information on this approach is within the consultation stage impact assessment ‘Mandating security requirements for consumer IoT products’.

02 March 2019

Payments, Innovation and Incentives

'Payment Transactions Under the EU Second Payment Services Directive (PSD2) – An Outsider’s View' by Benjamin Geva in (2018) Texas International Law Journal comments 
In its proposal for a Directive on payment services in the internal market (hereafter: the Proposal), the Commission of the European Communities (“the Commission”) purported to provide for “a harmonised legal framework” designed to create “a Single Payment Market where improved economies of scale and competition would help to reduce cost of the payment system.” Being “complemented by industry’s initiative for a Single Euro Payment Area (SEPA) aimed at integrating national payment infrastructures and payment products for the euro-zone,” the Proposal was designed to “establish a common framework for the Community payments market creating the conditions for integration and rationalisation of national payment systems.” Focusing on electronic payments, and designed to “leave maximum room for self-regulation of industry,” the Proposal purported to “only harmonise what is necessary to overcome legal barriers to a Single Market, avoiding regulating issues which would go beyond this matter.” Stated otherwise, the measure was designed to fall short of providing for a comprehensive payment law.
'Jefferson's Taper' by Jeremy N. Sheff comments 
This Article reports a new discovery concerning the intellectual genealogy of one of American intellectual property law’s most important texts. The text is Thomas Jefferson’s often-cited letter to Isaac McPherson regarding the absence of a natural right of property in inventions, metaphorically illustrated by a “taper” that spreads light from one person to another without diminishing the light at its source. I demonstrate that Thomas Jefferson likely copied this Parable of the Taper from a nearly identical passage in Cicero’s De Officiis, and I show how this borrowing situates Jefferson’s thoughts on intellectual property firmly within a natural law theory that others have cited as inconsistent with Jefferson’s views. I further demonstrate how that natural law theory rests on a pre-Enlightenment Classical Tradition of distributive justice in which distribution of resources is a matter of private judgment guided by a principle of proportionality to the merit of the recipient — a view that is at odds with the post-Enlightenment Modern Tradition of distributive justice as a collective social obligation that proceeds from an initial assumption of human equality. Jefferson’s lifetime correlates with the historical pivot in the intellectual history of the West from the Classical Tradition to the Modern Tradition, but modern readings of the Parable of the Taper, being grounded in the Modern Tradition, ignore this historical context. Such readings cast Jefferson as a proto-utilitarian at odds with his Lockean contemporaries, who supposedly recognized property as a pre-political right. I argue that, to the contrary, Jefferson’s Taper should be read from the viewpoint of the Classical Tradition, in which case it not only fits comfortably within a natural law framework, but points the way toward a novel natural-law-based argument that inventors and other knowledge-creators actually have moral duties to share their knowledge with their fellow human beings.
'Unfair Disruption' (Stanford Law and Economics Olin Working Paper No. 532) by Mark A. Lemley and Mark P. McKenna comments
New technologies disrupt existing industries. They always have, and they probably always will. Incumbents don’t like their industries to be disrupted. And they often rely on intellectual property (IP), unfair competition, or related legal doctrines as tools to prevent disruptive entry. What that means is that many of the cases in these areas are really about whether competition from new players can force incumbents to change their business models, generally to the advantage of particular players and the detriment of others. These cases are, in an important sense, all unfair competition cases; they are about the ways in which the law permits new entrants to compete with incumbents.
Unfortunately, we lack any comprehensive way of thinking about market disruption in these settings. As a result, courts react quite differently to disruptive technology or business models in different cases. As one example, consider intellectual property (IP) cases brought against new technologies. Sometimes courts find the disruptive technology to infringe existing IP rights. New technology might fit within the legal definition of a prior invention, appropriately construed. Sometimes the technology might not itself infringe any prior invention, but makes it easier for third parties to infringe IP rights and is deemed illegal for that reason.
Other areas of law reflect similarly mixed feelings about market disruption. Business tort claims like unjust enrichment—and even nominally procompetitive laws like antitrust—are often asserted by companies with a vested interest in restricting a competitor’s new technology. We have seen similar variability in antitrust, unfair competition, and business tort cases. Antitrust and unfair competition cases are brought against incumbents that try to prevent competition, but they are also brought by incumbents upset that their markets are being disrupted. Whether those laws encourage or inhibit market disruption depends critically on what kinds of competition courts deem “unfair.”
Our goal in this paper is to address the broader question of when competition by market disruption is “unfair.” In our view, courts are often overly receptive to market disruption arguments because they tend to be concerned about upsetting the status quo and affecting the settled expectations of market players, particularly when presented with arguments that some new technology will radically alter the industry.
Courts should intervene to prevent market disruption only when they have very good reasons—reasons connected to the fundamental policy concerns of the legal systems called upon to prevent the disruption. To achieve that goal, we must know what the legitimate ends of the asserted law are. Sometimes the legal doctrine used to prevent market disruption is one like unjust enrichment, interference with economic advantage, or unfair competition that doesn’t have a clear animating principle. We think those doctrines should be disfavored, and courts should employ them only when they are tied to some independent metric for deciding whether the defendant’s conduct is unfair or unjust. Other doctrines, like antitrust and IP, have clearer purposes. There, we can evaluate legal challenges to market disruption by testing the fit between the goals of the statute and its use in a particular case.
Courts in many types of cases have recognized this problem and begun to develop tools for dealing with them. But IP law has lagged behind, rarely even recognizing that what seem to be cases of infringement are really challenges to market disruption. We suggest a test that helps separate legitimate cases of IP infringement from cases of pure market disruption. Drawn from the antitrust injury doctrine, our test would treat market disruption as relevant to an IP case only if the disruption is traceable to the act of infringement itself. If the plaintiff would suffer the same injury from a market intervention that is not infringing, that injury cannot be evidence of IP infringement.

15 January 2019

Building Regulation

Claddings that help your multi-storey building go up like a Roman candle? The Senate Economics Committee report on Non-conforming building products: the need for a coherent and robust regulatory regime last month comments
Confidence in the materials we use to build our domestic, commercial and public buildings is of paramount importance to all. Australians have a right to feel secure and safe in their built environment. As such, safety has always been a key motivator in the design and implementation of modern building regulations and construction codes. Often it is impossible for consumers and end users of building products to know whether a product is fit-for-purpose; trust is placed in those with the appropriate technical knowledge to ensure Australians are protected when they purchase or use building products, or that the appropriate product has been used in the place where they may work or live. 
Recent failures, such as the importation of asbestos-containing building products and the 2014 Lacrosse apartment building fire in Melbourne's Docklands, have highlighted the need for continued vigilance over building materials used in Australia. This is to ensure that building products, and building practices in general, conform with the relevant building regulations and standards to guarantee public safety, along with building integrity and investment confidence in Australian building and construction. 
Non-conforming building products in Australia 
This inquiry into non-conforming building products in Australia was brought about following a number of industry-led forums that highlighted the growing body of evidence of the use of non-conforming building materials in the Australian construction industry. The inquiry has examined a range of issues surrounding the production, sourcing and use of non-conforming and non-compliant building products. 
A non-conforming product or material is one that claims to be something it is not, and does not meet the required Australian standard for the material—for example, the use of inferior grade material, or a product that contains illegal materials such as asbestos. A non-compliant building product is one that has been used in a situation where its use does not comply with the requirements for such a material under the National Construction Code (NCC). 
As the inquiry's terms of reference detail, significant issues were raised by stakeholders regarding the impact of non-conforming products in industry supply chains (including the importers of products and the manufacturers and fabricators of products), workplace safety and the variety of risks and costs that could be passed on to Australian customers. Alongside these issues, the committee took evidence relating to the use of non-compliant building materials. The inquiry also considered and examined the effectiveness of the current Australian building regulatory frameworks that are designed to ensure that building products conform to, and have been used or installed in compliance with, the relevant Australian Standards. 
Inquiry's interim reports 
Through the course of the inquiry, the committee has tabled three interim reports in relation to the issues raised by submitters and at public hearings as outlined in Chapter 1. 
The interim reports were:
Interim report: Safety—'not a matter of good luck'—4 May 2016; 
Interim report: aluminium composite cladding—6 September 2017 [noted here]; and 
Interim report: protecting Australians from the threat of asbestos— 22 November 2017. 
The first interim report, in May 2016, raised a range of concerns, including the illegal importation of building products containing asbestos; the 2014 Lacrosse apartment fire in Melbourne and the use of non-compliant aluminium composite cladding; and the national recall of Infinity electric cable. The committee found that there had been a serious breakdown in the regulation and oversight of both non-conforming and non-compliant building products. In particular, the committee highlighted the weakness in the regulatory regime, including the certification process and the disjointed regulation of the use of building products, both manufactured in Australia and overseas. Based on the findings in the first interim report, the committee made one recommendation, which was to continue the inquiry. 
In September 2017, the committee tabled its second interim report—Interim report: aluminium composite cladding. This report focused on the issues raised around the use of polyethylene (PE) core Aluminium Composite Panels (ACPs) that had significantly contributed to the Lacrosse fire in Melbourne in 2014 and the tragic Grenfell Tower fire in London in 2017. The report found that deregulation and privatisation of building certification processes and the absence of proper regulatory controls, coupled with the increase in ACP product importation, led to the proliferation and installation of non-compliant building products. Importantly, the report was also critical of the lack of any timely government response to the Lacrosse fire, as well as any meaningful resolution between governments, the Building Ministers' Forum, and the Senior Officers' Group on possible steps forward in dealing with the proliferation of ACP panels. The committee's report put forward eight recommendations to address the importation and use of ACP panels and strengthen the regulatory system, including recommending banning the importation of ACP panels and a national licensing scheme for all trades and professionals (See Appendix 3 for list of recommendations). 
In November 2017, the committee tabled its third interim report, titled Interim report: protecting Australians from the threat of asbestos. Like its predecessor, this report concentrated on one topic, the illegal importation of asbestos. This report made 26 recommendations addressing how best to combat the intentional and unintentional importation of asbestos in building and other materials, including complete machinery (See Appendix 4 for list of recommendations). 
Final inquiry report 
This final report outlines many of the common issues across the prior three reports. It also supports the compliance concerns raised in the Building Ministers' Forum report, Building Confidence—Improving the effectiveness of compliance and enforcement systems for the building and construction industry across Australia, prepared by Professor Peter Shergold and Ms Bronwyn Weir, and draws attention to the progress being made in dealing with non-conforming products in some jurisdictions. Specifically, the committee was encouraged by the proactive work undertaken by the Queensland Government in its new legislation designed to strengthen the chain of responsibility for the importation and distribution of building materials. As such, Recommendation 6 of this report suggests that other jurisdictions also move to implement similar legislation to ensure responsibility and accountability are spread more evenly across supply chains. 
Recommendation 6 
The committee recommends that the Building Ministers' Forum give further consideration to introducing a nationally consistent approach that increases accountability for participants across the supply chain. Specifically, the committee recommends that other states and territories pass legislation similar to Queensland's Building and Construction Legislation (Non-conforming Building Products—Chain of Responsibility and Other Matters) Amendment Act 2017. 
Where to next? 
By and large, many of the 13 recommendations of this final report echo those recommendations put forward in the previous interim reports. The committee is cognisant that the Building Ministers' Forum is already moving on some of these issues, as highlighted by the Shergold and Weir report. Nevertheless, the committee would encourage both the government and the Building Ministers' Forum to increase the level of momentum in implementing these recommendations and, moreover, those recommendations that have been raised previously. These include expediting mandatory third party certification for high risk products, including a national register of non-compliant products if feasible, and the introduction of a national licensing scheme. A simple change that the committee put forward previously, and one which it strongly believes would assist stakeholders, is to consider making all Australian Standards freely available. All forms of legal requirements should be freely available, where feasible, so that stakeholders can inform themselves adequately of their obligations under the relevant law.
The Committee goes on to state
The recommendations contained in this report are aimed at strengthening accountability and compliance and providing greater information to stakeholders, in turn, allowing stakeholders to make informed choices and ensuring the development of a coherent and robust regulatory regime for building materials in Australia. The committee believes that the areas that would benefit from urgent action by the Building Ministers' Forum include the following recommendations: 1, 3, 5, 6 and 10. 
Recommendation 1  
The committee recommends that the Building Ministers' Forum develop improved consultative mechanisms with industry stakeholders. In addition, the Building Ministers' Forum should amend the terms of reference for the Senior Officers' Group and the Building Regulators Forum to include annual reporting requirements on progress to address non-conforming building products. 
Recommendation 3 
The committee calls on the Building Ministers' Forum to expedite its consideration of a mandatory third-party certification scheme for high-risk building products and a national register for these products. 
Recommendation 5 
The committee recommends that the Building Ministers' Forum, through the Senior Officers' Group, examine international approaches—including the European Union's regulations and processes—for testing of high-risk products prior to import and determine if they can be suitably adapted to benefit and enhance Australian requirements. 
Recommendation 10 
The committee gives in-principle support to Recommendation 12 of the Shergold and Weir Report '[t]hat each jurisdiction establishes a building information database that provides a centralised source of building design and construction documentation' so regulators are better placed to identify where non-compliant building products have been installed. 
The committee has also identified a range of specific recommendations (numbers: 2, 4, 7, 8, 9, 11, 12, and 13) that it believes are best placed for government to progress and, as indicated earlier, a number of these have been proposed in earlier interim reports. 
Recommendation 2 
The committee recommends that the Australian Government develop a confidential reporting mechanism through which industry and other stakeholders can report non-conforming building products. 
Recommendation 4 
The committee recommends that where an importer intends to import goods that have been deemed high-risk, the Australian Government require the importer, prior to the importation of the goods, to conduct sampling and testing by a NATA accredited authority (or a NATA equivalent testing authority in another country that is a signatory to a Mutual Recognition Arrangement). 
Recommendation 7 
The committee recommends that the Australian Government work with state and territory governments to establish a national licensing scheme, with requirements for continued professional development for all building practitioners. 
Recommendation 8  
The committee strongly recommends that the Australian Government consider making all Australian Standards freely available. 
Recommendation 9 
The committee recommends that the Australian Government consult with industry stakeholders to determine the feasibility of developing a national database of conforming and non-conforming products. 
Recommendation 11 
The committee recommends the Australian Government consider imposing a penalties regime for non-compliance with the National Construction Code, such as revocation of accreditation, a ban from tendering for Commonwealth-funded construction work, and substantial financial penalties. 
Recommendation 12 
The committee recommends that the Australian Government consider the merits of requiring manufacturers, importers and suppliers to hold mandatory recall insurance for high-risk building products. 
Recommendation 13 
The committee recommends that the Australian Government review the Customs Act 1901 (and other relevant legislation) to address the challenges of enforcing the existing importation of asbestos offence, with the aim of closing loopholes and improving the capacity of prosecutors to obtain convictions against entities and individuals importing asbestos. This review should include consideration of increasing the threshold required to use 'mistake of fact' as a legal defence.

The committee strongly advocates that the Australian Government and Building Ministers' Forum move quickly to adopt and implement these recommendations to provide greater confidence in building products and to protect all Australians.