'Regulation of (generative) AI requires continuous oversight' — an AustLII submission by Graham Greenleaf, Andrew Mowbray and Philip Chung to the Department of Industry, Science and Resources, in response to the Discussion Paper 'Safe and Responsible AI in Australia'
Australia’s Department of Industry, Science and Resources invited interested parties to make submissions on a Discussion Paper (DP) Safe and Responsible AI in Australia. This submission by researchers from the Australasian Legal Information Institute (AustLII) addresses the most important general issues identified in the Discussion Paper and suggests the best strategies to address them.
In the paper, we make the following submissions:
1 The rate of take-up of automated decision-making (ADM) systems in Australia should be examined at the outset of determining an AI policy for Australia, to identify those systems that may pose considerable risk, and kept under regular review.
2 Regulation should not be aimed at ‘AI generally’, but should be aimed at two things: (i) regulation of the use of specific applications of underlying AI technologies; and (ii) regulation, by imposition of conditions on any use of a particular underlying AI technology, and therefore (for practical purposes) of its development. Regulation should not aim to prevent research into the technology or its applications.
3 The definition of ‘AI’ should be altered so that it refers to ‘without explicit programming, or which achieves a similar result by other means’. ‘Hallucinations’ should be dropped, and ‘deceptive and reckless mis-statements’ or ‘fabrications’ used instead.
4 There should be a set of principles to guide Australian regulation. The principles should be based on international consensus, should be as consistent as possible across all Australian jurisdictions, and should be as comprehensive as needed.
5 We recommend ‘Ten guiding principles’, a modification of the ‘Australian principles’. The NSW Framework’s approach to implementation should be taken into account.
6 For Australia to be able to safely and responsibly regulate AI (and particularly generative AI), there needs to be a continuous source of expert advice. It should regularly report to existing regulatory bodies, to government and to the public, updating them on any significant changes to our ability to uphold the principles on which regulation of AI is based, and making proposals concerning changes needed.
7 An ‘Australian Advisory Board on Regulation of AI’ (the AI Board) would have a remit of two to three years, independence so that it could give frank advice, and an obligation to produce six-monthly reports. It should preferably consist of ten members or fewer.
8 Australia should aim to provide inputs to influence international developments where possible. In the short term, the most advanced international source of AI regulation is likely to be the European Union. In the longer term, there may be steps toward a binding international agreement concerning AI (or generative AI at least).
9 A risk-based approach to regulation of AI should be adopted and implemented in Australia by Commonwealth legislation and regulations. Only the most dangerous applications should be brought within the regulatory structure in the first instance.
10 The Commonwealth’s initial ‘AI Framework Act’ should involve at least the following:
a. Create an Australian AI Board.
b. Require assessment by government of the take-up of AI in Australia.
c. Include the Ten Guiding Principles for (Generative) AI we recommend.
d. Implement a risk-based approach to regulation.
e. Make transparency mandatory for all AI applications impacting upon Australian individuals and organisations.