It comments:
Artificial Intelligence (AI) is not new; it has evolved over time. But it promises to unleash many benefits, ranging from improved mobility and greater job opportunities for some to more efficient use of resources. Many Australians already know AI through Google Home, Siri and Alexa. They know AI through Google Search, Uber and the algorithms that drive LinkedIn and Facebook. For these reasons, AI presents economic and social opportunities, but it also presents issues we need to consider carefully and respond to in a manner that engages industry, academia, governments and the broader community. Standards, as an adaptive form of regulation, can play a pivotal role in responding to these issues and accelerating the adoption of trusted AI, not just locally but globally.
For a country like Australia, which is a net importer of such technologies, this is a pivotal consideration. Standards have played a strong and vital role in ICT over recent decades, covering areas ranging from information security to data governance and other fundamentals such as terminology. We have seen similar developments in the standardisation of AI, with the formation of a joint ISO and IEC committee in 2017 (JTC 1/SC 42), of which Australia is now a member through Standards Australia.
But we need your insights and expertise to make these processes and structures work for industry and the broader Australian community. This is precisely why we want to start this discussion with you. This Discussion Paper presents Australia’s opportunity to shape a standards-based approach to AI, and one that we can channel to shape effective global, and not just local, responses. ...
Standardisation in the area of AI, through the ISO and IEC, is still in the early stages of development. This presents an opportunity for Australia to work constructively both domestically with Australian stakeholders (through mirror committees) and internationally through the ISO and IEC, to ensure Australia is not just a taker of standards but also a maker of key standards in relation to AI. A recent report similarly argued that, “[i]t is in Australia’s economic interests to continue to work with partners and advocate for a balanced and transparent approach to rule-setting in the development of emerging technology and global digital trade.” Such a role is envisaged through Australia’s Tech Future, which calls for a global regulatory environment where “[g]lobal rules and standards affecting digital technologies and digital trade support Australia’s interests.”
Recognising the importance of international standards harmonisation in addressing, managing and regulating new areas of technology, the ISO and IEC Joint Technical Committee 1 (JTC 1) created Subcommittee 42 – Artificial Intelligence (SC 42) in 2017.
SC 42’s primary objectives are to:
1. Serve as the focus and proponent for JTC 1’s standardisation program on Artificial Intelligence
2. Provide guidance to JTC 1, IEC, and ISO committees developing Artificial Intelligence applications
In late 2018, Standards Australia, at the request of stakeholders, formed a mirror committee to JTC 1/SC 42. The role of this mirror committee is essentially to provide an Australian voice and vote on matters concerning JTC 1/SC 42, enabling Australia to play a role in setting global standards concerning AI. It has representation from across the Australian Government, industry and academia. SC 42 currently has nine standards under development, focused variously on terminology, reference architecture and, more recently, trustworthiness. This committee is also driving work on the governance of AI within organisational settings, to ensure the responsible use of AI.
...
Other global standards and principles-based approaches
Other standards-setting bodies, such as the International Telecommunication Union (ITU) and the Institute of Electrical and Electronics Engineers (IEEE), as well as many of the world’s leading technology companies, are also beginning to develop artificial intelligence technologies and frameworks, creating a complicated global landscape.
For example, the IEEE has released a number of documents on the ethical development of AI through its Global Initiative on Ethics of Autonomous and Intelligent Systems, which consulted across parts of industry, academia and government. The IEEE sets out five core principles to consider in the ethical design and implementation of AI. These include adherence to existing human rights frameworks, improving human wellbeing, accountable and responsible design, transparent technology and the ability to track misuse.
More recently, the Organisation for Economic Co-operation and Development (OECD) released its own AI Principles, following extensive consultation. These principles may be a useful input for developing standards to support AI in Australia, given that technical solutions will be required to ensure such principles are meaningful and have impact. ...
In addition to the OECD, other international bodies have also developed AI ethics principles and guidelines regarding the development and use of AI:
• April 2019 – the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence
• May 2019 – the OECD’s Principles on AI were endorsed by 42 countries, including Australia
• June 2019 – the G20 adopted human-centred AI Principles that draw from the OECD AI Principles
These nascent, but not necessarily connected, developments illustrate the importance of international standards coordination. This is vital to ensuring that AI products and software are safe and can function effectively across and within countries. Data61’s discussion paper Artificial Intelligence: Australia’s Ethics Framework highlights the importance of international standards coordination, observing that “[i]nternational coordination with partners overseas, including the International Standards Organisation (ISO), will be necessary to ensure AI products and software meet the required standards”.
This is in part because many of the AI technologies used in Australia are created and developed in overseas markets. For Australian stakeholders to be standards makers rather than just standards takers in the area of AI, it is important to strengthen our participation in international standards fora. The paper concludes:
We are seeking your assistance in addressing the following questions. Noting the definitions of artificial intelligence provided above, and drawing on your own experiences, please address as many of the following questions as possible:
01 Where do you see the greatest examples, needs and opportunities for the adoption of AI?
02 How could Australians use or apply AI now and in the future? (for example, at home and at work)
03 How can Australia best lead on AI and what do you consider Australia’s competitive advantage to be?
04 To what extent, if at all, should standards play a role in providing a practical solution for the implementation of AI? What do you think the anticipated benefits and costs will be?
05 If standards are relevant, what should they focus on?
a) a national focus based on Australian views (i.e. Australian Standards)
b) an international focus where Australians provide input through a voice and a vote (i.e. ISO/IEC standards)
c) any other approach
06 What do you think the focus of these standards should be?
a) Technical (interoperability, common terminology, security etc.)
b) Management systems (assurance, safety, competency etc.)
c) Governance (oversight, accountability etc.)
07 Does your organisation currently apply any de facto ‘standards’ particular to your industry or sector?
08 What are the consequences of no action in regard to AI standardisation?
09 Do you have any further comments?