The government discussion paper on the Australian Artificial Intelligence Ethics Framework, with comments due in May this year, states:
The ethics of artificial intelligence are of growing importance. Artificial intelligence (AI) is changing societies and economies around the world. Data61 analysis reveals that over the past few years, 14 countries and international organisations have announced AU$86 billion for AI programs. Some of these technologies are powerful, which means they have considerable potential for both improved ethical outcomes as well as ethical risks. This report identifies key principles and measures that can be used to achieve the best possible results from AI, while keeping the well-being of Australians as the top priority.
Countries worldwide are developing solutions.
Recent advances in AI-enabled technologies have prompted a wave of responses across the globe, as nations attempt to tackle emerging ethical issues (Figure 1). Germany has delved into the ethics of automated vehicles, rolling out the most comprehensive government-led ethical guidance available on their development. New York has put in place an automated decisions task force to review key systems used by government agencies for accountability and fairness. The UK has a number of government advisory bodies, notably the Centre for Data Ethics and Innovation. The European Union has explicitly highlighted ethical AI development as a source of competitive advantage.
An approach based on case studies.
This report examines key issues through exploring a series of case studies and trends that have prompted ethical debate in Australia and worldwide (see Figure 2).
...
Artificial intelligence (AI) holds enormous potential to improve society.
While a “general AI” that replicates human intelligence is seen as an unlikely prospect in the coming few decades, there are numerous “narrow AI” technologies which are already incredibly sophisticated at handling specific tasks [3]. Medical AI technologies and autonomous vehicles are just two high-profile examples of AI with the potential to save lives and transform society.
The benefits come with risks.
Automated decision systems can limit issues associated with human bias, but only if due care is taken with the data those systems use and the ways they assess what is fair or safe. Automated vehicles could save thousands of lives by limiting accidents caused by human error, but as Germany’s Transport Ministry has highlighted in its ethics framework for AVs, they require regulation to ensure safety.
Existing ethics in context, not reinvented.
Philosophers, academics, political leaders and ethicists have spent centuries developing ethical concepts, culminating in the human-rights based framework used in international and Australian law. Australia is a party to seven core human rights agreements which have shaped our laws. An ethics framework for AI is not about rewriting these laws or ethical standards; it is about updating them to ensure that existing laws and ethical principles can be applied in the context of new AI technologies.
Core principles for AI
1. Generates net-benefits. The AI system must generate benefits for people that are greater than the costs.
2. Do no harm. Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes.
3. Regulatory and legal compliance. The AI system must comply with all relevant international, Australian local, state/territory and federal government obligations, regulations and laws.
4. Privacy protection. Any system, including AI systems, must ensure people’s private data is protected and kept confidential, and must prevent data breaches which could cause reputational, psychological, financial, professional or other types of harm.
5. Fairness. The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the “training data” is free from bias or characteristics which may cause the algorithm to behave unfairly.
6. Transparency and Explainability. People must be informed when an algorithm that impacts them is being used, and they should be told what information the algorithm uses to make its decisions.
7. Contestability. When an algorithm impacts a person there must be an efficient process to allow that person to challenge the use or output of the algorithm.
8. Accountability. People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm, even if the impacts are unintended.
Data is at the core of AI.
The recent advances in key AI capabilities such as deep learning have been made possible by vast troves of data. This data has to be collected and used, which means issues related to AI are closely intertwined with those that relate to privacy and data. The nature of the data used also shapes the results of any decision or prediction made by an AI, opening the door to discrimination when inappropriate or inaccurate datasets are used. There are also key requirements of Australia’s Privacy Act which will be difficult to navigate in the AI age.
Predictions about people have added ethical layers.
Around the world, AI is making all kinds of predictions about people, ranging from potential health issues through to the probability that they will end up re-appearing in court. In medicine, this can provide enormous benefits for healthcare. Predicting human behaviour, however, raises challenging philosophical questions on which viewpoints differ widely. There are benefits, to be sure, but also risks of creating self-fulfilling prophecies. Big data is, at its heart, about risk and probabilities, which humans struggle to assess accurately.
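A simple worked example, with hypothetical rates chosen only for illustration, shows why such probabilities are easy to misjudge: when the predicted outcome is rare, even a predictor that is right 90% of the time will flag mostly the wrong people.

```python
# Illustrative arithmetic only: why predictions about people are easy to misread.
# The rates below are hypothetical, not figures from the discussion paper.

population = 10_000
base_rate = 0.05        # 5% of people actually have the predicted outcome
sensitivity = 0.90      # 90% of true cases are correctly flagged
specificity = 0.90      # 90% of non-cases are correctly left unflagged

true_cases = population * base_rate
non_cases = population - true_cases

flagged_true = true_cases * sensitivity          # correctly flagged
flagged_false = non_cases * (1 - specificity)    # incorrectly flagged

precision = flagged_true / (flagged_true + flagged_false)
print(f"Share of flagged people who actually have the outcome: {precision:.0%}")
# -> roughly 32%: about two out of three flags are wrong, despite "90% accuracy"
```

This base-rate effect is one reason predictions about individual behaviour deserve extra scrutiny before they inform high-stakes decisions.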
AI for a fairer go.
Australia’s colloquial motto is a “fair go” for all. Ensuring fairness across the many different groups in Australian society will be challenging, but this cuts right to the heart of ethical AI. There are different ideas of what a “fair go” means. Algorithms can’t necessarily treat every person exactly the same either; they should operate according to similar principles in similar situations. But while like goes with like, justice sometimes demands that different situations be treated differently. When developers need to codify fairness into AI algorithms, there are challenges in managing often inevitable trade-offs, and sometimes there is no “right” choice because what counts as optimal may be disputed. When the stakes are high, it is imperative to have a human decision-maker accountable for automated decisions; Australian laws already mandate this to a degree in some circumstances.
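To make those trade-offs concrete, here is a minimal sketch, using entirely hypothetical decisions and outcomes for two groups, of two common statistical readings of fairness applied to the same set of predictions. Narrowing one gap does not automatically narrow the other, which is exactly the kind of choice developers are forced to make explicit.

```python
# Minimal sketch (hypothetical data): two statistical readings of fairness for
# the same set of automated decisions, split across two groups A and B.
# 1 = favourable decision / outcome, 0 = unfavourable.

def selection_rate(preds):
    """Share of people in a group who received a favourable decision."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Share of people with a favourable actual outcome whom the system also favoured."""
    favoured = [p for p, y in zip(preds, labels) if y == 1]
    return sum(favoured) / len(favoured) if favoured else 0.0

# Hypothetical decisions and actual outcomes for the two groups.
preds_a, labels_a = [1, 1, 0, 1, 0], [1, 1, 0, 0, 0]
preds_b, labels_b = [1, 0, 0, 0, 0], [1, 1, 0, 0, 0]

# "Demographic parity": do both groups receive favourable decisions at the same rate?
parity_gap = abs(selection_rate(preds_a) - selection_rate(preds_b))

# "Equal opportunity": among people who deserved a favourable outcome,
# are both groups favoured equally often?
opportunity_gap = abs(true_positive_rate(preds_a, labels_a)
                      - true_positive_rate(preds_b, labels_b))

print(f"Demographic parity gap: {parity_gap:.2f}")      # 0.40
print(f"Equal opportunity gap:  {opportunity_gap:.2f}")  # 0.50
```

Neither gap is the definition of a “fair go”; which one matters more depends on context, and that judgement sits with accountable people rather than the algorithm.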
Transparency is key, but not a panacea.
Transparency and AI is a complex issue. The ultimate goal of transparency measures is accountability, but the inner workings of some AI technologies defy easy explanation. Even in these cases, it is still possible to hold the developers and users of algorithms accountable. An analogy can be drawn with people: an explanation of a person’s brain chemistry doesn’t necessarily help you understand how their decision was made; an explanation of that person’s priorities is much more helpful. There are also complex issues relating to commercial secrecy, as well as the fact that making the inner workings of an AI open to the public would leave it susceptible to being gamed.
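One hedged illustration of explaining a system’s priorities rather than its inner workings is permutation importance: perturb one input at a time and measure how much the decisions degrade. The sketch below is not drawn from the discussion paper; the scoring rule standing in for an opaque model and the (income, postcode) records are invented for the example.

```python
# Illustrative only: surfacing which inputs a decision system relies on by
# shuffling one feature at a time and measuring the drop in accuracy.
# The "model" and records below are invented stand-ins, not a real system.
import random

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(model, rows, labels, n_features, repeats=50):
    base = accuracy(model, rows, labels)
    importances = []
    for j in range(n_features):
        drops = []
        for _ in range(repeats):
            column = [r[j] for r in rows]
            random.shuffle(column)  # break the feature's link to the decisions
            perturbed = [r[:j] + (v,) + r[j + 1:] for r, v in zip(rows, column)]
            drops.append(base - accuracy(model, perturbed, labels))
        importances.append(sum(drops) / repeats)  # average accuracy lost
    return importances

# Hypothetical opaque scoring rule: approve (1) when income exceeds a threshold,
# regardless of postcode.
model = lambda row: 1 if row[0] > 50 else 0

rows = [(60, 2000), (40, 2000), (70, 3000), (30, 3000)]  # (income, postcode)
labels = [1, 0, 1, 0]

income_imp, postcode_imp = permutation_importance(model, rows, labels, n_features=2)
print(f"income: {income_imp:.2f}, postcode: {postcode_imp:.2f}")
```

The output shows the toy model leaning on income and ignoring postcode, the kind of priority-level account a reviewer can act on even when the model itself stays closed.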
Black boxes pose risks.
On the other hand, AI “black boxes”, in which the inner workings of an AI are shrouded in secrecy, are not acceptable when the public interest is at stake. Pathways forward involve a variety of measures for different situations: explainable AI technologies, testing, regulation that requires transparency about the key priorities and fairness measures used in an AI system, and measures enabling external review and monitoring. People should always be aware when a decision that affects them has been made by an AI; difficulties with automated decisions by government departments have already come before Australian courts.
Justifying decisions.
The transparency debate is one component feeding into another debate: justifiability. Can the designers of a machine justify what their AI is doing? How do we know what it is doing? An independent, normative framework can serve to inform the development of AI, as well as justify or revise the decisions made by AI. This document is part of that conversation.
Privacy measures need to keep up with new AI capabilities.
For decades, society has had rules about how fingerprints are collected and used. With new AI-enabled facial recognition, gait and iris scanning technologies, biometric information goes well beyond fingerprints in many respects. Incidents like the Cambridge Analytica scandal demonstrate how far-reaching privacy breaches can be in the modern age, and AI technologies have the potential to impact this in significant ways. We may need to further explore what privacy means in a digital world.
Keeping the bigger picture in focus.
Discussions on the ethics of autonomous vehicles tend to focus on issues like the “trolley problem”, where the vehicle must choose who to save in a life-or-death situation. Swerve to the right and hit an elderly person, stay straight and hit a child, or swerve to the left and kill the passengers? These questions are worth examining, but if widespread adoption of autonomous vehicles can improve safety and cut down on the hundreds of lives lost on Australian roads every year, then letting relatively far-fetched scenarios dominate the discussion and delay testing and implementation carries its own cost in lives. The values programmed into autonomous vehicles are important, but they need to be weighed against the potential costs of inaction.
AI will reduce the need for some skills and increase the demand for others.
Disruption in the job market is a constant. However, AI may fuel the pace of change. There will be challenges in ensuring equality of opportunity and inclusiveness. An ethical approach to AI development requires helping people who are negatively impacted by automation transition their careers. This could involve training, reskilling and new career pathways. Improved information on risks and opportunities can help workers take proactive action. Incentives can be used to encourage the right type of training at the right times. Overall, acting early improves the chances of avoiding job loss or ongoing unemployment.
AI can help with intractable problems.
Long-standing health and environmental issues are in need of novel solutions, and AI may be able to help. Australia’s vast natural environment is in need of new tools to aid in its preservation, some of which are already being implemented. People with serious disabilities or health problems are able to participate more in society thanks to AI-enabled technologies.
International coordination is crucial.
Developing standards for electrical and industrial products required international coordination to make devices safe and functional across borders. Many AI technologies used in Australia won’t be made here; there are already plenty of off-the-shelf foreign AI products in use. Regulations can induce foreign developers to work to Australian standards up to a point, but there are limits. Coordination with overseas partners, including through the International Organization for Standardization (ISO), will be necessary to ensure AI products and software meet the required standards.
Implementing ethical AI.
AI is a broad set of technologies with a range of legal and ethical implications. There is no one-size-fits-all solution to these emerging issues. There are, however, tools which can be used to assess risk and ensure compliance and oversight. The most appropriate tools can be selected for each individual circumstance.
A toolkit for ethical AI
1. Impact Assessments: Auditable assessments of the potential direct and indirect impacts of AI, which address the potential negative impacts on individuals, communities and groups, along with mitigation procedures.
2. Internal or external review: The use of specialised professionals or groups to review the AI and/or use of AI systems to ensure that they adhere to ethical principles and Australian policies and legislation.
3. Risk Assessments: The use of risk assessments to classify the level of risk associated with the development and/or use of AI.
4. Best Practice Guidelines: The development of accessible cross industry best practice principles to help guide developers and AI users on gold standard practices.
5. Industry standards: The provision of educational guides, training programs and potentially certification to help implement ethical standards in AI use and development.
6. Collaboration: Programs that promote and incentivise collaboration between industry and academia in the development of ‘ethical by design’ AI, along with demographic diversity in AI development.
7. Mechanisms for monitoring and improvement: Regular monitoring of AI for accuracy, fairness and suitability for the task at hand. This should also involve consideration of whether the original goals of the algorithm are still relevant (a minimal monitoring sketch follows this list).
8. Recourse mechanisms: Avenues for appeal when an automated decision or the use of an algorithm negatively affects a member of the public.
9. Consultation: The use of public or specialist consultation to give the opportunity for the ethical issues of an AI to be discussed by key stakeholders.
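As a sketch of toolkit item 7, the check below flags when a deployed system’s recent accuracy, or its gap in favourable outcomes between groups, drifts past agreed limits. The thresholds, window of recent decisions and group labels are assumptions made for illustration, not figures or requirements from the discussion paper.

```python
# Minimal sketch with assumed thresholds: flag when a deployed system's recent
# accuracy or gap in favourable outcomes between groups drifts past agreed limits.

def outcome_gap(decisions, groups):
    """Largest difference in favourable-decision rates between any two groups."""
    rates = {}
    for g in set(groups):
        group_decisions = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(group_decisions) / len(group_decisions)
    return max(rates.values()) - min(rates.values())

def monitor(decisions, labels, groups, min_accuracy=0.85, max_gap=0.10):
    """Return a list of alerts; an empty list means this review cycle passed."""
    accuracy = sum(d == y for d, y in zip(decisions, labels)) / len(labels)
    gap = outcome_gap(decisions, groups)
    alerts = []
    if accuracy < min_accuracy:
        alerts.append(f"accuracy {accuracy:.2f} fell below {min_accuracy}")
    if gap > max_gap:
        alerts.append(f"group outcome gap {gap:.2f} exceeds {max_gap}")
    return alerts

# Hypothetical window of recent decisions (1 = favourable), true outcomes and groups.
recent_decisions = [1, 0, 1, 1, 0, 0]
true_outcomes    = [1, 0, 1, 0, 0, 0]
group_labels     = ["A", "A", "A", "B", "B", "B"]

print(monitor(recent_decisions, true_outcomes, group_labels))
```

In practice such a check would run on a schedule over logged decisions, and a non-empty alert list would trigger the review and recourse mechanisms listed above.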
Best practice based on ethical principles.
The development of best practice guidelines can help industry and society achieve better outcomes. This requires the identification of values, ethical principles and concepts that can serve as their basis.
The paper states:
This report covers civilian applications of AI. Military applications are out of scope.
This report also acknowledges research into AI ethics occurring as part of a project by the Australian Human Rights Commission, as well as work being undertaken by the recently established Gradient Institute. This work complements research being conducted by the Australian Council of Learned Academies (ACOLA) and builds upon the Robotics Roadmap for Australia by the Australian Centre for Robotic Vision.
From a research perspective, this framework sits alongside existing standards, such as the National Health and Medical Research Council (NHMRC) Australian Code for the Responsible Conduct of Research and the NHMRC’s National Statement on Ethical Conduct in Human Research.