The eight ethical principles outlined below guide the development of professional and practice standards for the research and deployment of machine learning (ML) systems and artificial intelligence (AI) tools in medicine, specifically with regard to clinical radiology and radiation oncology. These tools should at all times serve the needs, care and safety of patients, and should respect the clinical teams who care for them.
These principles are intended to complement existing medical ethical frameworks (see appendices), which do not fully address the emerging use of machine learning and artificial intelligence in medicine. To bridge this gap, RANZCR has developed these eight ethical principles to guide the following:
- development of standards of practice for research into AI tools
- development of standards of practice for deployment of AI tools in medicine
- upskilling of radiologists and radiation oncologists in ML and AI, and
- ethical use of ML and AI in medicine.
Principle One: Safety
The first and foremost consideration in the development, deployment or use of ML systems or AI tools must be patient safety and quality of care, supported by an appropriate evidence base.
Principle Two: Avoidance of Bias
Machine learning systems and artificial intelligence tools are limited by their algorithmic design and the data they have access to, making them prone to bias. As a general rule, ML systems and AI tools trained on greater volumes and varieties of data should be less biased. Moreover, bias in algorithmic design should be minimised by involving a range of perspectives and skill sets in the design process. The data on which ML systems and AI tools are trained should be representative of the target patient population on which the system or tool will be used. The characteristics of the training data set, and the environment in which the tool was tested, must be clearly stated when marketing an AI tool, to provide transparency and to facilitate implementation in appropriate clinical settings. Particular care must be taken when applying an AI tool trained on a general population to Indigenous or minority groups. To minimise bias, the same standard of evidence used for other clinical interventions must be applied when regulating ML systems and AI tools, and their limitations must be transparently stated.
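As an illustration of the kind of representativeness check this principle implies, the minimal sketch below compares the demographic make-up of a training cohort against stated target-population proportions. It is a sketch only: the column name, categories and significance threshold are hypothetical assumptions for illustration, not requirements of these principles.

```python
# Minimal sketch: flag a demographic mismatch between a training cohort
# and the target patient population. Column name, categories and the
# significance threshold are hypothetical assumptions.
import pandas as pd
from scipy.stats import chisquare

def flag_representation_gap(train_df, target_proportions, column="ethnicity", alpha=0.05):
    """Chi-square goodness-of-fit test of the cohort's demographic distribution
    against expected target-population proportions.
    `target_proportions` maps each category to its population share (summing to 1)."""
    counts = train_df[column].value_counts()
    categories = list(target_proportions)
    observed = [counts.get(c, 0) for c in categories]
    total = sum(observed)
    expected = [target_proportions[c] * total for c in categories]
    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    return {"chi2": stat, "p_value": p_value, "mismatch": p_value < alpha}

# Example: a cohort that under-samples one group relative to the population.
cohort = pd.DataFrame({"ethnicity": ["A"] * 900 + ["B"] * 80 + ["C"] * 20})
print(flag_representation_gap(cohort, {"A": 0.70, "B": 0.20, "C": 0.10}))
```

A statistically significant mismatch does not by itself establish clinical bias, but it is an inexpensive signal that subgroup performance should be evaluated before deployment.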
Principle Three: Transparency and Explainability
ML systems and AI tools can produce results which are difficult to interpret or replicate. When these tools are used in medicine, the doctor must be able to interpret how a decision was made and to weigh up the potential for bias; this may require upskilling for medical practitioners. When designing an ML system or AI tool, consideration must be given to how its decisions can be understood and explained by a discerning medical practitioner.
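One way a designer can support this kind of scrutiny is model-agnostic occlusion sensitivity, which estimates which image regions drive a prediction. The sketch below is illustrative only: occlusion sensitivity is one of many explainability techniques and is not mandated by these principles, and `predict` is an assumed callable that maps a 2D image array to a probability for the finding of interest.

```python
# Minimal, model-agnostic occlusion-sensitivity sketch (illustrative only).
# Regions whose masking causes a large drop in model output are the regions
# the model relied on; trailing rows/columns smaller than `patch` are ignored.
import numpy as np

def occlusion_map(image, predict, patch=16, baseline=0.0):
    """Mask each patch of `image` in turn and record the drop in `predict`."""
    h, w = image.shape[:2]
    reference = predict(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = baseline
            heatmap[i // patch, j // patch] = reference - predict(masked)
    return heatmap

# Toy usage: a "model" whose output is the mean intensity of one corner,
# so the occlusion map should highlight exactly that corner.
img = np.random.rand(64, 64)
toy_predict = lambda x: float(x[:16, :16].mean())
print(occlusion_map(img, toy_predict).round(2))
```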
Principle Four: Privacy and Protection of Data
Healthcare data is amongst the most sensitive data that can be held about an individual. Every effort must be made to store a patient's data securely, in line with relevant laws and best practice. Patient data must not be transferred from the clinical environment in which care is provided without the patient's consent or approval from an ethics board. Where data is transferred or otherwise used for AI research, it must be de-identified such that the patient's identity cannot be reconstructed.
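As a concrete illustration of metadata de-identification for imaging data, the sketch below blanks common direct identifiers in a DICOM file using the pydicom library. The tag list is a hypothetical subset; rigorous de-identification should follow the DICOM standard's confidentiality profiles and applicable law, and must also consider burned-in annotations in pixel data and re-identification through linkage to other datasets.

```python
# Minimal, illustrative de-identification sketch using pydicom.
# The tag list is a hypothetical subset of direct identifiers; a production
# workflow should follow the DICOM confidentiality profiles and local law.
import pydicom

IDENTIFYING_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
    "ReferringPhysicianName", "InstitutionName", "AccessionNumber",
]

def deidentify(path_in, path_out):
    """Blank direct identifiers and strip private tags from one DICOM file."""
    ds = pydicom.dcmread(path_in)
    for keyword in IDENTIFYING_TAGS:
        if hasattr(ds, keyword):
            setattr(ds, keyword, "")   # blank the value, keep the element
    ds.remove_private_tags()           # vendor-specific tags may hold identifiers
    ds.save_as(path_out)
```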
Principle Five: Decision Making on Diagnosis and Treatment
Medicine is based on a special relationship between the doctor and the patient. The doctor is the trusted advisor on complex medical conditions, test results and procedures, who communicates findings to the patient clearly and sensitively, answers questions and agrees on next steps. While ML systems and AI tools can enhance decision-making capacity, final decisions on a patient's care must rest with the doctor, with due consideration given to the patient's presentation, history and preferences.
Principle Six: Liability for Decisions Made
Liability for decisions made about patient care rests principally with the responsible medical practitioner. However, given the multiple potential applications of ML systems and AI tools in the patient journey, there may be instances where liability is shared between:
- the medical practitioner caring for the patient
- the hospital or practice management who made the decision to deploy the system or tool, and
- the company which developed the ML system or AI tool.
The potential for shared liability needs to be identified and recorded upfront when researching or implementing ML systems or AI tools.
Principle Seven: Application of Human Values
ML systems and AI tools are programmed to operate in line with a specific world view. The role of the doctor is to apply human values, drawn from their training and the ethical framework in which they operate, together with consideration of the individual patient's personal values, in any circumstance in which ML systems or AI tools are used in medicine.
Principle Eight: Governance
ML and AI are fast-moving fields with the potential to add great value but also to cause harm. A hospital or practice using ML systems or AI tools must have accountable governance committees to oversee implementation and to ensure compliance with ethical principles and standards.