06 August 2024

Divides

'Digital disengagement and impacts on exclusion' (UK Parliamentary Office of Science and Technology, 2024) comments

• Digital disengagement refers to people who have limited access to the internet or digital devices for motivational or personal reasons, rather than other forms of digital exclusion, such as access or affordability barriers. However, reasons behind digital exclusion can be inter-related.
 
• Digital disengagement and other forms of digital exclusion are negatively associated with social, health, employment, and financial inequalities, and can compound existing inequalities. 
 
• In 2024, Ofcom estimated that 6% (1.7 million) of UK households did not have the internet at home. It is not clear how many are disengaged due to motivational reasons. However, multiple surveys indicate that lack of interest is the most cited reason for being offline. Other motivational reasons include fear of scams, or lack of confidence and skills. 
 
• Levels of digital engagement are on a spectrum. People may engage with some aspects of digital technology but not others, depending on factors associated with the task, device, confidence, or current life circumstances. 
 
• Stakeholders expressed policy considerations including refreshing the 2014 Digital Inclusion Strategy, improving accessibility, developing digital skills, empowering choice when using technology, and preserving non-digital services and solutions.

The Note goes on to state

Digital disengagement and impacts on exclusion 

Digital disengagement refers to motivational and personal reasons for not being online or using digital devices. It is closely linked to digital exclusion, which broadly refers to people who cannot fully participate in society because they have limited access to internet or digital devices, or are unable to use them. 

Issues associated with digital exclusion (Box 1) are well recognised in academic research and UK policy. Motivational barriers have not been researched in as much depth, or been as much of a focus in policy, as ability, access and affordability.

It can be challenging to separate issues associated with digital disengagement and other forms of digital exclusion. This POSTnote focuses on digital disengagement and references digital exclusion more broadly where relevant and inter-linked.

Key issues associated with digital exclusion 

Motivation: Motivational or personal barriers preventing people from engaging online include lack of interest, low confidence, mistrust in the internet, or challenges with using the technology due to inaccessibility (see potential reasons for disengagement). People may make a deliberate choice to not engage with some digital activities, such as owning a smartphone (see selective engagement). 

Ability: Those lacking basic digital skills are excluded by not being able to navigate the online environment.  Lack of skills can also affect motivation. 

Access: Digital exclusion can result from people lacking the infrastructure to access the internet, such as not having adequate broadband or devices to connect with.

Affordability: Digital exclusion can occur if people cannot afford the costs of being online. Ofcom’s 2024 Adults’ Media Use and Attitudes report found that 17% of people who did not have the internet at home cited reasons relating to cost, for example the cost of broadband and devices.

What is digital disengagement? 

Disengagement may be an active choice, as people have differing views on the benefits of online engagement and preferences for non-digital options. It can alternatively be due to external factors that are beyond the person’s control. 

Digital disengagement does not necessarily mean that people are completely offline. Online engagement may be selective, for example based on the type of task (see selective engagement).

Survey data from Ofcom and others indicate that motivational barriers are the most common reason for being offline (Figure 1). 

In 2023, respondents were over four times more likely to cite lack of interest as a reason for non-use than cost (Figure 1).

It is not clear exactly what proportion of the public are disengaged, as people may have multiple reasons for being offline and disengagement is not clearly defined. Research on motivational factors is often based on small sample sizes, potentially because those offline are a smaller and harder-to-reach proportion of the public (see future research and representation). The proportion of the public that does not have internet at home has been decreasing, which indicates that the population still not regularly online is increasingly made up of those who are not interested (Figure 1).

AI Implementation in Hospitals

'AI Implementation in Hospitals: Legislation, Policy, Guidelines and Principles, and Evidence about Quality and Safety' (Australian Commission on Safety and Quality in Health Care, 2024) comments 

To harness the enormous benefits of Artificial Intelligence (AI) in healthcare, implementation and use must be done safely and responsibly. The Commission engaged Macquarie University and the University of Wollongong to undertake a literature review and environmental scan to identify principles that enable the safe and responsible implementation of AI in healthcare. ... 

The purpose of this report is to provide a review of the recent literature and undertake an environmental scan to identify principles that enable the safe and responsible implementation of AI in healthcare. It presents evidence from the contemporary published literature about AI implemented in acute care as well as current, published legislation, policies, guidelines, and principles for AI implementation in healthcare. The findings will be considered by the Australian Commission on Safety and Quality in Health Care (ACSQHC) for future development of resources to assist healthcare organisations in evaluating and implementing AI. 

Policy scan and principles for safe and responsible AI in healthcare 

Chapters 2 and 3 report the findings from an environmental scan of international (USA, UK, New Zealand, Canada, Singapore), intergovernmental (WHO, OECD and EU) and national legislation and policy to gain insight about principles (e.g. guidelines, governing ideas, and strategies) for implementation of AI in acute care. The review covers both cross-sectoral legislation and policy that is relevant in healthcare, as well as healthcare-specific legislation and policy. 

Key findings from the environmental scan of national and international legislation and policy are:

• Governance of AI in healthcare is not limited to new AI-specific laws, but also involves primary legislation and policy (e.g. privacy laws, human and consumer rights law, and data protection laws). 

• Similar to Australia, national ethics frameworks are common in the reviewed countries and influence policy formulation. These frameworks are designed to support healthcare organisations in those jurisdictions by guiding the implementation of AI in their practice. The US Department of Health and Human Services drew on a national ethics framework to develop a playbook to guide health departments in embedding ethical principles in AI development, acquisition, and deployment (1). Internationally, governance approaches include establishing dedicated regulatory and oversight authorities (including healthcare-specific bodies), requiring risk-based or impact assessments, provisions to increase transparency or prohibit discrimination, regulatory sandboxing, as well as formal tools or checklists. 

• Australia’s National Ethics Framework is commonly used to frame Australian policy. The Australian Government has commenced development of a national risk-based approach to cross-sectoral AI regulation (2), based on four principles: i/ balanced and proportionate (achieved via risk-based assessment); ii/ collaborative and transparent (achieved via public engagement and expert involvement); iii/ consistent with international requirements; iv/ putting community first. This national approach will shape the future of AI governance and implementation in health services; in some jurisdictions, such as NSW, good progress has been made on developing state-based governance frameworks, including in health (see Section 3.3.1 page 49-50). The NSW Government’s AI Ethics Principles are embedded in the NSW AI Assurance Framework, which applies to uses of AI in the NSW health system. 

• Current developments in Australian governance and regulation of AI in healthcare include governance via existing cross-sectoral approaches (e.g. privacy and consumer law), regulation of software as a medical device, and specific health governance proposals from research and health organisations. The most significant developments in the healthcare sector are policy initiatives by the Australian Alliance for Artificial Intelligence in Healthcare (AAAiH) (73), The Royal Australian and New Zealand College of Radiologists (3), and the Australian Medical Association (4).

 Legislative and policy environment

• The AAAiH National Policy Roadmap Process has recommended, by consensus, that Australia establish an independent National AI in Healthcare Council to oversee AI governance in health. This Council should be established urgently. Its work should be shaped by the National AI Ethics Principles and the recommendations made by consensus in the National Policy Roadmap process. One of the key issues to address is practical guidance on clarifying consent and transparency requirements. The Roadmap also recommended that the Council engage individual professional bodies to develop profession-specific codes of conduct, and oversee accreditation regarding minimum AI safety and quality standards of practice covering cybersecurity threats, patient data storage and use, and best practice for deployment, governance and maintenance of AI. Such accreditation could fall under the remit of the ACSQHC’s accreditation scheme. 

• AAAiH’s recommendation for a risk-based safety framework also called for the improvement of national post-market safety monitoring so that cases of AI-related patient risk and harm are rapidly detected, reported and communicated. 

• Both the AAAiH and the Medical Technology Association of Australia (MTAA) recommended development of a formal data governance framework as well as mechanisms to provide industry with ethical and consent-based access to clinical data to support AI development and leverage existing national biomedical data repositories. 

• The Australian legislative and policy environment for AI is rapidly changing: upcoming developments include changes in cross-sectoral legislation (e.g. privacy law) and an intended national risk-based approach to AI legislation.

• Review of Australian guidance documents showed that detailed legal analysis of privacy requirements with respect to AI implementation in healthcare (see 3.3.4 Privacy and confidentiality), and detailed legal analysis of accountability and liability in AI use (see 3.3.7 Accountability and liability), may be warranted, as these are not as well resolved in Australia as in some other jurisdictions. This could potentially support legal reform.

Key issues for health organisations and clinicians 

• Ensure high quality, local, practice-relevant evidence of AI system performance before implementation. 

• Significant training and support for clinicians and other health workers is required during the implementation and integration of AI systems into existing clinical information systems or digital health solutions (e.g., electronic medical records, EMR). Training includes skill development to use the AI system, but also includes training in ethical and liability considerations, cybersecurity, and capacity to inform patients about the use of AI in their care (see Chapter 6, section 6.10 for details). 

• Ensure AI implementation, and organisational policy, complies with existing legislation (e.g. data privacy, consumer law, and cybersecurity policy) and relevant AI ethics frameworks. 

• AI governance should build on existing governance processes in healthcare organisations, e.g. for patient safety, digital health and research ethics. This is necessary to ensure safe and responsible use of AI, as well as to clarify lines of individual and organisational responsibility over AI-assisted clinical and administrative decision-making in a way that complies with existing liability rules.

• Strengthen engagement with consumers, communities, and stakeholders in healthcare AI implementation to ensure trustworthiness, and to shape implementation and use of consumer- or patient-facing AI. An example of policy-orientated community engagement is illustrated by a national Australian citizens’ jury convened to deliberate about AI implementation in healthcare. See Box 2 in Chapter 3, section 3.3.2 for the jury’s recommendations.   

• Implementation of AI in health services should ensure appropriate Aboriginal and Torres Strait Islander governance, by connecting AI governance processes in health systems to existing Aboriginal and Torres Strait Islander governance structures. Implementation should be in line with principles of Indigenous Data Sovereignty. 

• Transparency and consent are key issues for implementation of AI in health services. Governance of transparency and consent should draw on existing expertise and governance systems in healthcare organisations, including clinical ethics committees, research ethics committees, digital health committees, consumer governance committees and risk management structures. In developing approaches to transparency and consent, health organisations should note that:

o Fundamental requirements for consent in clinical contexts—that a person must have capacity, consent voluntarily and specifically, and have sufficient information about their condition, options, and material risks and benefits—remain unchanged by the use of AI.

o There is limited guidance available regarding requirements for consent to the use of AI as an element of clinical care.

o Across the policy documents reviewed, there is strong agreement that there should be transparency about the fact that AI is being used.

o Also consider transparency regarding training data, data bias, AI system performance and evaluation methods.

o Risk-based assessment could require greater transparency for higher-risk applications.

o As noted above, consent and transparency are potential areas of focus for a National Council on AI in Health.

• Implement risk assessment frameworks to address the risk of bias, discrimination or unfairness in initial evaluation and ongoing monitoring of AI systems. See Appendix A for an example of an impact assessment tool. 

• Ensure use of existing patient safety and quality systems for monitoring AI incidents and safety events (including hazards and near miss events) as well as post-market safety monitoring so that cases of AI-related patient risk and harm are rapidly detected, reported and managed. 

Chapters 4 and 5 report findings from a scoping review of the literature to identify principles for safe and responsible implementation of AI at the health service level. The review covers 75 primary studies about AI systems deployed in acute care that were published in the peer-reviewed literature from 2021 to 2023, as well as nine studies reporting emerging safety problems associated with AI in healthcare. For healthcare organisations, safe and responsible AI builds on best-practice approaches for digital health.

Key findings and principles are:

AI in acute care settings 

Key finding 1: AI technologies are being applied in a wide variety of clinical areas, with studies identifying clear clinical use cases for their implementation. The most common clinical tasks supported by AI systems are diagnosis and procedures. All the AI systems identified in the literature search were based on traditional machine learning (ML) techniques, and most were assistive, requiring clinicians to confirm or approve AI-provided information or decisions. Up until December 2023, no studies had evaluated the implementation of AI in hospital operations or the clinical use of foundation models or generative AI in routine patient care. 

Principle 1: Take a problem-driven approach to AI implementation: an AI system should address specific clinical needs. Confirm the specific clinical use case before implementation, i.e. the types of patients and conditions where the AI system is intended to improve care delivery and patient outcomes. 

Approach to AI implementation 

Key finding 2: The literature demonstrated multiple ways in which health services implemented AI systems: i/ developing AI systems in-house; ii/ co-developing in partnership with technology companies; and iii/ purchasing AI systems from commercial vendors (including AI systems subject to medical device regulation). Evidence of engagement with hospital ethics committees or clinical governance boards from a responsible use perspective was poorly reported in the studies reviewed.  

Principle 2: Deployment of AI systems that have been developed externally or internally is a highly complex process and should be undertaken in partnership with key stakeholders, including healthcare professionals and patients. Consultation should occur with those who have specialist skills traversing clinical safety, governance, ethics, IT system architecture, legal and procurement, and include the specific healthcare professionals as well as patient representatives and/or patient liaison officers. 

Principle 3: When purchasing AI systems from commercial vendors, assess clinical applicability and feasibility of implementation in the care setting. Consider the system performance and whether the ML model will transport from its training and validation environment to the local clinical setting of interest. Consider the feasibility of testing the AI using localised de-identified datasets or localised synthetic datasets to elicit the utility and performance of the AI system in the local clinical area of interest, before conducting pilot implementation projects. 

AI system performance 

Key finding 3: AI system performance was usually assessed against a comparator (e.g. human or another device). Evaluation metrics such as sensitivity, specificity, positive predictive value, accuracy and F1 score were commonplace amongst the literature. 

Principle 4: Ensure AI is fit for clinical purposes by assessing evidence for system performance against a comparator. Evaluate performance in the local context of interest using localised de-identified datasets or synthetic datasets, before conducting pilot implementation projects to measure AI system performance and answer any evidence gaps in prior assessments. 
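
As an editorial aside, not part of the Commission's report: the metrics named in Key finding 3 are all simple functions of a binary confusion matrix, and evaluating them on a localised de-identified or synthetic dataset (as Principle 4 suggests) amounts to deriving the confusion-matrix counts from that local data. A minimal Python sketch of the standard definitions, using hypothetical counts, is:

# Editorial sketch only, not from the Commission's report: standard definitions
# of the metrics named in Key finding 3, computed from a 2x2 confusion matrix.
# All counts below are hypothetical placeholders.

def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return common binary-classifier performance metrics."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # recall / true positive rate
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # true negative rate
    ppv = tp / (tp + fp) if (tp + fp) else 0.0          # positive predictive value (precision)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = (2 * ppv * sensitivity / (ppv + sensitivity)) if (ppv + sensitivity) else 0.0
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "accuracy": accuracy, "f1": f1}

if __name__ == "__main__":
    # Hypothetical counts from a local, de-identified validation sample.
    print(classification_metrics(tp=80, fp=20, tn=890, fn=10))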

Key finding 4: Emerging evidence highlights the impact of distributional shift, stemming from disparities between the dataset on which AI systems are trained and deployment datasets. However, studies describing implementation lacked any reported quality assurance measures, such as post-deployment monitoring, auditing, or performance reviews. 

Principle 5: Monitor AI system performance in situ post-deployment, by means of electronic dashboards or other performance monitoring/auditing methods, to rapidly detect and mitigate the effects of distributional shift. This should be underpinned by technical support as well as processes around planned and unplanned system downtime. 
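
Again as an editorial aside, not part of the Commission's report: the post-deployment monitoring for distributional shift described in Principle 5 can be illustrated with a minimal sketch that compares recent values of one input feature against a reference sample from the training era, using a two-sample Kolmogorov–Smirnov test. The feature name, data and alert threshold below are hypothetical.

# Editorial sketch only, not from the Commission's report: flag possible
# distributional shift by comparing recent values of one input feature against
# a reference sample from the training era, using a two-sample
# Kolmogorov-Smirnov test. Feature name, data and threshold are hypothetical.
from scipy.stats import ks_2samp

ALERT_P_VALUE = 0.01  # illustrative threshold for raising a drift alert

def check_drift(reference, recent, feature):
    """Return True (and print an alert) if the recent data looks shifted."""
    result = ks_2samp(reference, recent)
    drifted = result.pvalue < ALERT_P_VALUE
    if drifted:
        print(f"ALERT: possible distributional shift in '{feature}' "
              f"(KS statistic={result.statistic:.3f}, p={result.pvalue:.4f})")
    return drifted

if __name__ == "__main__":
    # Hypothetical example: a laboratory value as seen at training time vs. last week.
    training_sample = [13.5, 14.1, 12.8, 13.9, 14.4, 13.2, 12.9, 14.0, 13.7, 13.3]
    recent_sample = [11.2, 10.9, 11.8, 10.5, 11.4, 12.0, 10.8, 11.1, 11.6, 11.3]
    check_drift(training_sample, recent_sample, feature="haemoglobin_g_dl")

In practice such checks would feed the dashboards and auditing processes the report describes, alongside procedures for planned and unplanned system downtime.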

Safety of AI in healthcare 

Key finding 5: Emerging evidence underscores safety concerns associated with AI systems and their impact on patient care. Although literature reporting on AI-related adverse events has been limited, evidence from the US FDA’s post-market safety monitoring emphasises the necessity of examining issues with AI systems beyond the known limitations of ML algorithms. Predominantly, issues with data acquisition were observed, while problems with use, i.e. the misapplication of AI relative to its intended purposes, were four times more likely to lead to patient harm than technical issues. 

Principle 6: A whole-of-system approach to safe AI implementation is needed. Ensure that AI systems are effectively integrated into IT infrastructure as they are highly reliant on data and integration with the IT infrastructure and other clinical information systems. Data quality and requirements for any accompanying changes to the EMR and other supporting clinical information systems need to be assessed to ensure data provided to the AI system is fit for purpose and its output is accurately displayed to users. 

Role of AI in clinical task, clinical workflow, usability, and safe use 

Key finding 6: AI systems in the literature were predominantly assistive or provided autonomous information, meaning users were required to confirm or approve AI-provided information or decisions and retained overall autonomy over the task at hand. However, problems with the use of AI were more likely to harm patients than algorithm issues in safety events reported to the US FDA’s post-market safety monitoring. 

Principle 7: Ensure that users are aware of the intended use of AI systems (see Box 3). Training around the intended use and safe use of AI should be developed in consultation with the AI developer, clinical governance, patient safety and clinical leaders. The training should be maintained and updated throughout the life cycle of the AI system. 

Key finding 7: End-user engagement to devise clinical workflows, and training ahead of deployment, were less well reported in the literature. In understanding the interaction with and adoption of AI systems in healthcare workflows, user experience data and user metrics uncovered facilitators and barriers.  

Principle 8: Integrate AI systems with clinical workflow. Devise clinical workflows for AI systems in a real-world care setting to ensure AI is seamlessly integrated into practice. Evaluate early to ensure AI fits local requirements and address any issues. A pilot implementation can be used to test and refine integration with clinical workflow and supporting systems. 

Principle 9: Identify issues with system usability via user metrics and short, regular survey requests. Address these issues promptly by collaboration with the AI developer and clinicians using the system. 

Clinical utility and effects on decision-making 

Key finding 8: Decision change outcomes, such as incorrect/correct decisions and the rate at which clinicians make decisions (their decision velocity), help to characterise effects of AI systems on clinical decision-making. Confidence, acceptability and trust in the AI system were important factors in decision change. 

Principle 10: Limitations of the AI system’s abilities must be made clear to all staff engaging with the AI system. This can be fostered by collaboration with the AI developer and strong engagement with clinicians in both pre-deployment and post-deployment phases. AI incidents and safety events (including hazards and near miss events) should be easy to report and escalate. 

Principle 11: Before-and-after studies or historical cohort studies can be utilised to assess the clinical utility and safety of AI compared to a time period when AI was not implemented. 

Effects on care delivery and patient outcomes 

Key finding 9: Care process changes were not well described in the literature. However, clinical outcomes were ubiquitously reported as primary, secondary and exploratory outcomes, with many studies having a clinical outcome as the study primary endpoint. 

Principle 12: Ensure AI systems are suitably embedded i.e. their use and clinical utility in a particular context is established using formative evaluation methods during implementation before conducting clinical trials to assess impact on care delivery and patient outcomes. 

Conclusion 

The adoption of AI technologies in Australian healthcare is still in its early stages. By safely and responsibly implementing the current generation of AI, Australian health services can prepare for the future. This involves building on existing governance processes, strengthening engagement with consumers, utilising the available data infrastructure, and establishing robust processes for evaluating the performance, clinical utility, and usefulness of AI assistance based on current best practices for implementing digital health systems. Preparation is crucial as healthcare AI systems evolve from making recommendations to autonomously performing clinical tasks. Moreover, Australia has the opportunity to provide guidance to other countries seeking to use modern AI systems to improve care delivery and patient outcomes effectively and safely.

05 August 2024

Phakes

Another article on phakes in Africa. 'Prevalence of substandard, falsified, unlicensed and unregistered medicine and its associated factors in Africa: a systematic review' by Biset Asrade Mekonnen, Muluabay Getie Yizengaw and Minichil Chanie Worku in (2024) 17(1) Journal of Pharmaceutical Policy and Practice comments 

Substandard, falsified, unlicensed, and unregistered medicines pose significant risks to public health in developed and developing countries. This systematic review provides an overview of the prevalence of substandard, falsified, unlicensed, and unregistered medicine and its associated factors in Africa. 

Articles published from April 2014 to March 2024 were searched in Google Scholar, Science Direct, PubMed, MEDLINE, and Embase. The search strategy focused on open-access articles published in peer-reviewed scientific journals and studies exclusively conducted in African countries. The quality of the studies was assessed according to the Medicine Quality Assessment Reporting Guidelines (MEDQUARG). This systematic review was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA). 

Of the 27 studies, 26 had good methodological quality after a quality assessment. Of the 7508 medicine samples, 1639 failed at least one quality test and were confirmed to be substandard/falsified medicines. The overall estimated prevalence of substandard/falsified medicines in Africa was 22.6% (1718/7592). The average prevalence of unregistered medicines was 34.6% (108/312). Antibiotics, antimalarial, and antihypertensive medicines accounted for 44.6% (712/1596), 15.6% (530/3530), and 16.3% (249/1530), respectively. Approximately 60.7% (91/150) were antihelmintic and antiprotozoal medicines. Poor market regulatory permission, free trade zones, poor registration, high demand, and poor importation standards contribute to the prevalence of these problems. 

Substandard, falsified, and unregistered medicines are highly prevalent in Africa, and attention has not been paid to the problem. Antibiotics, antimalarial, anthelmintic, and antiprotozoal medicines are the most commonly reported substandard, falsified, and unregistered medicines. A consistent supply of high-quality products, enhancement of registration, market regulatory permission, and importation standards are essential to counter the problems in Africa. Preventing these problems is the primary duty of every responsible nation to save lives.

04 August 2024

Power

'The Imperial Supreme Court' by Mark A Lemley in (2022) 136(1) Harvard Law Review 97 comments  

The past few years have marked the emergence of the imperial Supreme Court. Armed with a new, nearly bulletproof majority, conservative Justices on the Court have embarked on a radical restructuring of American law across a range of fields and disciplines. Unlike previous shifts in the Court, this one isn’t marked by debates over federal versus state power, or congressional versus judicial power, or judicial activism versus restraint. Nor is it marked by the triumph of one form of constitutional interpretation over another. On each of those axes, the Court’s recent opinions point in radically different directions. The Court has taken significant, simultaneous steps to restrict the power of Congress, the administrative state, the states, and the lower federal courts. And it has done so using a variety of (often contradictory) interpretative methodologies. The common denominator across multiple opinions in the last two years is that they concentrate power in one place: the Supreme Court. 

My goal in this essay is not to criticize these decisions on the merits, though there is much to criticize; lots of others will do that. Nor do I aim simply to make the legal realist point that the Justices will do what they want in the cases before them, though the last few Terms provide ample evidence for that claim too. Rather, my argument is that the Court has begun to implement the policy preferences of its conservative majority in a new and troubling way: by simultaneously stripping power from every political entity except the Supreme Court itself. The Court of late gets its way, not by giving power to an entity whose political predilections are aligned with the Justices’ own, but by undercutting the ability of any entity to do something the Justices don’t like. We are in the era of the imperial Supreme Court. 

In Part I, I show that the Court has not been favoring one branch of government over another, or favoring states over the federal government, or the rights of people over governments. Rather, it is withdrawing power from all of them at once. I also show that this result cannot be explained by any consistent judicial philosophy. The Court is happy to embrace conflicting philosophies to achieve the ends it wants in the case before it. In Part II, I suggest that the imperial Supreme Court is something new and dangerous and that we must consider more radical options to protect the American form of government.