27 December 2023

Devices

'Off-Label Preemption' by David A Simon in (2024) Wisconsin Law Review (Forthcoming) comments 

A significant body of scholarship examines when federal law regulating drugs and devices preempts state law claims against manufacturers for defective products based on uses approved by the Food and Drug Administration (FDA)—what are called on-label uses. Yet scholars have paid little attention to how preemption applies to claims against manufacturers that promote uses FDA has not approved—what are called off-label uses. The omission is significant. Off-label use is widespread (comprising a significant portion of all uses) and risky (frequently unsupported by scientific evidence). In private lawsuits against manufacturers that promote off-label uses, preemption is often the linchpin issue. Courts analyzing whether federal law preempts state law claims based on off-label promotion have reached wildly inconsistent results. Despite the issue’s importance, few scholars have systematically evaluated the off-label preemption landscape or provided a coherent rationale for how courts should apply preemption doctrine to state law claims based on off-label promotion. 

 This Article does both by developing a theory of off-label preemption that anchors doctrinal analysis to FDA’s central function: ex ante risk evaluation of drugs and devices. Emphasizing FDA ex ante review of risks delivers three significant payoffs. First, it provides an organizing principle that explains the Supreme Court’s seemingly fractured preemption jurisprudence. Second, the theory unifies conflicting approaches to preemption of state law claims based on off-label promotion. Finally, it offers a normative reason for why preemption should not apply to state law claims based on off-label promotion of any drug or device.

23 December 2023

AI governance platitudes and pieties

The UN multi-stakeholder High-level Advisory Body on Artificial Intelligence interim report on governing AI for humanity states 

1. Artificial intelligence (AI) increasingly affects us all. Though AI has been around for years, capabilities once hardly imaginable have been emerging at a rapid, unprecedented pace. AI offers extraordinary potential for good — from scientific discoveries that expand the bounds of human knowledge to tools that optimize finite resources and assist us in everyday tasks. It could be a game changer in the transition to a greener future, or help developing countries transform public health and leapfrog challenges of last mile access in education. Developed countries with ageing populations could use it to tackle labour shortages. 
 
2. Yet, there are also risks. AI can reinforce biases or expand surveillance; automated decision-making can blur accountability of public officials even as AI-enhanced disinformation threatens the process of electing them. The speed, autonomy, and opacity of AI systems challenge traditional models of regulation, even as ever more powerful systems are developed, deployed and used. 
 
3. The opportunities and the risks of AI for people and society are evident and have seized public interest. They also manifest globally, with geostrategic tensions over access to the data, compute, and talent that fuel AI, with talk of a new AI arms race. Nor are the benefits and risks equitably distributed. There is a real danger, even if humanity harnesses only the positive aspects of AI, that those will be limited to a club of the rich. 
 
4. This technology cries out for governance, not merely to address the challenges and risks but to ensure we harness its potential in ways that leave no one behind. A key measure of our success is the extent to which AI technologies help achieve the Sustainable Development Goals (SDGs). As an example, Box 1 illustrates AI’s potential in tackling climate change and its impact (SDG 13) .... 
 
5. The High-level Advisory Body on AI was formed to analyse and advance recommendations for the international governance of AI. We interpret this mandate not merely as considering how to govern AI today, but also how to prepare our governance institutions for an environment in which the pace of change is only going to increase. AI governance must therefore reflect qualities of the technology itself and its rapidly evolving uses — agile, networked, flexible — as well as being empowering and inclusive, for the benefit of all humanity. 
 
6. Our work does not take place in a normative or institutional vacuum. The UN is guided by rules and principles to which all of its member states commit. These shared and codified norms and values are the lodestar for all of its work, including AI governance. Norms including commitments to the UN Charter, the Universal Declaration of Human Rights, and international law including environmental law and international humanitarian law, are applicable to AI. Institutions created in support of multilateral objectives from peace and security to sustainable development have roles to play in cultivating the opportunities while safeguarding against risks. 
 
7. Nonetheless, we share the sense of urgency held by complementary governance initiatives on AI, including those by states as well as regional and intergovernmental processes such as the EU, the G7, the G20, UNESCO, and the OECD, among others. More inclusive engagement is needed, however, as many communities — particularly in the Global South or Global Majority — have been largely missing from these discussions, despite the potential impact on their lives. A more cohesive, participatory, and coordinated approach, involving diverse communities worldwide, is needed. 
 
8. The United Nations holds no panacea for the governance of AI. But its unique legitimacy as a body with universal membership founded on the UN Charter, agreed universally, as well as its commitment to embracing the diversity of all peoples of the world, offer a pivotal node for sharing knowledge, agreeing on norms and principles, and ensuring good governance and accountability. Within the UN system, plans for the Global Digital Compact and the Summit of the Future in September 2024 offer a pathway to timely action. 
 
9. The Advisory Body comprises individuals diverse by geography and gender, discipline and age; it draws expertise from government, civil society, the private sector, and academia. Intense and wide-ranging discussions yielded broad agreement that there is a global governance deficit in respect of AI and that the UN has a role to play. 
 
10. In this report, we first identify opportunities and enablers that can help harness the potential benefits of AI for humanity. Second, we highlight risks and challenges that AI presents now and in the foreseeable future. Third, we argue that addressing the global governance deficit requires clear principles, as well as novel functions and institutional arrangements to meet the moment. The report concludes with preliminary recommendations and next steps, which will be elaborated in our final report by August 2024. 
 
11. Though we are confident of the broad direction, we know that we do not take this journey alone. We look forward to consulting widely on next steps to ensure that more voices and views are included, and that AI serves our common good. 
 
The Global Governance Deficit 
 
12. Though AI is transforming our world, its development and rewards are currently concentrated among a small number of private sector actors in an even smaller number of states. The harms are also unevenly spread. Global governance with equal participation of all member states is needed to make resources accessible, make representation and oversight mechanisms broadly inclusive, ensure accountability for harms, and ensure that geopolitical competition does not drive irresponsible AI or inhibit responsible governance. 
 
13. The United Nations lies at the heart of the rules-based international order. Its legitimacy comes from being a truly global forum founded on international law, in the service of peace and security, human rights, and sustainable development. We believe that this offers the institutional and normative foundation for collective action in global governance of AI. Apart from considerations of equity, access, and prevention of harm, the very nature of the technology itself — AI systems being transboundary in structure, function, application, and use by a wide range of actors — necessitates a global approach. 
 
14. Pieces of this puzzle are being filled by self-regulatory initiatives, national and regional laws, and the work of multilateral forums. Yet, gaps remain and the challenge is clear: a global governance framework is needed for this rapidly developing suite of technologies and its use by various actors, be they the developers or users of the technology. AI presents distinctly global challenges and opportunities that the UN is uniquely positioned to address, turning a patchwork of evolving initiatives into a coherent, interoperable whole, grounded in universal values agreed by its member states, adaptable across contexts. 
 
15. The next three sections outline roles an institution or a network of institutions anchored in the UN’s universal framework could play in expanding the benefits of AI and mitigating its risks, as well as the principles and functions that will best achieve these ends. 
 
Opportunities and Enablers 
 
16. AI has the potential to transform access to knowledge and increase efficiency around the world. A new generation of innovators is pushing the frontiers of AI science and engineering. AI is increasing productivity and innovation in sectors from healthcare to agriculture, in both advanced and developing economies. Alongside such growth are questions about which enablers are required to ensure benefits are spread equitably and safely across humanity, and that disruptive impacts, including on jobs, are addressed and managed. An important question for policy makers is how to grow successful AI ecosystems around the world while holding established and emerging players accountable. 
 
Sectoral opportunities 
 
AI will have a greater impact in some sectors than in others. Among the most promising are agriculture and food security, health, education, protection of the environment, resilience to natural disasters, and combating climate change. For example, AI has been used to create early-warning systems for floods (now covering over 80 countries), as well as for wildfires and food insecurity. AI is being used to monitor endangered species (e.g., dolphins, whales) and to optimize agricultural practices. Within each field, there are myriad possibilities. AI is broadening access to quality care, for example in maternal health care in Sub-Saharan Africa. Similar possibilities exist with respect to environmental problems, making education more accessible, helping ease poverty and hunger, and making cities safer. 

Scientific opportunities 

AI is transforming the way in which scientific research is performed and is expanding the frontier of scientific advancement, including by accelerating molecular and genomic research. AI systems show special promise for accelerating the work of scientists across many disciplines and for a potential paradigm shift in the way science is practised, from helping explore new discovery spaces to automating experimentation at scale. For example, AI-powered tools that predict protein structures are being used by over a million researchers for drug discovery and to advance understanding of diseases like tuberculosis, as well as many previously neglected diseases. In the healthcare space, AI is powering diagnostic tools that help doctors detect various types of cancers and eye-related diseases in a more timely manner, thereby saving lives. In the energy space, AI is playing a role in optimizing energy systems and advancing the transition to renewable energies. For example, AI has been used to boost the value of wind energy, control tokamak plasmas in nuclear fusion, and enable carbon capture. There is scope for the UN to encourage progress in AI-enabled science by focusing attention on questions worth solving for the global good. 

Public sector opportunities 

Crucially, AI may drive progress in areas where market forces alone have traditionally failed. These range from extreme weather forecasting and monitoring biodiversity to expanding educational opportunities and access to quality healthcare, and optimizing energy systems. Governments and the public sector can improve services for citizens and strengthen delivery for vulnerable communities by leveraging AI for social good. 

Opportunities for the UN to harness AI 

Finally, the use of AI can accelerate progress towards the Sustainable Development Goals and enhance the role and effectiveness of the UN in promoting sustainable development, human rights, and peace and security. For example, the UN can use AI to monitor the development of crisis situations in different parts of the world, including human rights abuses, or to measure progress on the SDGs. While many have noted the potential of AI to contribute to many of the 17 SDGs, many have also noted significant barriers to fully leveraging that potential. The UN and other international organizations have started to build promising AI use cases and demonstrations in areas such as prediction of food insecurity, managing relief operations, and weather forecasting. 
 
Key enablers for harnessing AI for humanity 
 
17. The development of AI is now driven by data, compute, and talent, sometimes supplemented by manual labelling labour. Currently, only well-resourced member states and large technology companies have access to the first three, leading to a concentration of influence. In addition to global shortages of crucial hardware such as GPUs, there is also a dearth of top technical talent in the field of AI. It has been suggested that open model development may alter this dynamic, though the impact and safety of open models is still being analysed and debated. 
 
18. The AI opportunity arrives at a difficult time, especially for the Global South. An “AI divide” lurks within a larger digital and developmental divide. According to ITU estimates for 2023, more than 2.6 billion people still lack access to the Internet. The basic foundations of a digital economy — broadband access, affordable devices and data, digital literacy, reliable and affordable electricity — are not there. Fiscal space is constrained and the international environment for trade and investment flows is challenging. Critical investments will be needed in basic infrastructure such as broadband and electricity, without which the ability to participate in the development and use of AI will be severely limited. Even outside the Global South, taking advantage of AI will require efforts to develop local AI ecosystems, the ability to train local models on local data, as well as fine-tuning models developed elsewhere to suit local circumstances and purposes. 
 
19. Access and benefits must go hand in hand. Entrepreneurs in regions lagging in AI capacity require and deserve the ability to create their own AI solutions. This requires national investments in talent, data, and compute resources, as well as national regulatory and procurement capacity. Domestic efforts should be supplemented by international assistance and cooperation, not only among governments but also private sector players. Rallying scientists to solve societal challenges could be a key enabler for harnessing AI’s potential for humanity. Open-source sharing of data and models could play an important role in spreading the benefits of AI and in developing beneficial data and AI value chains across borders. 
 
20. Enablers (‘common rails’) for AI development, deployment and use would need to be balanced with ‘guard rails’ to manage impact on societies and communities. A litmus test will be the extent to which AI governance efforts yield human augmentation rather than human replacement or alienation as the outcome. Some AI development relies on cheap and exploitable labour in the Global South. Even in the Global North, there are questions related to valuing artistic expression, intellectual property, and the dignity of human labour. Equitable access to these technologies and relevant skills to make full use of them are needed if we are to avoid “AI divides” within and across nations. 
 
Governance as a key enabler 
 
21. AI can and should be deployed in support of the Sustainable Development Goals. But doing so cannot rely on current market practices alone, nor should it rely on the benevolence of a handful of technology companies. Any governance framework should shape incentives globally to promote these larger and more inclusive objectives and help identify and address trade-offs. 
 
22. Comparisons with other sectors offer potential lessons. Mechanisms such as Gavi, the Vaccine Alliance, may suggest short-term examples for ensuring that the benefits are shared. Repositories of AI models that can be adapted to different contexts could be the equivalent of generic medicines to expand access, in ways that do not promote AI concentration or consolidation. 
 
23. Some of these societally beneficial aspirations may be realized by advances in AI research itself; others may be addressed by leveraging novel market mechanisms to level the playing field, or by incentivizing actors to reach all communities and enable benefits to be accessible to all. But many will not. Ensuring that AI is deployed for the common good, and that its benefits are distributed equitably, will require governmental and intergovernmental action with innovative ways to incentivize participation from private sector, academia and civil society. A more lasting solution is to enable federated access to the fundamentals of data, compute, and talent that power AI — as well as ICT infrastructure and electricity, where needed. Here, the European Organization for Nuclear Research (CERN), which operates the largest particle physics laboratory in the world, and similar international scientific collaborations may offer useful lessons. A ‘distributed-CERN’ reimagined for AI, networked across diverse states and regions, could expand opportunities for greater involvement. Other examples of open science relevant to AI include the European Molecular Biology Laboratory (EMBL) in biology or ITER, the International Thermonuclear Experimental Reactor. 
 
Risks and Challenges 
 
24. Along with ensuring equitable access to the opportunities created by AI, greater efforts must be made to confront known, unknown, and as yet unknowable harms. Today, increasingly powerful systems are being deployed and used in the absence of new regulation, driven by the desire to deliver benefits as well as to make money. AI systems can discriminate by race or sex. Widespread use of current systems can threaten language diversity. New methods of disinformation and manipulation threaten political processes, including democratic ones. And a cat-and-mouse game is underway between malign and benign users of AI in the context of cybersecurity and cyber defence. 
 
Risks of AI 
 
25. We examined AI risks firstly from the perspective of technical characteristics of AI. Then we looked at risks through the lens of inappropriate use, including dual-use, and broader considerations of human-machine interaction. Finally, we looked at risks from the perspective of vulnerability. 
 
26. Some AI risks originate from the technical limitations of these systems. These range from harmful bias to various information hazards such as lack of accuracy and “hallucinations” or confabulations, which are known issues in generative AI. 
 
27. Other risks are more a product of humans than of AI. Deep fakes and hostile information campaigns are merely the latest examples of technologies being deployed for malevolent ends. They can pose serious risks to societal trust and democratic debate. 
 
28. Still others relate to human-machine interaction. At the individual level, this includes excessive trust in AI systems (automation bias) and potential de-skilling over time. At the societal level, it encompasses the impact on labour markets if large sections of the workforce are displaced, or on creativity if intellectual property rights are not protected. Societal shifts in the way we relate to each other as humans, as more interactions are mediated by AI, also cannot be ruled out. These may have unpredictable consequences for family life and for physical and emotional well-being. 
 
29. Another category of risk concerns larger safety issues, with ongoing debate over potential “red lines” for AI — whether in the context of autonomous weapon systems or the broader weaponization of AI. There is credible evidence about the increasing use of AI-enabled systems with autonomous functions on the battlefield. A new arms race might well be underway with consequences for global stability and the threshold of armed conflict. Autonomous targeting and harming of human beings by machines is one of those “red lines” that should not be crossed. In many jurisdictions, law-enforcement use of AI, in particular real-time biometric surveillance, has been identified as an unacceptable risk, violating the right to privacy. There is also concern about uncontrollable or uncontainable AI, including the possibility that it could pose an existential threat to humanity (even if there are debates over whether and how to assess such threats). 
 
30. Putting together a comprehensive list of AI risks for all time is a fool’s errand. Given the ubiquitous and rapidly evolving nature of AI and its use, we believe that it is more useful to look at risks from the perspective of vulnerable communities and the commons. We have attempted an initial categorization along these lines (Box 3), which will be developed further into a risk assessment framework, building on existing efforts. Risks will remain dynamic as the technology, its adoption, and its use evolve. This speaks to the need to keep risks under review through interdisciplinary science and evidence-based approaches. Adaptable risk management frameworks that can be tuned to the experience of different regions at different times will also be needed. The UN can provide a valuable space for such mutual learning and agile adaptation. ... 
 
31. There is not yet a consensus on how to assess or address these risks. Nevertheless, as the precautionary principle counsels in environmental dilemmas, scientific uncertainty about risks should not lead to governance paralysis. Achieving consensus and acting on it requires global cooperation and coordination, including through shared risk monitoring mechanisms. International organizations have decades of relevant experience with dual-use technologies, from chemical and biological weapons to nuclear energy, based in treaty law and other normative frameworks, that could be applied in addressing risks of AI. 
 
32. We also recognize the need to be proactive. There are important lessons in recent experiences with other globally scalable, high-impact technologies, such as social media. Even as diverse societies process the impact and implications of AI, the need for effective global governance to share concerns and coordinate responses is clear. 
 
33. We must identify, classify, and address AI risks, including building consensus on which risks are unacceptable and how they can be prevented or pre-empted. Alertness and horizon-scanning for unanticipated consequences from AI is also needed, as such systems are introduced in increasingly diverse and untested contexts. To achieve this, we must overcome technical, political, and social challenges. 
 
Challenges to be addressed 
 
34. Many AI systems are opaque, either because of their inherent complexity or commercial secrecy as to their inner workings. Researchers and governance bodies have difficulty in accessing information or fully interrogating proprietary datasets, models, and systems. Further, the science of AI is at an early stage, and we still do not fully understand how advanced AI systems behave. This lack of transparency, access, compute and other resources, and understanding hinders the identification of where risks come from, and where responsibility for managing those risks (or compensating for harm) should lie. 
 
35. Despite AI’s global reach, governance remains territorial and fragmented. National approaches to regulation that typically end at physical borders may lead to tension or conflict if AI does not respect those borders. Mapping, avoiding, and mitigating risks will require self-regulation, national regulation, as well as international governance efforts. There should be no accountability deficits. 

36. We also need to meet member states where they are and assist them with what they need in their own contexts given their specific constraints in terms of participation in and adherence to global AI governance, rather than telling them where they should be and what they should do based on a context to which they cannot relate. 
 
37. In addition to technical and political hurdles, these challenges exist in a broader social context. Digital technologies are impacting the ‘software’ of societies, challenging governance writ large. Moreover, the human and environmental costs of AI — hardware as well as software — must be accounted for throughout its lifecycle, as human lives and our environment are at the beginning and end of all AI-integrated processes. 
 
38. Besides misuse, we also note countervailing worries about missed uses — failing to take advantage of and share the benefits of AI technologies out of an excess of caution. Leveraging AI to improve access to education might raise concerns about young people’s data privacy and teacher agency. However, in a world where hundreds of millions of students do not have access to quality education resources, there may be downsides of not using technology to bridge the gap. Agreeing on and addressing such trade-offs will benefit from international governance mechanisms that enable us to share information, pool resources, and adopt common strategies. 
 
International Governance of AI 
 
The AI governance landscape 
 
39. There is, today, no shortage of guides, frameworks, and principles on AI governance. Documents have been drafted by the private sector and civil society, as well as by national, regional, and multilateral bodies, with varying degrees of impact. In technology terms, governance efforts have focused on data, models, and benchmarks or evaluations. Applications have also been a focus, especially where there are existing sectoral governance arrangements, say for health or dual-use technologies. These efforts can be anchored in specific governance arrangements, such as the EU AI Act or the U.S. Executive Order, and they can be associated with incentives for participation and compliance. Figure 1 presents a simplified schema for considering the emerging AI governance landscape, which the Advisory Body will develop further in the next phase of its work. ... 
 
40. Existing AI governance efforts have yielded similarities in language, such as the importance of fairness, accountability, and transparency. Yet there is no global alignment on implementation, either in terms of interoperability between jurisdictions or in terms of incentives for compliance within jurisdictions. Some favour binding rules while others prefer non-binding nudges. Trade-offs are debated, such as how to balance access and safety — or whether the focus should be on present day or potential future harms. Different models may also require different emphasis in governance. A lack of common standards and benchmarks among national and multinational risk management frameworks, as well as multiple definitions of AI used in such frameworks, have complicated the governance landscape for AI, notwithstanding the need for space for different regulatory approaches to co-exist reflecting the world’s social and cultural diversity. 
 
41. Meanwhile, technical advances in AI and its use continue accelerating, expanding the gap in understanding and capacity between technology companies developing AI, companies and other organizations using AI across various sectors and societal spaces, and those who would regulate its development, deployment, and use. 
 
42. The result is that, in many jurisdictions, AI governance can amount to self-policing by the developers, deployers, and users of AI systems themselves. Even assuming the good faith of these organizations and individuals, such a situation does not encourage a long-term view of risk or the inclusion of diverse stakeholders, especially those from the Global South. This must change. 
 
Toward principles and functions of international AI governance 
 
43. The Advisory Body is tasked with presenting options on the international governance of AI. We reviewed, among others, the functions performed by existing institutions of governance with a technological dimension, including FATF, FSB, IAEA, ICANN, ICAO, ILO, IMO, IPCC, ITU, SWIFT and UNOOSA. These organizations offer inspiration and examples of global governance and coordination. 
 
44. The range of stakeholders and potential applications presented by AI and their uses in a wide variety of contexts makes unsuitable an exact replication of any existing governance model. Nonetheless, lessons can be learned from examples of entities that have sought to: (a) build scientific consensus on risks, impact, and policy (IPCC); (b) establish global standards (ICAO, ITU, IMO), iterate and adapt them; (c) provide capacity building, mutual assurance and monitoring (IAEA, ICAO); (d) network and pool research resources (CERN); (e) engage diverse stakeholders (ILO, ICANN); (f) facilitate commercial flows and address systemic risks (SWIFT, FATF, FSB). 

45. Rather than proposing any single model for AI governance at this stage, the preliminary recommendations offered in this interim report focus on the principles that should guide the formation of new global governance institutions for AI and the broad functions such institutions would need to perform. The subfunctions listed in Table 1 below are informed by a survey of existing research on AI governance as well as a gap-analysis of nine current AI governance initiatives, namely, China’s interim measures for the management of AI services, the Council of Europe’s draft Convention on AI, the EU AI Act, the G7 Hiroshima Process, the Global Partnership on AI, the OECD AI Principles, the Partnership on AI and the Foundation Model Forum, the UK AI Safety Summit, and the U.S. Executive Order 14110.

The Preliminary Recommendations are:

A. Guiding Principles 
 
Guiding Principle 1. AI should be governed inclusively, by and for the benefit of all 

46. Despite its potential, many of the world’s peoples are not yet in a position to access and use AI in a manner that meaningfully improves their lives. Fully harnessing the potential of AI and enabling widespread participation in its development, deployment, and use is critical to driving sustainable solutions to global challenges. All citizens, including those in the Global South, should be able to create their own opportunities, harness them, and achieve prosperity through AI. All countries, big or small, must be able to participate in AI governance. 
 
47. Affirmative and corrective steps, including access and capacity building, will be needed to address the historical and structural exclusion of certain communities, for instance women and gender diverse actors, from the development, deployment, use, and governance of technology, and to turn digital divides into inclusive digital opportunities. 
 
Guiding Principle 2. AI must be governed in the public interest 
 
48. The development of AI systems is largely concentrated in the hands of technology companies. The refinement, deployment, and use of AI, however, will involve actors beyond the original developers (be they companies, small AI labs, other organizations, or countries), including deployers and users ranging from individuals to companies, organizations, and governments, each bringing a wide variety of incentives to their approaches. 
 
49. As shown by the experience with social media, AI products and services can scale rapidly across borders and categories of users. For this reason, as well as wider considerations of opportunities and risks, AI must be governed in the broader public interest. “Do no harm” is necessary, but not sufficient. A broader framing is needed for accountability of companies and other organisations that build, deploy and control AI as well as those that use AI across multiple sectors of the economy and society across the lifecycle of AI. This cannot rely on self-regulation alone: binding norms enforced by member states consistently are needed to ensure that public interests, rather than private interests, prevail. 
 
50. AI will be used by people and organizations, across multiple sectors, each with different use-cases and complexities and risks. Governance efforts must bear in mind public policy goals related to diversity, equity, inclusion, sustainability, societal and individual well-being, competitive markets, and healthy innovation ecosystems. They must also integrate implications of missed uses for economic and social development. Governance in this context should expand representation of diverse stakeholders, as well as offer greater clarity in delineating responsibilities between public and private sector actors. Governing in the public interest also implies investments in public technology, infrastructure, and the capacity of public officials. 
 
Guiding Principle 3. AI governance should be built in step with data governance and the promotion of data commons 
 
51. Data is critical for many major AI systems. Its governance and management in the public interest cannot be divorced from other components of AI governance (Figure 1). Regulatory frameworks and techno-legal arrangements that protect the privacy and security of personal data, consistent with applicable local or regional laws, while actively facilitating the use of such data, will be a critical complement to AI governance arrangements. The development of public data commons should also be encouraged, with particular attention to public data that is critical for helping solve societal challenges, including climate change, public health, economic development, capacity building, and crisis response, for use by multiple stakeholders. 
 
Guiding Principle 4. AI governance must be universal, networked and rooted in adaptive multi-stakeholder collaboration 
 
52. Any AI governance effort should prioritize universal buy-in by different member states and stakeholders. This is in addition to inclusive participation, in particular lowering entry barriers for previously excluded communities in the Global South (Guiding Principle 1). This is key for emerging AI regulations to be harmonized in ways that avoid accountability gaps. 
 
53. Effective governance should leverage existing institutions, which will have to review their current functions in light of the impact of AI. But this is not enough. New horizontal coordination and supervisory functions are required, and they should be entrusted to a new organizational structure. New and existing institutions could form nodes in a network of governance structures. There is clear momentum across diverse states for this to happen, as well as growing awareness in the private sector of the need for a well-coordinated and interoperable governance framework. Civil society concerns regarding the impact of AI on human rights point in a similar direction. 
 
54. Such an AI governance framework can draw on best practices and expertise from around the world. It must also be informed by understanding of different cultural ideologies driving AI development, deployment, and use. Innovative structures within this governance framework would be needed to engage the private sector, academia, and civil society alongside governments. Inspiration may be drawn from past efforts to engage the private sector in pursuit of public goods, including the ILO’s tripartite structure and the UN Global Compact. 
 
Guiding Principle 5. AI governance should be anchored in the UN Charter, International Human Rights Law, and other agreed international commitments such as the Sustainable Development Goals 
 
55. The UN has a unique normative and institutional role to play; aligning AI governance with foundational UN values — notably the UN Charter and its commitment to peace and security, human rights, and sustainable development — offers a robust foundation and compass. The UN is positioned to consider AI’s impact on a variety of global economic, social, health, security, and cultural conditions, all grounded in the need to maintain universal respect for, and enforcement of, human rights and the rule of law. Several UN agencies have already done important work on the impact of AI on fields from education to arms control. 
 
56. The Global Digital Compact and the Roadmap for Digital Cooperation are examples of multi-stakeholder deliberations towards a global governance framework of technologies including AI. Strong involvement of UN member states, empowering UN agencies and involving diverse stakeholders, will be vital to empowering and resourcing a global AI governance response. 
 
B. Institutional Functions 
 
57. We consider that to properly govern AI for humanity, an international governance regime for AI should carry out at least the following functions. These could be carried out by individual institutions or a network of institutions. ... 
 
58. Figure 2 summarizes our recommended institutional functions for international AI governance. At the global level, international organizations, governments, and the private sector would bear primary responsibility for these functions. Civil society, including academia and independent scientists, would play key roles in building evidence for policy, assessing impact, and holding key actors to account during implementation. Each set of functions would have different loci of responsibility at different layers of governance — private sector, government, and international organizations. We will further develop the concept of shared and differentiated responsibilities for multiple stakeholders at different layers of the governance stack in the next phase of our work. 
 
Institutional Function 1: Assess regularly the future directions and implications of AI 
 
59. There is, presently, no authoritative institutionalized function for independent, inclusive, multidisciplinary assessments on the future trajectory and implications of AI. A consensus on the direction and pace of AI technologies — and associated risks and opportunities — could be a resource for policymakers to draw on when developing domestic AI programmes to encourage innovation and manage risks. 
 
60. In a manner similar to the IPCC, a specialized AI knowledge and research function would involve an independent, expert-led process that unlocks scientific, evidence-based insights, say every six months, to inform policymakers about the future trajectory of AI development, deployment, and use (subfunctions 1-3 in Table 1). This should include arrangements with companies on access to information for the purposes of research and horizon-scanning. This function would help the public better understand AI, and drive consensus in the international community about the speed and impact of AI’s evolution. It would produce regular shared risk assessments, as well as establishing standards to measure the environmental and other impacts of AI. This Advisory Body is in a way the start of such an expert-led process, which would need to be properly resourced and institutionalised. 
 
61. The extent of AI’s negative externalities is not yet fully clear. The role of AI in disintermediating aspects of life that are core to human development could fundamentally change how individuals and communities function. As AI capabilities further advance, there is the potential for profound, structural adjustments to the way we live, work, and interact. A global analytical observatory function could coordinate research efforts on critical social impacts of AI, including its effects on labour, education, public health, peace and security, and geopolitical stability. Drawing on expertise and sharing knowledge from around the world, such a function could facilitate the emergence of best practices and common responses. 
 
Institutional Function 2: Reinforce interoperability of governance efforts emerging around the world and their grounding in international norms through a Global AI Governance Framework endorsed in a universal setting (UN) 
 
62. AI governance arrangements should be interoperable across jurisdictions and grounded in international norms, such as the Universal Declaration of Human Rights (Principle 4 above). They should leverage existing UN organizations and fora such as UNESCO and ITU for reinforcing interoperability of regulatory measures across jurisdictions. AI governance efforts could also be coordinated through a body that harmonises policies, builds common understandings, surfaces best practices, supports implementation and promotes peer-to-peer learning (subfunctions 7-10 in Table 1). A Global AI Governance Framework could support policymaking and guide implementation to avoid AI divides and governance gaps across public and private sectors, regions, and countries, as well as clarifying the principles and norms under which various organizations should operate. As part of this framework, special attention should be paid to capacity-building in both the private and public sectors, as well as to dissemination of knowledge and awareness across the world. Best practices such as human rights impact assessments by private and public sector developers of AI systems could be spread through such a framework, which may need an international agreement. 
 
Institutional Function 3: Develop and harmonize standards, safety, and risk management frameworks 
 
63. Several important initiatives to develop technical and normative standards, safety, and risk management frameworks for AI are underway, but there is a lack of global harmonization and alignment (subfunction 11 in Table 1). Because of its global membership, the UN can play a critical role in bringing states together, developing common socio-technical standards, and ensuring legal and technical interoperability. 
 
64. As an example, emerging AI safety institutes could be networked to reduce the risk of competing frameworks, fragmentation of standardization practices across jurisdictions, and a global patchwork with too many gaps. Care should, however, be taken not to overemphasise technical interoperability without parallel movement on other functions and norms. While there is greater awareness of socio-technical standards, more research, active involvement of civil society, and transdisciplinary cooperation are needed to develop such standards. 
 
65. Further, new global standards and indicators to measure and track the environmental impact of AI as well as its energy and natural resources consumption (i.e. electricity and water) could be defined to guide AI development and help achieve SDGs related to the protection of the environment. 
 
Institutional Function 4: Facilitate development, deployment, and use of AI for economic and societal benefit through international multi-stakeholder cooperation 
 
66. In addition to standards for preventing harm and misuse, developers and users, especially in the Global South, need critical enablers such as standards for data labelling and testing, data protection and exchange protocols that enable testing and deployment across borders for startups, as well as legal liability, dispute resolution, business development, and other supporting mechanisms. Existing legal, financial, and technical arrangements need to evolve to anticipate the complex adaptive AI systems of the future, and this will require taking into account lessons learnt from forums such as FATF, SWIFT and equivalent mechanisms. In addition, for most countries and regions, capacity development in the public sector is urgently required to facilitate responsible and beneficial use of AI and to enable participation in international multi-stakeholder cooperative frameworks to develop enablers for AI (subfunctions 4, 5 and 11 in Table 1). 
 
Institutional Function 5: Promote international collaboration on talent development, access to compute infrastructure, building of diverse high-quality datasets, responsible sharing of open-source models, and AI-enabled public goods for the SDGs 
 
67. A new mechanism (or mechanisms) is required to facilitate access to data, compute, and talent in order to develop, deploy, and use AI systems for the SDGs through upgraded local value chains, giving independent academic researchers, social entrepreneurs, and civil society access to the infrastructure and datasets needed to build their own models and to conduct research and evaluations. This may require networked resources and efforts to build common datasets and data commons for use in the public interest, to share open-source models responsibly, to pool computational resources, and to scale education and training. 
 
68. Pooling expert knowledge and resources analogous to CERN, EMBL, or ITER, as well as the technology diffusion functions of the IAEA, could provide a much-needed boost to the SDGs (subfunction 6 in Table 1). Creating incentives for private sector actors to share and make available tools for research and development can also complement such functions. Experts from the Global South are often invisible at global conferences on AI. This needs to change. 
 
69. Opening access to data and compute should also be accompanied by capacity-building, in particular in the Global South. To facilitate local creation, adoption, and context-specific tuning of models, it would be important to track positive uses of AI, incentivize and assess AI-enabled public goods. Private sector engagement would be crucial in leveraging AI for the SDGs. Analogous to commitments made by businesses under the Global Compact, this could include public promises by technology and other companies to develop, deploy, and use AI for the greater good. In the larger context of the Global Digital Compact, it could also include reporting on the ways in which AI is supporting the Sustainable Development Goals. 

Institutional Function 6: Monitor risks, report incidents, coordinate emergency response 
 
70. The borderless nature of AI tools, which can proliferate across the globe at the stroke of a key, poses new challenges to international security and global stability. AI models could lower the barriers for access to weapons of mass destruction. AI-enabled cyber tools increase the risk of attacks on critical infrastructure, and dual-use AI can be used to power lethal autonomous weapons, which could pose a risk to international humanitarian law and other norms. Bots can rapidly disseminate harmful information, with increasingly human characteristics, in a manner that can cause significant damage to markets and public institutions. The possibility of rogue AI escaping control and posing still larger risks cannot be ruled out. Given these challenges, capabilities must be created at a global level to monitor, report, and rapidly respond to systemic vulnerabilities and disruptions to international stability (subfunctions 13, 14 in Table 1). 
 
71. For example, a techno-prudential model, akin to the macro-prudential framework used to increase resilience in central banking, bringing together models developed at the national level, may help to similarly insulate against AI risks to global stability. Such a model must be grounded in human rights principles. 
 
72. Reporting frameworks can be inspired by existing practices of the IAEA for mutual reassurance on nuclear safety and nuclear security, as well as the WHO on disease surveillance. 
 
Institutional Function 7: Compliance and accountability based on norms 
 
73. We cannot rule out that legally binding norms and enforcement would be required at the global level. A regional effort for an AI treaty is already underway, and the issue of lethal autonomous weapons is under consideration in the framework of a treaty on conventional weapons. Non-binding norms could also play an important role, alone or in combination with binding norms. The UN cannot and should not seek to be the sole arbiter of AI governance. However, in certain fields, such as challenges to international security, it has unique legitimacy to elaborate norms (subfunction 12 in Table 1). It can also help ensure that there are no accountability gaps, for example by encouraging states to report in a manner analogous to reporting on the SDG targets and to the Universal Periodic Review, which facilitates monitoring, assessing, and reporting on human rights practices (subfunction 15 in Table 1). This would need to be done in a timely and accurate way. Inspired by existing institutions such as the WTO, dispute resolution can also be facilitated through global forums. 
 
74. At the same time, the legitimacy of any global governance institution depends on accountability of that institution itself. International governance efforts must demonstrate resolute transparency in objectives and processes and make all efforts to gain the trust of citizen stakeholders, including by preventing conflicts of interest. ...
 
Conclusion 
 
75. To the extent that AI impacts our lives — how we work and socialize, how we are educated and governed, how we interact with one another daily — it raises questions more fundamental than how to govern it. Such questions of what it means to be human in a fully digital and networked world go well beyond the scope of this Advisory Body. Yet they are implicated in the decisions we make today. For governance is not an end but a means, a set of mechanisms intended to exercise control or direction of something that has the potential for good or ill. 
 
76. We aspire to be both comprehensive in our assessment of the impact of AI on people’s lives and targeted in identifying the unique difference the UN can make. We hope it is apparent that we see real benefits of AI; equally, we are clear-eyed about its risks. 
 
77. The risks of inaction are also clear. We believe that global AI governance is essential to reap the significant opportunities and navigate the risks that this technology presents for every state, community, and individual today. And for the generations to come. 
 
78. To be effective, the international governance of AI must be guided by principles and implemented through clear functions. These global functions must add value, fill identified gaps, and enable interoperable action at regional, national, industry, and community levels. They must be performed in concert across international institutions, national and regional frameworks as well as the private sector. Our preliminary recommendations set out what we consider to be core principles and functions for any global AI governance framework. 
 
79. We have taken a ‘form follows function’ approach and do not, at this stage, propose any single model for AI governance. Ultimately, however, AI governance must deliver tangible benefits and safeguards to people and societies. An effective global governance framework must bridge the gap between principles and practical impact. In the next phase of our work, we will explore options for institutional forms for global AI governance, building on the perspectives of diverse stakeholders worldwide. 
 
Next Steps 
 
80. Rather than proposing any single model for AI governance at this stage, the foregoing preliminary recommendations focus on the principles and functions to which any such regime must aspire. 
 
81. Over the coming months we will consult — individually and in groups — with diverse stakeholders around the world. This includes participation at events tasked with discussing the issues in this report as well as engagement with governments, the private sector, civil society, and research and technical communities. We will also pursue our research, including on risk assessment methodologies and governance interoperability. Case studies will be developed to help ground the issues identified in the report in specific contexts. We also intend to dive deep into a few areas, including Open-Source, AI and the financial sector, standard setting, intellectual property, human rights, and the future of work, by leveraging existing efforts and institutions. ...
 

22 December 2023

Nousferatu and Academic Brochuremanship

'“Nousferatu”: Are corporate consultants extracting the lifeblood from universities?' by Deb Verhoeven and Ben Eltham in (2023) Review of Education, Pedagogy, and Cultural Studies comments 

Universities and management consultants are locked in a danse macabre. We turn to the vampire genre to elaborate on the relationship of consulting companies to the university sector, focusing on the University of Alberta in Canada and Monash University in Australia. .... The essay argues that consultants and universities are engaged in a mutually dependent relationship designed to sustain each other at the expense of the public. 

As a heuristic tool for describing and processing the nature of change in university workplaces, the horror genre leaps to hand. There is a rich literature on the “zombiefication” of academics and academic cultures (Gora & Whelan, 2010; Katz, 2016; Ryan, 2012; Whelan et al., 2013). For Whelan et al. (2013), the zombie serves to describe the lifelessness of contemporary universities that have emerged, staggering, from apocalyptic refinancing and restructuring. They elaborate the figure of the zombie as a sign of “what it means to occupy the field of contemporary higher education” (Whelan et al., 2013, p. 3). The scenes they depict draw feverishly from George Romero’s classic horror movie, Night of the Living Dead (1968), in which universities are the ramshackle farmhouses harboring haggard academic survivors against marauding waves of voracious bureaucratic zombies … or alternatively, in which academics are unwillingly “infected” by institutionalized zombie processes, designed to devitalize and demonize (Whelan et al., 2013, p. 5). 

For Suzanne Ryan, the zombie motif is important, not as an explanation of desperate if doomed pushback against the prospect of corporate contagion, but for making sense of the compliant acceptance of institutional mutation by academic employees. She asks, “Does our active collusion in undermining our own interests indicate the depth of zombiefication to which we have sunk, or is it simply a symptom of a stressed and shrinking workforce?” (Ryan, 2012, p. 6). Ryan answers her own conundrum by suggesting that zombiefication is an individual tactic of withdrawal—a way for academics to survive the cognitive dissonance between their values and their workplaces, or a temporary psychological shelter from the storm of neo-liberalism. Academics, according to Ryan, neither accept nor resist, but are suspended in a lifeless stasis, consoling themselves that one day they might return with a vengeance. As Ken Gelder (2013) notes, these readings are themselves subject to a kind of zombiefication at the level of rhetoric. For Louise Katz (2016), the adoption of “Zombilingo” within universities is as deadening as it is dominant. Katz acknowledges the mutual dependence of Zombilingo and the vampiric rhetoric of corporatization, or “Corpspeak,” in the academy:

Although Corpspeak and Zombilingo are closely related, there are important differences. Corpspeak consists of linguistic imports into education from the business imaginary… Zombilingo, on the other hand, exports the vocabulary of critical or creative thinkers into the business realm; these are then sold back to the academy having undergone a kind of psychic surgery. (Katz, 2016, p. 10) 

Whelan et al. (2013) suggest that the gothic towers of colleges past still cast a shadow over the bandaged Franken-universities that have replaced them, in the form of a haunted longing: 

The ‘ivory tower’ model of the university, along with most of the other traditional archetypes of the institution, is … an undead, lingering ghoul. Given the changes that have radically reconstituted the sector over the last 30 years, these traditional imaginings are indeed dead, and yet bizarrely still alive. (Whelan et al., 2013, p. 5) 

We question the implicit proposal of a pre-history of university innocence corrupted by brutal exterior forces into unrecognizable monstrosities. Rather than see universities or academics as victims of involuntary transformation who have retreated into sordid states of survival, we might wonder at the ways in which universities, and many managerial academics, have actively participated in the systems that now characterize these workplaces. After all, universities have a long history of collaboration with, and indeed instantiation by, the forces of capital and extraction. In the US, many of the so-called “land grant” universities were founded on land expropriated from First Nations and embraced principles of white settlement and colonization, as well as rampant real estate development (Ford, 2002; Sorber, 2019; Stein, 2020). Many European universities have likewise benefited from colonial exploitation and slavery. In 2018, for instance, Bristol University estimated that as much as 89% of the funds used to found the institution were derived from donations by wealthy traders with links to tobacco and chocolate cultivated by slave labor in the American south and the Caribbean (University of Bristol, 2022). Australian universities have their own history of “settler colonial epistemic violence” (Bennett et al., 2023). 

To conceptualize a less simplistic narrative of imperiled contemporary universities, we turn instead to a different horror tradition: the seductive allure of vampire fiction. The zombie differs from the vampire in its mindless lack of agency and its decrepitude. Zombie narratives are stories about the evacuation of content. In vampire stories, on the other hand, content hemorrhages and contaminates. Vampire narratives are stories explicitly about cultural interpretation and the constancy of revision (see Verhoeven, 1993). Vampires mutate and invoke mutation. The meanings around them are also subject to transmutation. As noted vampire scholar Nina Auerbach (1995) observes, unlike zombies, who are without individual personalities, “There is no such creature as ‘The Vampire’; there are only vampires” (p. 5). Vampires flourish across multiple formats—films, TV shows, novels, music, poetry—and adapt in each of these different forms. Their tastes and talents shift according to different locations and historical circumstances. Vampires are familiar to us, not necessarily because they proliferate in so many cultural formats but because they encapsulate our own, situated desires and anxieties. They are both preternatural and yet contrived by intimate relationships. The vampires we conjure are the vampires we simultaneously want (to be) and want not. 

The taboo against the vampire, then, is also a proscription against the recognition that the desire for the other might also be a desire to become the other (and vice versa) …[V]ampiric desire is both self-reproducing and incorporating of its object-choice. You are what you (rep)eat. (Verhoeven, 1993, p. 203) 

This aporetic impulse at the heart of vampire tropes may help us understand the febrile eagerness of university executives, and sometimes even the layers of academics beneath them, to countenance the predatory underside of institutional ideation. For Auerbach, “Vampirism springs not only from paranoia, xenophobia, or immortal longings, but also from generosity and shared enthusiasm” (Auerbach, 1995, p. vii). In this sense, vampires work together to reestablish the systems they menace, and this makes them especially useful for understanding the mutually beneficial role of consultants in the processes of corporatization of public institutions like universities. As Brunsson and Olsen (1993) have written, the goal of management consultancies is almost always identified as change, but the most obvious effect is in fact affirmation of the status quo. “Change” becomes simultaneously evacuated and rich with possibility. University managers, emboldened by strategic planning consultants, extol their newfound prowess at “agility” and “transcending boundaries,” their appreciation for the sublime wonder of untrammeled “expansion” and “bold transformation,” their rapacious appetite for “inclusion,” giddy with excitement for tomorrow and the vertiginous thrills of ever-deepening “impact” and ever-rising “rank,” always moving inexorably forward. Their snappy missions and glossy strategic plans almost without exception recall the postwar scientistic triumphalism of Vannevar Bush’s “endless frontier” (Bush, 1945). 

Our own respective universities are cases in point. The University of Alberta’s 2023–2033 Draft Strategic Plan abandons gravity, and launches like the opening credits for a Star Trek: Enterprise episode:

Our mission is to advance education and research to the benefit of Alberta and beyond. We prepare new generations of thinkers, builders and leaders who will help our province thrive into the future… Over the past three years, the University of Alberta has undertaken a bold transformation, building a new academic structure that transcends traditional disciplinary boundaries. We stand ready for the future: to accelerate collaboration across disciplines, focused on collective priorities; to educate students to solve problems and collaborate for real-world impact; to embrace partnership, collaboration, and community like never before. (University of Alberta, 2023a, p. 4)

Monash University pitches a similarly expansive vision:

“In every age, people grapple with realizing hopes, surmounting testing circumstances and quelling threats. Universities have a role in understanding and providing ideas and solutions to shape and respond to the challenges they, their partners and communities experience” (Monash University, 2021, p. 6). 

Universities often reserve their widest overreach for research promotion:

“Think enterprising and you think Monash. We have a long and proud distinguished history of ground-breaking translational research, that together with our partners, has changed the world” (Monash University, n.d.).

These pithy mission-ary statements are intended to illustrate imagined points-of-difference between universities in an intensely competitive higher education “market,” and consequently they have most meaning to other proximate universities. Take, for example, the pyrrhic battle-of-the-brands being slugged out between the rival universities of Alberta, which has all but reached the rocky point of peak-provincial. The University of Alberta landed the first blow with its forward-facing, history-effacing suite of strategic planning documents gathered under the catchphrase “U of A for Tomorrow” (University of Alberta, 2023b). In response, the University of Calgary counter-punched with the derivative, yet more insatiably frontward strategic plan titled “UCalgary: Ahead of Tomorrow” (University of Calgary, 2023). This motto, which seems to stem from overactive use of a thesaurus, doesn’t bear nuanced semiotic analysis, being either oxymoronic or palpably impossible. Nothing, however, quite matches the inadvertent repercussions of Deakin University’s 2012 rebranding, at breathtaking expense, as “Worldly” (hint: it doesn’t actually mean global). 

Such slogans and sentiments do not spring up unbidden in the minds of university managers. Consultants are assiduous tradespeople of these brazen institutional imaginaries. Just as marketers hone polished brochures and clickable social media tiles, consultants assist university managements in the construction of a vision of the university as contemporary, competent, and efficient—the very model of a modern major institution. Mazzucato and Collington (2023), drawing on a phrase coined in the 1960s by NASA procurement manager Ernest Brackett, call this a type of imaginary “brochuremanship” (p. 166). In this respect, Marginson’s (2000) telling phrase, the “enterprise university,” has never been more appropriate than in the context of management consultants advising on metrics, cost controls, and organizational efficiencies. Consultants burnish the university’s self-image, providing talking points for leaders before their “change management” video addresses (Parker, 2002), and assisting the comms team with their packaging of organizational upheaval, in a process that Alvesson (2013) has compared to the “image” construction work first explored by Boorstin (1971) and to the simulacra of Baudrillard (1994).

Digital Humanism

'What is digital humanism? A conceptual analysis and an argument for a more critical and political digital (post)humanism' by Mark Coeckelbergh in (2024) 17 Journal of Responsible Technology offers a critical, posthumanist, and political version of 'digital humanism'.

 In this paper I take up the role of a critical friend of digital humanism and further unpack and analyse its concept and vision. I present my summary and analysis of what digital humanism is currently about and can be about in the form of 6 components. Then I offer some potential objections and challenges in order to bring out some of the less highlighted aspects in the current visions, to stimulate further discussion about controversial issues related to digital humanism such as specific views of human-technology relations and anthropocentrism, and to argue for a more critical, more posthumanist, and more political version of digital humanism. 

Let me first note that digital humanism should not be confused with digital humanities, which refers to the application of digital technologies to the study of humanities. While the latter can be part of digital humanist practice, or can be done in a way that aligns with digital humanism's aims, one needs to distinguish between a specific range of methods of research and its impact on the humanities (digital humanities) and an intellectual concept and (potentially) political movement that seeks to transform technological practices and societal institutions (digital humanism). Here I focus on the latter. 

In order to define and clarify the concept of digital humanism, I propose to distinguish between at least 6 components: 

The first component is about the image of the human in the digital age. Digital humanists see in the current technological age a tendency to view and treat humans as machines. They assert that humans are not machines and defend the human, or at least humanistic definitions of the human, against what they see as the computerization and digitalization of the human. 

What has been called a ‘negative anthropology’ (Coeckelbergh, 2019b, p. 365) has a long history going back to at least Descartes: modern humanist thinkers see humans as non-machines. The human is thus defined in opposition to machines (hence a negative anthropological definition). Today, and more broadly, the human is defined in opposition to digital technologies. For example, it is said that humans are not robots, not artificial intelligence(s), and so on, and that they should not be treated as such. 

The second component is the idea that humans should keep control over digital technology. It is argued that they lost this control, or may lose this control soon, when digital technologies take over tasks from humans through automation. Again, the human has to be defended, this time as the one who should be in control of technology. 

A typical discussion in this area concerns AI, which is often seen as out of control, or at least at serious risk of getting out of control. To bring AI under human control is then seen as a crucial digital humanist goal. 

The third component is that digital technology should be aligned with human goals and values. In this sense, a human-centric ethics is called for. More specifically, the (arguably stronger or more radical) view is that human(istic) values should be implemented in the development of the technology, at an early stage. Instead of waiting for digital tech to be implemented and then merely regulating afterwards in response to its effects, the idea is that we should already intervene in the development process: digital technologies should be designed in a way that aligns them with human(istic) values and principles such as human dignity, democracy, inclusivity, fairness, accountability, human rights, and so on. 

This idea is not new – consider for instance concepts such as value-sensitive design (Friedman et al., 2017) and responsible innovation (Owen et al., 2013) – but is now defended under the banner of digital humanism with an explicit normative message: we need to make sure that technology is linked to humanistic values, that we do not destroy these values in the name of digital technological progress and innovation, but instead promote them in and through digital technologies. 

The fourth component is about interdisciplinarity and education. In order to safeguard the human(istic) and bring about these changes, but also to move towards a more holistic understanding of humanity's problems, it is important that technical and scientific experts work together with humanities and social sciences scholars and vice versa. Education should also be changed in this direction, for example via curricula that bring technology ethics to developers, engineers, and (data) scientists. 

In addition, it is important to educate humanities people about new and emerging digital technologies. Often they are unaware of the deeper influence digital technologies have on our societies and indeed on our thinking. More generally, education about digital technologies is important for anyone living in and through the current digital transformation. Yet for humanists, the reason for giving everyone a basic education in digital technologies is not so much an economic one, as is emphasized, for instance, in EU policy (e.g., European Commission, 2020); the point is not just to give people knowledge and skills to find a job but to contribute to their formation and development as humans, which should not be one-sided.

The fifth component is about community. Classical humanism in the Renaissance and the Enlightenment flourished because of the communication and community that linked thinkers, writers, and artists. This also often led to friendships and political action. While digital humanists currently place less emphasis on this community component, at least in their definitions and visions, they do actively engage in community building in order to achieve the goals of the other components. And just as in the old days of humanism, technologies and media play a key role in this. Like Renaissance and Enlightenment thinkers, who used the printing press and the novel as tools of communication and community building, digital humanists use new digital technologies such as digital social media not only for their research and writing but also for building a local and global community of digital humanists. They also still write (open) letters and manifestos, this time remediated by digital technologies such as e-mail and Twitter (now X). 

Finally, digital humanism has a clear political component, which emphasizes the systemic aspects and calls for more or less radical political reforms to bring about the other components. For example, the Vienna Manifesto on Digital Humanism starts with the Tim Berners-Lee quote ‘The system is failing’ and calls for action including regulation (Werthner et al., 2022, xi). And as mentioned, Fuchs questions the capitalist system. Digital humanism is thus already political. Digital humanists also defend a specific form of societal and political organization: democracy. For example, the first principle of the Vienna Manifesto reads: ‘Digital technologies should be designed to promote democracy and inclusion’. 

This political dimension should not surprise us, since digital technologies, like all technologies, are already political themselves: they are not only used for political purposes, but also have political consequences and take shape within specific political and social constellations. For example, AI can be seen as political (Coeckelbergh, 2022). Any movement that seeks to change technology, therefore, is by definition a political movement. But in addition to this deeper political aspect, digital humanists also have explicit political aims, which opens up the possibility of engagement with not only tech policy but also wider political and societal issues.