The interim report of the UN multi-stakeholder High-level Advisory Body on Artificial Intelligence on governing AI for humanity states:
1. Artificial intelligence (AI) increasingly affects us all. Though AI has been around for years, capabilities once hardly imaginable have been emerging at a rapid, unprecedented pace. AI offers extraordinary potential for good — from scientific discoveries that expand the bounds of human knowledge to tools that optimize finite resources and assist us in everyday tasks. It could be a game changer in the transition to a greener future, or help developing countries transform public health and leapfrog challenges of last mile access in education. Developed countries with ageing populations could use it to tackle labour shortages.
2. Yet, there are also risks. AI can reinforce biases or expand surveillance; automated decision-making can blur accountability of public officials even as AI-enhanced disinformation threatens the process of electing them. The speed, autonomy, and opacity of AI systems challenge traditional models of regulation, even as ever more powerful systems are developed, deployed and used.
3. The opportunities and the risks of AI for people and society are evident and have seized public interest. They also manifest globally, with geostrategic tensions over access to the data, compute, and talent that fuel AI, with talk of a new AI arms race. Nor are the benefits and risks equitably distributed. There is a real danger, even if humanity harnesses only the positive aspects of AI, that those will be limited to a club of the rich.
4. This technology cries out for governance, not merely to address the challenges and risks
but to ensure we harness its potential in ways that leave no one behind. A key measure of our success is the extent to which AI technologies help achieve the Sustainable Development Goals (SDGs). As an example, Box 1 illustrates AI’s potential in tackling climate change and its impact (SDG 13) ....
5. The High-level Advisory Body on AI was formed to analyse and advance recommendations for the international governance of AI. We interpret this mandate not merely as considering how to govern AI today, but also how to prepare our governance institutions for an environment in which the pace of change is only going to increase. AI governance must therefore reflect qualities of the technology itself and its rapidly evolving uses — agile, networked, flexible — as well as being empowering and inclusive, for the benefit of all humanity.
6. Our work does not take place in a normative or institutional vacuum. The UN is guided by rules and principles to which all of its member states commit. These shared and codified norms and values are the lodestar for all of its work, including AI governance. Norms including commitments to the UN Charter, the Universal Declaration of Human Rights, and international law including environmental law and international humanitarian law, are applicable to AI. Institutions created in support of multilateral objectives from peace and security to sustainable development have roles to play in cultivating the opportunities while safeguarding against risks.
7. Nonetheless, we share the sense of urgency held by complementary governance initiatives on AI, including those by states as well as regional and intergovernmental processes such as the EU, the G7, the G20, UNESCO, and the OECD, among others. A more cohesive, inclusive, participatory, and coordinated approach is needed, however, as many communities — particularly in the Global South or Global Majority — have been largely missing from these discussions, despite the potential impact on their lives.
8. The United Nations holds no panacea for the governance of AI. But its unique legitimacy as a body with universal membership founded on the UN Charter, agreed universally, as well as its commitment to embracing the diversity of all peoples of the world, offer a pivotal node for sharing knowledge, agreeing on norms and principles, and ensuring good governance and accountability. Within the UN system, plans for the Global Digital Compact and the Summit of the Future in September 2024 offer a pathway to timely action.
9. The Advisory Body comprises individuals diverse by geography and gender, discipline and age; it draws expertise from government, civil society, the private sector, and academia. Intense and wide-ranging discussions yielded broad agreement that there is a global governance deficit in respect of AI and that the UN has a role to play.
10. In this report, we first identify opportunities and enablers that can help harness the potential benefits of AI for humanity. Second, we highlight risks and challenges that AI presents now and in the foreseeable future. Third, we argue that addressing the global governance deficit requires clear principles, as well as novel functions and institutional arrangements to meet the moment. The report concludes with preliminary recommendations and next steps, which will be elaborated in our final report by August 2024.
11. Though we are confident of the broad direction, we know that we do not take this journey alone. We look forward to consulting widely on next steps to ensure that more voices and views are included, and that AI serves our common good.
The Global Governance Deficit
12. Though AI is transforming our world, its development and rewards are currently concentrated among a small number of private sector actors in an even smaller number of states. The harms are also unevenly spread. Global governance with equal participation of all member states is needed to make resources accessible, make representation and oversight mechanisms broadly inclusive, ensure accountability for harms, and ensure that geopolitical competition does not drive irresponsible AI or inhibit responsible governance.
13. The United Nations lies at the heart of the rules-based international order. Its legitimacy comes from being a truly global forum founded on international law, in the service of peace and security, human rights, and sustainable development. We believe that this
offers the institutional and normative foundation for collective action in global governance of AI. Apart from considerations of equity, access, and prevention of harm, the very nature of the technology itself — AI systems being transboundary in structure, function, application, and use by a wide range of actors — necessitates a global approach.
14. Pieces of this puzzle are being filled by self-regulatory initiatives, national and regional laws, and the work of multilateral forums. Yet, gaps remain and the challenge is clear: a global governance framework is needed for this rapidly developing suite of technologies and its use by various actors, be they the developers or users of the technology. AI presents distinctly global challenges and opportunities that the UN is uniquely positioned to address, turning a patchwork of evolving initiatives into a coherent, interoperable whole, grounded in universal values agreed by its member states, adaptable across contexts.
15. The next three sections outline roles an institution or a network of institutions anchored in the UN’s universal framework could play in expanding the benefits of AI and mitigating its risks, as well as the principles and functions that will best achieve these ends.
Opportunities and Enablers
16. AI has the potential to transform access to knowledge and increase efficiency around the world. A new generation of innovators is pushing the frontiers of AI science and engineering. AI is increasing productivity and innovation in sectors from healthcare to agriculture, in both advanced and developing economies. Alongside such growth are questions about which enablers are required to ensure benefits are spread equitably and safely across humanity, and that disruptive impacts, including on jobs, are addressed and managed. An important question for policy makers is how to grow successful AI ecosystems around the world while holding established and emerging players accountable.
Sectoral opportunities
AI will have a greater impact in some sectors than in others. Among the most promising are agriculture and food security, health, education, protection of the environment, resilience to natural disasters, and combating climate change. For example, AI has been used to create early-warning systems for floods, now covering over 80 countries, as well as for wildfires and food insecurity. AI is being used to monitor endangered species (e.g., dolphins, whales) and to optimize agricultural practices. Within each field, there are myriad possibilities.
AI is broadening access to quality care, for example in the maternal health care space in Sub-Saharan Africa. Similarly, possibilities exist with respect to environmental problems, making education more accessible, helping ease poverty and hunger, and making cities safer.
Scientific opportunities
AI is transforming the way in which scientific research is performed and is expanding the frontier of scientific advancement, including by accelerating molecular and genomic research. AI systems show special promise for accelerating the work of scientists across many disciplines and a potential paradigm shift in the way science is practised, from helping explore new discovery spaces to automating experimentation at scale. For example, AI-powered tools that predict protein structures are being used by over a million researchers for drug discovery and to advance understanding of diseases like tuberculosis, as well as many previously neglected diseases. In the healthcare space, AI is powering diagnostic tools to help doctors with more timely detection of various types of cancers and eye-related diseases, thereby saving lives. In the energy space, AI is playing a role in optimizing energy systems and advancing the transition to renewable energies. For example, AI has been used to boost the value of wind energy, control tokamak plasmas in nuclear fusion, and enable carbon capture. There is scope for the UN to encourage progress in AI-enabled science by focusing attention on questions worth solving for the global good.
Public sector opportunities
Crucially, AI may drive progress in areas where market forces alone have traditionally failed. These range from extreme weather forecasting and monitoring biodiversity, to expanding educational opportunities or access to quality healthcare, and optimizing energy systems. Governments and the public sector can improve services for citizens and strengthen delivery for vulnerable communities by leveraging AI for social good.
Opportunities for the UN to harness AI
Finally, the use of AI can accelerate progress towards achieving the Sustainable Development Goals and enhance the role and effectiveness of the UN in promoting sustainable development, human rights, and peace and security. For example, the UN can use AI to monitor the development of crisis situations in different parts of the world, including human rights abuses, or to measure progress on the SDGs. While observers have noted the potential of AI to contribute to many of the 17 SDGs, they have also noted significant barriers to fully leveraging that potential. The UN and other international organizations have started to build promising AI use cases and demonstrations in areas such as prediction of food insecurity, managing relief operations, and weather forecasting.
Key enablers for harnessing AI for humanity
17. The development of AI is now driven by data, compute, and talent, sometimes supplemented by manual labelling labour. Currently, only well-resourced member states
and large technology companies have access to the first three, leading to a concentration of influence. In addition to global shortages of crucial hardware such as GPUs, there is also a dearth of top technical talent in the field of AI. It has been suggested that open model development may alter this dynamic, though the impact and safety of open models are still being analysed and debated.
18. The AI opportunity arrives at a difficult time, especially for the Global South. An “AI divide” lurks within a larger digital and developmental divide. According to ITU estimates for 2023, more than 2.6 billion people still lack access to the Internet. The basic foundations of a digital economy — broadband access, affordable devices and data, digital literacy, and reliable, affordable electricity — are often absent. Fiscal space is constrained and the international environment for trade and investment flows is challenging. Critical investments will be needed in basic infrastructure such as broadband and electricity, without which the ability to participate in the development and use of AI will be severely limited. Even outside the Global South, taking advantage of AI will require efforts to develop local AI ecosystems, the ability to train local models on local data, and the fine-tuning of models developed elsewhere to suit local circumstances and purposes.
19. Access and benefits must go hand in hand. Entrepreneurs in regions lagging in AI capacity require and deserve the ability to create their own AI solutions. This requires national investments in talent, data, and compute resources, as well as national regulatory and procurement capacity. Domestic efforts should be supplemented by international assistance and cooperation, not only among governments but also private sector players. Rallying scientists to solve societal challenges could be a key enabler for harnessing AI’s potential for humanity. Open-source approaches and the sharing of data and models could play an important role in spreading the benefits of AI and developing beneficial data and AI value chains across borders.
20. Enablers (‘common rails’) for AI development, deployment and use would need to be balanced with ‘guard rails’ to manage impact on societies and communities. A litmus test will be the extent to which AI governance efforts yield human augmentation rather than human replacement or alienation as the outcome. Some AI development relies on cheap and exploitable labour in the Global South. Even in the Global North, there are questions related to valuing artistic expression, intellectual property, and the dignity of human labour. Equitable access to these technologies and relevant skills to make full use of them are needed if we are to avoid “AI divides” within and across nations.
Governance as a key enabler
21. AI can and should be deployed in support of the Sustainable Development Goals. But doing so cannot rely on current market practices alone, nor should it rely on the benevolence of a handful of technology companies. Any governance framework should shape incentives globally to promote these larger and more inclusive objectives and help identify and address trade-offs.
22. Comparisons with other sectors offer potential lessons. Mechanisms such as Gavi, the Vaccine Alliance, may suggest short-term examples for ensuring that the benefits are shared. Repositories of AI models that can be adapted to different contexts could be the equivalent of generic medicines to expand access, in ways that do not promote AI concentration or consolidation.
23. Some of these societally beneficial aspirations may be realized by advances in AI research itself; others may be addressed by leveraging novel market mechanisms to level the playing field, or by incentivizing actors to reach all communities and enable benefits to be accessible to all. But many will not. Ensuring that AI is deployed for the common good, and that its benefits are distributed equitably, will require governmental and intergovernmental action with innovative ways to incentivize participation from private sector, academia and civil society. A more lasting solution is to enable federated access to the fundamentals of data, compute, and talent that power AI — as well as ICT infrastructure and electricity, where needed. Here, the European Organization for Nuclear Research (CERN), which operates the largest particle physics laboratory in the world, and similar international scientific collaborations may offer useful lessons. A ‘distributed CERN’ reimagined for AI, networked across diverse states and regions, could expand opportunities for greater involvement. Other examples of open science relevant to AI include the European Molecular Biology Laboratory (EMBL) in biology or ITER, the International Thermonuclear Experimental Reactor.
Risks and Challenges
24. Along with ensuring equitable access to the opportunities created by AI, greater efforts must be made to confront known, unknown, and as yet unknowable harms. Today, increasingly powerful systems are being deployed and used in the absence of new regulation, driven by the desire to deliver benefits as well as to make money. AI systems can discriminate by race or sex. Widespread use of current systems can threaten language diversity. New methods of disinformation and manipulation threaten political processes, including democratic ones. And a cat-and-mouse game is underway between malign and benign users of AI in the context of cybersecurity and cyber defence.
Risks of AI
25. We examined AI risks first from the perspective of the technical characteristics of AI. Then we looked at risks through the lens of inappropriate use, including dual use, and broader considerations of human-machine interaction. Finally, we looked at risks from the perspective of vulnerability.
26. Some AI risks originate from the technical limitations of these systems. These range from harmful bias to various information hazards such as lack of accuracy and “hallucinations” or confabulations, which are known issues in generative AI.
27. Other risks are more a product of humans than of AI. Deep fakes and hostile information campaigns are merely the latest examples of technologies being deployed for malevolent ends. They can pose serious risks to societal trust and democratic debate.
28. Still others relate to human-machine interaction. At the individual level, this includes excessive trust in AI systems (automation bias) and potential de-skilling over time. At the societal level, it encompasses the impact on labour markets if large sections of the workforce are displaced, or on creativity if intellectual property rights are not protected. Societal shifts in the way we relate to each other as humans, as more interactions are mediated by AI, also cannot be ruled out. These may have unpredictable consequences for family life and for physical and emotional well-being.
29. Another category of risk concerns larger safety issues, with ongoing debate over potential “red lines” for AI — whether in the context of autonomous weapon systems or the broader weaponization of AI. There is credible evidence about the increasing use of AI-enabled systems with autonomous functions on the battlefield. A new arms race might well be underway with consequences for global stability and the threshold of armed conflict. Autonomous targeting and harming of human beings by machines is one of those “red lines” that should not be crossed. In many jurisdictions, law-enforcement use of AI, in particular real-time biometric surveillance, has been identified as an unacceptable risk, violating the right to privacy. There is also concern about uncontrollable or uncontainable AI, including the possibility that it could pose an existential threat to humanity (even if there are debates over whether and how to assess such threats).
30. Putting together a comprehensive list of AI risks for all time is a fool’s errand. Given the ubiquitous and rapidly evolving nature of AI and its use, we believe that it is more useful to look at risks from the perspective of vulnerable communities and the commons. We have attempted an initial categorization along these lines (Box 3), which will be developed further into a risk assessment framework, building on existing efforts. Risks will remain dynamic as the technology, its adoption, and its use evolve. This speaks to the need to keep risks under review through interdisciplinary science and evidence-based approaches. Adaptable risk management frameworks that can be tuned to the experience of different regions at different times will also be needed. The UN can provide a valuable space for such mutual learning and agile adaptation. ...
31. There is not yet a consensus on how to assess or address these risks. Nevertheless, as the precautionary principle holds in environmental matters, scientific uncertainty about risks should not lead to governance paralysis. Achieving consensus and acting on it requires global cooperation and coordination, including through shared risk monitoring mechanisms. International organizations have decades of relevant experience with dual-use technologies, from chemical and biological weapons to nuclear energy, based in treaty law and other normative frameworks, that could be applied in addressing the risks of AI.
32. We also recognize the need to be proactive. There are important lessons in recent experiences with other globally scalable, high-impact technologies, such as social media. Even as diverse societies process the impact and implications of AI, the need for effective global governance to share concerns and coordinate responses is clear.
33. We must identify, classify, and address AI risks, including building consensus on which risks are unacceptable and how they can be prevented or pre-empted. Alertness and horizon-scanning for unanticipated consequences from AI is also needed, as such systems are introduced in increasingly diverse and untested contexts.
To achieve this, we must overcome technical, political, and social challenges.
Challenges to be addressed
34. Many AI systems are opaque, either because of their inherent complexity or commercial secrecy as to their inner workings. Researchers and governance bodies have difficulty in accessing information or fully interrogating proprietary datasets, models, and systems. Further, the science of AI is at an early stage, and we still do not fully understand how advanced AI systems behave. This lack of transparency, access, compute and other resources, and understanding hinders the identification of where risks come from, and where responsibility for managing those risks (or compensating for harm) should lie.
35. Despite AI’s global reach, governance remains territorial and fragmented. National approaches to regulation that typically end at physical borders may lead to tension or conflict if AI does not respect those borders. Mapping, avoiding, and mitigating risks will require self-regulation, national regulation, as well as international governance efforts. There should be no accountability deficits.
36. We also need to meet member states where they are, assisting them with what they need in their own contexts, given their specific constraints on participation in and adherence to global AI governance, rather than telling them where they should be and what they should do based on a context to which they cannot relate.
37. In addition to technical and political hurdles, these challenges exist in a broader social context. Digital technologies are impacting the ‘software’ of societies, challenging governance writ large. Moreover, the human and environmental costs of AI — hardware as well as software — must be accounted for throughout its life cycle, as human lives and our environment are at the beginning and end of all AI-integrated processes.
38. Besides misuse, we also note countervailing worries about missed uses — failing to take advantage of and share the benefits of AI technologies out of an excess of caution. Leveraging AI to improve access to education might raise concerns about young people’s data privacy and teacher agency. However, in a world where hundreds of millions of students do not have access to quality education resources, there may be downsides to not using technology to bridge the gap. Agreeing on and addressing such trade-offs will benefit from international governance mechanisms that enable us to share information, pool resources, and adopt common strategies.
International Governance of AI
The AI governance landscape
39. There is, today, no shortage of guides, frameworks, and principles on AI governance. Documents have been drafted by the private sector and civil society, as well as by national, regional, and multilateral bodies, with varying degrees of impact. In technology terms, governance efforts have been focused on data, models, and benchmarks or evaluations. Applications have also been under focus, especially where there are existing sectoral governance arrangements, say for health or dual-use technologies. These efforts can be anchored in specific governance arrangements, such as the EU AI Act or the U.S. Executive Order and they can be associated with incentives for participation and compliance. Figure 1 presents a simplified schema for considering the emerging AI governance landscape, which the Advisory Body will develop further in the next phase of its work.
...
40. Existing AI governance efforts have yielded similarities in language, such as the importance of fairness, accountability, and transparency. Yet there is no global alignment on implementation, either in terms of interoperability between jurisdictions or
in terms of incentives for compliance within jurisdictions. Some favour binding rules while others prefer non-binding nudges. Trade-offs are debated, such as how to balance access and safety — or whether the focus should be on present day or potential future harms. Different models may also require different emphasis in governance. A lack of common standards and benchmarks among national and multinational risk management frameworks, as well as multiple definitions of AI used in such frameworks, have complicated the governance landscape for AI, notwithstanding the need for space for different regulatory approaches to co-exist reflecting the world’s social and cultural diversity.
41. Meanwhile, technical advances in AI and its use continue accelerating, expanding the gap in understanding and capacity between technology companies developing AI, companies and other organizations using AI across various sectors and societal spaces, and those who would regulate its development, deployment, and use.
42. The result is that, in many jurisdictions, AI governance can amount to self-policing by the developers, deployers, and users of AI systems themselves. Even assuming the good faith of these organizations and individuals, such a situation does not encourage a long-term view of risk or the inclusion of diverse stakeholders, especially those from the Global South. This must change.
Toward principles and functions of international AI governance
43. The Advisory Body is tasked with presenting options on the international governance of AI. We reviewed, among others, the functions performed by existing institutions of governance with a technological dimension, including FATF, FSB, IAEA, ICANN, ICAO, ILO, IMO, IPCC, ITU, SWIFT and UNOOSA.² These organizations offer inspiration and examples of global governance and coordination.
44. The range of stakeholders and potential applications presented by AI and their uses in a wide variety of contexts makes unsuitable an exact replication of any existing governance model. Nonetheless, lessons can be learned from examples of entities that have sought to: (a) build scientific consensus on risks, impact, and policy (IPCC); (b) establish global standards (ICAO, ITU, IMO), iterate and adapt them; (c) provide capacity building, mutual assurance and monitoring (IAEA, ICAO); (d) network and pool research resources (CERN); (e) engage diverse stakeholders (ILO, ICANN); (f) facilitate commercial flows and address systemic risks (SWIFT, FATF, FSB).
45. Rather than proposing any single model for AI governance at this stage, the preliminary recommendations offered in this interim report focus on the principles that should guide the formation of new global governance institutions for AI and the broad functions such institutions would need to perform. The subfunctions listed in Table 1 below are informed by a survey of existing research on AI governance as well as a gap analysis of nine current AI governance initiatives, namely, China’s interim measures for the management of AI services, the Council of Europe’s draft Convention on AI, the EU AI Act, the G7 Hiroshima Process, the Global Partnership on AI, the OECD AI Principles, the Partnership on AI and the Foundation Model Forum, the UK AI Safety Summit, and the U.S. Executive Order 14110.
² See the list of abbreviations in the annex.
The preliminary recommendations are as follows.
A. Guiding Principles
Guiding Principle 1. AI should be governed inclusively, by and for the benefit of all
46. Despite AI’s potential, many of the world’s peoples are not yet in a position to access and use it in a manner that meaningfully improves their lives. Fully harnessing the potential of AI and enabling widespread participation in its development, deployment, and use is critical to driving sustainable solutions to global challenges. All citizens, including those in the Global South, should be able to create their own opportunities, harness them, and achieve prosperity through AI. All countries, big or small, must be able to participate in AI governance.
47. Affirmative and corrective steps, including access and capacity building, will be needed to address the historical and structural exclusion of certain communities, for instance women and gender diverse actors, from the development, deployment, use, and governance of technology, and to turn digital divides into inclusive digital opportunities.
Guiding Principle 2. AI must be governed in the public interest
48. The development of AI systems is largely concentrated in the hands of technology companies. The refinement, deployment, and use of AI, however, will involve many actors beyond the original developers (be they companies, small AI labs, other organizations, or countries), including deployers and users ranging from individuals to companies, organizations, and governments, all of whom will bring a wide variety of incentives to their approaches.
49. As shown by the experience with social media, AI products and services can scale rapidly across borders and categories of users. For this reason, as well as wider considerations of opportunities and risks, AI must be governed in the broader public interest. “Do no harm” is necessary, but not sufficient. A broader framing is needed for the accountability of companies and other organizations that build, deploy, and control AI, as well as those that use it across multiple sectors of the economy and society, throughout the AI life cycle. This cannot rely on self-regulation alone: binding norms enforced consistently by member states are needed to ensure that public interests, rather than private interests, prevail.
50. AI will be used by people and organizations, across multiple sectors, each with different use-cases and complexities and risks. Governance efforts must bear in mind public policy goals related to diversity, equity, inclusion, sustainability, societal and individual well-being, competitive markets, and healthy innovation ecosystems. They must also integrate implications of missed uses for economic and social development. Governance in this context should expand representation of diverse stakeholders, as well as offer greater clarity in delineating responsibilities between public and private sector actors. Governing in the public interest also implies investments in public technology, infrastructure, and the capacity of public officials.
Guiding Principle 3. AI governance should be built in step with data governance and the promotion of data commons
51. Data is critical for many major AI systems. Its governance and management in the public interest cannot be divorced from other components of AI governance (Figure 1). Regulatory frameworks and techno-legal arrangements that protect the privacy and security of personal data, consistent with applicable local or regional laws, while actively facilitating the use of such data, will be a critical complement to AI governance arrangements. The development of public data commons should also be encouraged, with particular attention to public data that is critical for helping solve societal challenges, including climate change, public health, economic development, capacity building and crisis response, for use by multiple stakeholders.
Guiding Principle 4. AI governance must be universal, networked and rooted in adaptive multi-stakeholder collaboration
52. Any AI governance effort should prioritize universal buy-in by different member states and stakeholders. This is in addition to inclusive participation, in particular lowering entry barriers for previously excluded communities in the Global South (Guiding Principle 1). This is key for emerging AI regulations to be harmonized in ways that avoid accountability gaps.
53. Effective governance should leverage existing institutions, which will have to review their current functions in light of the impact of AI. But this is not enough. New horizontal coordination and supervisory functions are required, and they should be entrusted to a new organizational structure. New and existing institutions could form nodes in a network of governance structures. There is clear momentum across diverse states for this to happen, as well as growing recognition in the private sector of the need for a well-coordinated and interoperable governance framework. Civil society concerns regarding the impact of AI on human rights point in a similar direction.
54. Such an AI governance framework can draw on best practices and expertise from around the world. It must also be informed by understanding of different cultural ideologies driving AI development, deployment, and use. Innovative structures within this governance framework would be needed to engage the private sector, academia, and civil society alongside governments. Inspiration may be drawn from past efforts to engage the private sector in pursuit of public goods, including the ILO’s tripartite structure and the UN Global Compact.
Guiding Principle 5. AI governance should be anchored in the UN Charter, International Human Rights Law, and other agreed international commitments such as the Sustainable Development Goals
55. The UN has a unique normative and institutional role to play; aligning AI governance with foundational UN values — notably the UN Charter and its commitment to peace and security, human rights, and sustainable development — offers a robust foundation and compass. The UN is positioned to consider AI’s impact on a variety of global economic, social, health, security, and cultural conditions, all grounded in the need to maintain universal respect for, and enforcement of, human rights and the rule of law. Several UN agencies have already done important work on the impact of AI on fields from education to arms control.
56. The Global Digital Compact and the Roadmap for Digital Cooperation are examples of multi-stakeholder deliberations towards a global governance framework of technologies including AI. Strong involvement of UN member states, empowering UN agencies and involving diverse stakeholders, will be vital to empowering and resourcing a global AI governance response.
B. Institutional Functions
57. We consider that to properly govern AI for humanity, an international governance regime for AI should carry out at least the following functions. These could be carried out by individual institutions or a network of institutions.
...
58. Figure 2 summarizes our recommended institutional functions for international AI governance. At the global level, international organizations, governments, and private sector would bear primary responsibility for these functions. Civil society, including academia and independent scientists, would play key roles in building evidence for policy, assessing impact, and holding key actors to account during implementation. Each set of functions would have different loci of responsibility at different layers of governance — private sector, government, and international organizations. We will further develop the concept of shared and differentiated responsibilities for multiple stakeholders at different layers of the governance stack in the next phase of our work.
Institutional Function 1: Assess regularly the future directions and implications of AI
59. There is, presently, no authoritative institutionalized function for independent, inclusive, multidisciplinary assessments on the future trajectory and implications of AI. A consensus on the direction and pace of AI technologies — and associated risks and opportunities — could be a resource for policymakers to draw on when developing domestic AI programmes to encourage innovation and manage risks.
60. In a manner similar to the IPCC, a specialized AI knowledge and research function would involve an independent, expert-led process that unlocks scientific, evidence-based insights, say every six months, to inform policymakers about the future trajectory of AI development, deployment, and use (subfunctions 1-3 in Table 1). This should include arrangements with companies on access to information for the purposes of research and horizon-scanning. This function would help the public better understand AI, and drive consensus in the international community about the speed and impact of AI’s evolution. It would produce regular shared risk assessments, as well as establishing standards to measure the environmental and other impacts of AI. This Advisory Body is in a way the start of such an expert-led process, which would need to be properly resourced and institutionalized.
61. The extent of AI’s negative externalities is not yet fully clear. The role of AI in disintermediating aspects of life that are core to human development could fundamentally change how individuals and communities function. As AI capabilities further advance, there is the potential for profound, structural adjustments to the way we live, work, and interact. A global analytical observatory function could coordinate research efforts on critical social impacts of AI, including its effects on labour, education, public health, peace and security, and geopolitical stability. Drawing on expertise and sharing knowledge from around the world, such a function could facilitate the emergence of best practices and common responses.
Institutional Function 2: Reinforce interoperability of governance efforts emerging around the world and their grounding in international norms through a Global AI Governance Framework endorsed in a universal setting (UN)
62. AI governance arrangements should be interoperable across jurisdictions and grounded in international norms, such as the Universal Declaration of Human Rights (Principle 4 above). They should leverage existing UN organizations and fora, such as UNESCO and the ITU, to reinforce the interoperability of regulatory measures across jurisdictions. AI governance efforts could also be coordinated through a body that harmonizes policies, builds common understandings, surfaces best practices, supports implementation and promotes peer-to-peer learning (subfunctions 7-10 in Table 1). A Global AI Governance Framework could support policymaking and guide implementation to avoid AI divides and governance gaps across public and private sectors, regions, and countries, as well as clarifying the principles and norms under which various organizations should operate. As part of this framework, special attention should be paid to capacity-building in both the private and public sectors, as well as to the dissemination of knowledge and awareness across the world. Best practices, such as human rights impact assessments by private and public sector developers of AI systems, could be spread through such a framework, which may need an international agreement.
Institutional Function 3: Develop and harmonize standards, safety, and risk management frameworks
63. Several important initiatives to develop technical and normative standards, safety, and risk management frameworks for AI are underway, but there is a lack of global harmonization and alignment (subfunction 11 in Table 1). Because of its global membership, the UN can play a critical role in bringing states together, developing common socio-technical standards, and ensuring legal and technical interoperability.
64. As an example, emerging AI safety institutes could be networked to reduce the risk of competing frameworks, fragmentation of standardization practices across jurisdictions, and a global patchwork with too many gaps. Care should, however, be taken not to overemphasise technical interoperability without parallel movement on other functions and norms. While there is greater awareness of socio-technical standards, more research, active involvement of civil society and transdisciplinary cooperation is needed to develop such standards.
65. Further, new global standards and indicators to measure and track the environmental impact of AI as well as its energy and natural resources consumption (i.e. electricity and water) could be defined to guide AI development and help achieve SDGs related to the protection of the environment.
Institutional Function 4: Facilitate development, deployment, and use of AI for economic and societal benefit through international multi-stakeholder cooperation
66. In addition to standards for preventing harm and misuse, developers and users, especially in the Global South, need critical enablers: standards for data labelling and testing; data protection and exchange protocols that enable testing and deployment across borders for startups; and legal liability, dispute resolution, business development, and other supporting mechanisms. Existing legal, financial, and technical arrangements need to evolve to anticipate the complex adaptive AI systems of the future, and this will require taking into account lessons learnt from forums such as FATF, SWIFT and equivalent mechanisms. In addition, for most countries and regions, capacity development in the public sector is urgently required to facilitate responsible and beneficial use of AI, as well as participation in international multi-stakeholder cooperative frameworks to develop enablers for AI (subfunctions 4, 5 and 11 in Table 1).
Institutional Function 5: Promote international collaboration on talent development, access to compute infrastructure, building of diverse high-quality datasets, responsible sharing of open-source models, and AI-enabled public goods for the SDGs
67. A new mechanism (or mechanisms) is required to facilitate access to data, compute, and talent in order to develop, deploy, and use AI systems for the SDGs through upgraded local value chains, giving independent academic researchers, social entrepreneurs, and civil society access to the infrastructure and datasets needed to build their own models and to conduct research and evaluations. This may require networked resources and efforts to build common datasets and data commons for use in the public interest, to responsibly share open-source models and computational resources, and to scale education and training.
68. Pooling expert knowledge and resources, analogous to CERN, EMBL or ITER, as well as the technology diffusion functions of the IAEA, could provide a much-needed boost to the SDGs (subfunction 6 in Table 1). Creating incentives for private sector actors to share and make available tools for research and development can also complement such functions. Experts from the Global South are often invisible at global conferences on AI. This needs to change.
69. Opening access to data and compute should also be accompanied by capacity-building, in particular in the Global South. To facilitate local creation, adoption, and context-specific tuning of models, it would be important to track positive uses of AI and to incentivize and assess AI-enabled public goods. Private sector engagement would be crucial in leveraging AI for the SDGs. Analogous to commitments made by businesses under the Global Compact, this could include public promises by technology and other companies to develop, deploy, and use AI for the greater good. In the larger context of the Global Digital Compact, it could also include reporting on the ways in which AI is supporting the Sustainable Development Goals.
Institutional Function 6: Monitor risks, report incidents, coordinate emergency response
70. The borderless nature of AI tools, which can proliferate across the globe at the stroke of a key, poses new challenges to international security and global stability. AI models could lower the barriers to access to weapons of mass destruction. AI-enabled cyber tools increase the risk of attacks on critical infrastructure, and dual-use AI can be used to power lethal autonomous weapons that could pose a risk to international humanitarian law and other norms. Bots with increasingly human characteristics can rapidly disseminate harmful information in a manner that can cause significant damage to markets and public institutions. The possibility of rogue AI escaping control and posing still larger risks cannot be ruled out. Given these challenges, capabilities must be created at a global level to monitor, report, and rapidly respond to systemic vulnerabilities and disruptions to international stability (subfunctions 13 and 14 in Table 1).
71. For example, a techno-prudential model, akin to the macro-prudential frameworks used to increase resilience in central banking, and bringing together capabilities developed at the national level, may help to similarly insulate against AI risks to global stability. Such a model must be grounded in human rights principles.
72. Reporting frameworks can be inspired by existing practices of the IAEA for mutual reassurance on nuclear safety and nuclear security, as well as the WHO on disease surveillance.
Institutional Function 7: Compliance and accountability based on norms
73. We cannot rule out that legally binding norms and enforcement would be required at the global level. A regional effort for an AI treaty is already underway, and the issue of lethal autonomous weapons is under consideration in the framework of a treaty on conventional weapons. Non-binding norms could also play an important role, alone or in combination with binding norms. The UN cannot and should not seek to be the sole arbiter of AI governance. However, in certain fields, such as challenges to international security, it has unique legitimacy to elaborate norms (subfunction 12 in Table 1). It can also help ensure that there are no accountability gaps, for example by encouraging states to report, analogous to reporting on SDG targets and the Universal Periodic Review, which facilitates monitoring, assessing, and reporting on human rights practices (subfunction 15 in Table 1). This would need to be done in a timely and accurate way. Inspired by existing institutions such as the WTO, dispute resolution can also be facilitated through global forums.
74. At the same time, the legitimacy of any global governance institution depends on accountability of that institution itself. International governance efforts must demonstrate resolute transparency in objectives and processes and make all efforts to gain the trust of citizen stakeholders, including by preventing conflicts of interest.
...
Conclusion
75. To the extent that AI impacts our lives — how we work and socialize, how we are educated and governed, how we interact with one another daily — it raises questions more fundamental than how to govern it. Such questions of what it means to be human in a fully digital and networked world go well beyond the scope of this Advisory Body. Yet they are implicated in the decisions we make today. For governance is not an end but a means, a set of mechanisms intended to exercise control or direction of something that has the potential for good or ill.
76. We aspire to be both comprehensive in our assessment of the impact of AI on people’s lives and targeted in identifying the unique difference the UN can make. We hope it is apparent that we see real benefits of AI; equally, we are clear-eyed about its risks.
77. The risks of inaction are also clear. We believe that global AI governance is essential to reap the significant opportunities and navigate the risks that this technology presents for every state, community, and individual today. And for the generations to come.
78. To be effective, the international governance of AI must be guided by principles and implemented through clear functions. These global functions must add value, fill identified gaps, and enable interoperable action at regional, national, industry, and community levels. They must be performed in concert across international institutions, national and regional frameworks as well as the private sector. Our preliminary recommendations set out what we consider to be core principles and functions for any global AI governance framework.
79. We have taken a form follows function approach and do not, at this stage, propose any single model for AI governance. Ultimately, however, AI governance must deliver tangible benefits and safeguards to people and societies. An effective global governance framework must bridge the gap between principles and practical impact. In the next phase of our work, we will explore options for institutional forms for global AI governance, building on the perspectives of diverse stakeholders worldwide.
Next Steps
80. Rather than proposing any single model for AI governance at this stage, the foregoing preliminary recommendations focus on the principles and functions to which any such regime must aspire.
81. Over the coming months we will consult — individually and in groups — with diverse stakeholders around the world. This includes participation at events tasked with discussing the issues in this report as well as engagement with governments, the private sector, civil society, and research and technical communities. We will also pursue our research, including on risk assessment methodologies and governance interoperability. Case studies will be developed to help think about landing issues identified in the report in specific contexts. We also intend to dive deep into a few areas, including Open- Source, AI and the financial sector, standard setting, intellectual property, human rights, and the future of work by leveraging existing efforts and institutions. ...