UNIDIR is launching a new research project on the proliferation of artificial intelligence in the context of international peace and security. The project, developed under UNIDIR’s Security and Technology Programme, will unfold in two phases. First, it will map out the technology’s main pathways to proliferation: How could AI capabilities be accessed by malicious actors? Then, it will seek to identify appropriate policy responses: How could proliferation risks be mitigated? This will involve evaluating the effectiveness of existing non-proliferation and arms control frameworks, and formulating concrete policy recommendations to enhance counter-proliferation efforts.
The proliferation of artificial intelligence (AI) carries implications for international peace and security. The risks, in particular access to the technology by malicious non-State actors, have been raised in numerous forums, including multilateral meetings and UN documents. For example, in the ongoing discussions of the Group of Governmental Experts on Lethal Autonomous Weapons Systems, the Guiding Principles adopted in 2019 explicitly refer to the risk of proliferation of these weapons and their acquisition by terrorist groups. Further, the resolution adopted by the General Assembly in December 2024 refers to concerns about, and the possible impact of, the proliferation of AI to non-State actors.
The risks of AI proliferation remain, however, relatively underexplored, and are often simplified to mean the diversion of autonomous weapons to non-State groups. This research project aims to fill a significant gap in current policy debates by providing an in-depth analysis of what the proliferation of AI actually entails, how AI technologies may proliferate, and how policy responses may be devised to counter proliferation.
Mapping AI proliferation risks
The first phase of the project (2025-2026) will map out the main pathways for the proliferation and diversion of AI, including how these technologies can be accessed, developed, repurposed and misused by non-State actors.
AI relies on a vast and decentralized ecosystem of software, hardware infrastructure and talent, with numerous entry points for vulnerabilities that can be exploited for proliferation, diversion, misuse or weaponization. A breakdown of the AI value chain can help identify possible pathways for proliferation.
A broad analysis alone, however, yields incomplete conclusions for a general-purpose technology whose different use cases entail specific or unique proliferation risks. Because AI can be embedded and used across a wide range of domains, including physical (robotic) systems and digital technologies, different challenges and enabling factors for proliferation will surface across domains of use. For example, the elements enabling the proliferation of autonomous systems (e.g. drones) are not necessarily the same as those enabling the proliferation of large language models (LLMs). Different considerations of compute, data, talent, scalability and cost come into play, and these may at times carry different implications for non-proliferation governance.
The research will highlight these challenges through two case studies. The first will focus on the proliferation of autonomous weapons, in particular the retrofitting or repurposing of commercial unmanned systems for combat or other military functions. The second will focus on malicious uses of LLMs, exploring the proliferation of open-source models and their exploitation for harmful purposes.
Towards effective policy responses
The next phase of the project (2026) will build on the technical research and case studies from Phase I, delving deeper into existing and possible future policy frameworks to counter proliferation.
The research will evaluate the effectiveness of current non-proliferation and arms control frameworks at the national and international levels, including export control policies and measures agreed among States (e.g. information exchange arrangements). It will examine how adaptable existing mechanisms are to a rapidly evolving technological and threat landscape. Close attention will be given to possible gaps in compliance and implementation, including gaps in law enforcement at the national level, as well as jurisdictional challenges.
Building on this analysis, UNIDIR will formulate concrete, implementable recommendations to support counter-proliferation efforts for AI, tailored for national policymakers, multilateral institutions and industry stakeholders.
Safeguarding peace in the age of AI
Through this two-phase initiative, UNIDIR seeks not only to enhance understanding of how AI may proliferate, but also to strengthen the international community’s capacity to respond effectively. By combining rigorous technical research with actionable policy guidance, the project embodies UNIDIR’s mandate to deliver independent, forward-looking analysis at the intersection of security and technology.