The proliferation of artificial intelligence (AI) in the context of international peace and security has emerged as a key area of concern in ongoing international and multilateral discussions. To date, proliferation risks have been framed mainly in connection with autonomous weapons systems and the ease with which such systems can be accessed, diverted or misused by malicious non-State actors. The broader scope of AI proliferation, and its associated risks, however, remains relatively underexplored.
This side event to the UN General Assembly First Committee aims to address some of the conceptual and practical aspects of AI proliferation. It will discuss what proliferation effectively means in the context of AI and how its risks can be evaluated. The event will also provide concrete technical examples to illustrate the inherent complexities of proliferation for a general-purpose technology like AI. Two brief case studies will be presented to highlight specific avenues for proliferation, as well as its challenges and existing limitations: one focusing on autonomous weapons, with particular attention to the repurposing and retrofitting of commercial unmanned systems for military purposes; the other discussing the proliferation risks of large language models.
This side event is organized as part of an ongoing UNIDIR project that aims to map the proliferation risks of AI and to consider counter-proliferation measures and policies that the international community can implement to respond effectively to these challenges.
Agenda
Introduction
Unpacking AI proliferation
Case studies
- Autonomous weapons
- Large Language Models
Conclusions and ways forward
RSVP
Registration is mandatory for participation. Kindly register by 24 October.
Details to access the event online will be emailed to registered participants one day prior to the event.
Further information
For further information or questions, please contact sectec-unidir@un.org.