Large Language Models and International Security


Large language models (LLMs) are AI systems trained to process and generate natural language. LLMs have seen an immense increase in popularity since 2022, owing to rapid gains in performance and ease of use across domains. LLMs operate as foundation models: they have broad capabilities, are scalable, and can be adapted into task-specific models.

The implications of LLMs for international security deserve timely and closer scrutiny. While some concerns have already been raised about LLMs' potential to be misused for disinformation campaigns at scale, the technology's areas of use are much broader, as are the possible risks. LLM-based capabilities could be leveraged in myriad ways, including in intelligence analysis and for integration into decision support systems to augment military planning, but also for potentially nefarious purposes, with serious concerns raised in the domains of cyber security and the proliferation of biological weapons.

The event provided a broad overview of the technology's relevance to international security and addressed emerging areas of risk.

SPEAKERS

  • Rita Sevastjanova, Researcher, Swiss Federal Institute of Technology (ETH Zurich)
  • Dan Tadross, Head of Federal Delivery, Scale AI
  • Richard J. Carter, Senior Visiting Fellow and Strategy Advisor, Center for Emerging Technology and Security (CETaS), The Alan Turing Institute
  • Jessica Ji, Research Analyst, Center for Security and Emerging Technology (CSET)
  • Haonan Li, Fellow, Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)

WHEN & WHERE

Tuesday, 16 April 2024 | 13:30-14:45 CET | Online (via Zoom)

PARTICIPANTS

This event brought together delegations and AI experts, as well as members of the multistakeholder community, including industry, civil society, and intergovernmental organizations.