<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Artificial intelligence → UNIDIR</title>
	<atom:link href="https://unidir.org/focus-area/artificial-intelligence/feed/" rel="self" type="application/rss+xml" />
	<link>https://unidir.org</link>
	<description>Building a more secure world.</description>
	<lastBuildDate>Wed, 22 Apr 2026 08:44:03 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://unidir.org/wp-content/uploads/2023/10/android-chrome-72x72-1.png</url>
	<title>Artificial intelligence → UNIDIR</title>
	<link>https://unidir.org</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>The Global Prism of Military AI Governance: Reflections from the 2025 Regional Consultations on Responsible AI in the Military Domain</title>
		<link>https://unidir.org/publication/the-global-prism-of-military-ai-governance-reflections-from-the-2025-regional-consultations-on-responsible-ai-in-the-military-domain/</link>
		
		<dc:creator><![CDATA[Maria Belen Lopez Conte]]></dc:creator>
		<pubDate>Mon, 02 Feb 2026 18:21:51 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=25428</guid>

					<description><![CDATA[<p>The Governments of Spain, the Republic of Korea, and the Kingdom of the Netherlands &#8211; in partnership with France, Kenya and Pakistan &#8211; conducted a series of five regional consultations on artificial intelligence (AI) in the military domain. These consultations served as key preparatory steps leading up to the third Summit on Responsible Artificial Intelligence<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/the-global-prism-of-military-ai-governance-reflections-from-the-2025-regional-consultations-on-responsible-ai-in-the-military-domain/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/the-global-prism-of-military-ai-governance-reflections-from-the-2025-regional-consultations-on-responsible-ai-in-the-military-domain/">The Global Prism of Military AI Governance: Reflections from the 2025 Regional Consultations on Responsible AI in the Military Domain</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>The Governments of Spain, the Republic of Korea, and the Kingdom of the Netherlands &#8211; in partnership with France, Kenya and Pakistan &#8211; conducted a series of five regional consultations on artificial intelligence (AI) in the military domain. These consultations served as key preparatory steps leading up to the third Summit on Responsible Artificial Intelligence in the Military Domain (REAIM), held in A Coruña, Spain, on 4–5 February 2026.</p>



<p>Facilitated by UNIDIR, the consultations sought to build on the 2024 REAIM Regional Consultations and the 2023 and 2024 Summits, as well as to capture how national views and policies on responsible AI in the military domain, regional priorities and multi-stakeholder engagement had evolved over the year.</p>



<p>This report seeks to capture the main takeaways from the five regional consultations, summarizing participants’ views and a selection of UNIDIR’s observations. These observations are centred on a number of common threads that ran through all of the regional consultations (although minor adjustments were made to each regional event to reflect its local context and realities):</p>



<ul class="wp-block-list">
<li>National policies and practices</li>



<li>Looking back (post-REAIM 2023 and 2024 reflections)</li>



<li>Looking ahead (reflections for the 2026 REAIM Summit)</li>
</ul>



<p>In addition, this report provides an overview of key takeaways from the discussions held with the multi-stakeholder community. In acknowledgment of the importance of multi-stakeholder engagement, one key objective of the consultations was to take stock of the views of regional representatives from industry, civil society, academia and research institutes, as well as regional and international organizations.</p>



<p>This report also looks into the operationalization of responsible AI principles across the life cycle of AI-enabled military capabilities through the lenses of assurances, incident response, crisis management and risk reduction.</p>



<p>The report then lays out States&#8217; reflections on the REAIM journey three years on from the inaugural summit. It concludes by identifying substantive areas of priority that States wish to see further pursued, both within REAIM and beyond, before presenting a series of concrete recommendations for the road ahead.</p>






<p>Citation: <em>Yasmin Afina, The Global Prism of Military AI Governance: Reflections from the 2025 Regional Consultations on Responsible AI in the Military Domain (Geneva: UNIDIR, 2026)</em>.</p>



<p>The post <a href="https://unidir.org/publication/the-global-prism-of-military-ai-governance-reflections-from-the-2025-regional-consultations-on-responsible-ai-in-the-military-domain/">The Global Prism of Military AI Governance: Reflections from the 2025 Regional Consultations on Responsible AI in the Military Domain</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Factsheet: Artificial Intelligence and the Women, Peace and Security Agenda</title>
		<link>https://unidir.org/publication/factsheet-artificial-intelligence-and-the-women-peace-and-security-agenda/</link>
		
		<dc:creator><![CDATA[Meyha Sharma]]></dc:creator>
		<pubDate>Tue, 02 Sep 2025 15:14:17 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=23480</guid>

					<description><![CDATA[<p>This factsheet is intended to provide a snapshot of the link between artificial intelligence (AI) and the Women, Peace and Security (WPS) Agenda. This year, with the agenda turning 25, the factsheet explores how AI represents both an opportunity and an obstacle to its realization. It also presents an analysis of the current state<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/factsheet-artificial-intelligence-and-the-women-peace-and-security-agenda/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/factsheet-artificial-intelligence-and-the-women-peace-and-security-agenda/">Factsheet: Artificial Intelligence and the Women, Peace and Security Agenda</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>This factsheet is intended to provide a snapshot of the link between artificial intelligence (AI) and the Women, Peace and Security (WPS) Agenda.</p>



<p>This year, with the agenda turning 25, the factsheet explores how AI represents both an opportunity and an obstacle to its realization. It also presents an analysis of the current state of the integration of emerging technologies like AI within WPS national and regional action plans.</p>



<p>The factsheet concludes by proposing further areas of action to better include AI in the WPS Agenda, making it fit for purpose amidst changing realities around peace and conflict.</p>






<p>Citation: <em>Shimona Mohan (2025) “Factsheet: Artificial Intelligence and the Women, Peace and Security Agenda”, UNIDIR, Geneva.</em></p>



<p>The post <a href="https://unidir.org/publication/factsheet-artificial-intelligence-and-the-women-peace-and-security-agenda/">Factsheet: Artificial Intelligence and the Women, Peace and Security Agenda</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Artificial Intelligence in the Military Domain and Its Implications for International Peace and Security: An Evidence-Based Road Map for Future Policy Action</title>
		<link>https://unidir.org/publication/artificial-intelligence-in-the-military-domain-and-its-implications-for-international-peace-and-security-an-evidence-based-road-map-for-future-policy-action/</link>
		
		<dc:creator><![CDATA[Meyha Sharma]]></dc:creator>
		<pubDate>Thu, 03 Jul 2025 11:35:07 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=22453</guid>

					<description><![CDATA[<p>Artificial intelligence (AI) is rapidly transforming the military domain, with profound implications for international peace and security. Until recently, multilateral discussions on military uses of AI were limited to the question of how this technology relates to lethal autonomous weapon systems (LAWS) – an important yet narrow field of application. In late 2024, however, the<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/artificial-intelligence-in-the-military-domain-and-its-implications-for-international-peace-and-security-an-evidence-based-road-map-for-future-policy-action/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/artificial-intelligence-in-the-military-domain-and-its-implications-for-international-peace-and-security-an-evidence-based-road-map-for-future-policy-action/">Artificial Intelligence in the Military Domain and Its Implications for International Peace and Security: An Evidence-Based Road Map for Future Policy Action</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Artificial intelligence (AI) is rapidly transforming the military domain, with profound implications for international peace and security. Until recently, multilateral discussions on military uses of AI were limited to the question of how this technology relates to lethal autonomous weapon systems (LAWS) – an important yet narrow field of application. In late 2024, however, the United Nations General Assembly <a href="https://unidir.org/wp-content/uploads/2025/03/UN_General_Assembly_A_RES_79_239-EN.pdf" title="">adopted a landmark resolution</a> that recognized the wide range of military applications of AI and called for the examination of this technology in the military domain beyond weapon systems. This resolution built on the growing awareness of AI in the military domain and the increase in its policy traction over the past three years.</p>



<p>This has been prompted by initiatives outside the United Nations, such as the Responsible AI in the Military Domain (REAIM) summits and the <a href="https://www.state.gov/bureau-of-arms-control-deterrence-and-stability/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy" title="">Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy</a>. These processes were fundamental in increasing awareness and served as incubators for policy action on the international stage. Against this backdrop, UNIDIR has for many years contributed significantly to initiating and shaping national, regional and international discussions through its research, capacity-building and convening power.</p>



<p>The push for responsible AI in the military domain has opened new channels for dialogue among states. The shared recognition of AI’s disruptive potential, both positive and negative, has led to international discussions specifically about ensuring its safe and controlled development, deployment and use.</p>



<p>The international community now has an opportunity to shape the future of international peace and security in the era of AI, putting principles of responsible AI at the core. Such engagement can build trust and mutual understanding, future-proofing the international peace and security architecture.</p>



<p>To further advance multilateral discussions on this new and rapidly evolving issue, it is crucial to clarify what “the military domain” means and entails; to survey key applications of AI in military settings in order to understand the associated opportunities; and to analyse the challenges and consider recommendations for policy development at all levels. This report addresses each of these aspects in turn, drawing on UNIDIR’s research and analysis on these topics over the years. It then proposes a 10-step road map towards effective national and international governance of AI in the military domain.</p>






<p>Citation: <em>UNIDIR&#8217;s Security and Technology Programme. &#8220;Artificial Intelligence in the Military Domain and Its Implications for International Peace and Security: An Evidence-Based Road Map for Future Policy Action&#8221;. Geneva, Switzerland: UNIDIR, 2025.</em></p><p>The post <a href="https://unidir.org/publication/artificial-intelligence-in-the-military-domain-and-its-implications-for-international-peace-and-security-an-evidence-based-road-map-for-future-policy-action/">Artificial Intelligence in the Military Domain and Its Implications for International Peace and Security: An Evidence-Based Road Map for Future Policy Action</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Regional Perspectives on the Application of International Humanitarian Law to Lethal Autonomous Weapon Systems</title>
		<link>https://unidir.org/publication/regional-perspectives-on-the-application-of-international-humanitarian-law-to-lethal-autonomous-weapon-systems/</link>
		
		<dc:creator><![CDATA[Meyha Sharma]]></dc:creator>
		<pubDate>Mon, 07 Apr 2025 13:02:11 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=21534</guid>

					<description><![CDATA[<p>States’ decade-long deliberations on emerging technologies in the area of lethal autonomous weapon systems (LAWS) have consistently discussed the application of international humanitarian law (IHL). Yet, as the international community grapples with this inherently technical and complex issue, much uncertainty and unclarity remain as to how IHL specifically applies in relation to LAWS. Against this<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/regional-perspectives-on-the-application-of-international-humanitarian-law-to-lethal-autonomous-weapon-systems/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/regional-perspectives-on-the-application-of-international-humanitarian-law-to-lethal-autonomous-weapon-systems/">Regional Perspectives on the Application of International Humanitarian Law to Lethal Autonomous Weapon Systems</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>States’ decade-long deliberations on emerging technologies in the area of lethal autonomous weapon systems (LAWS) have consistently discussed the application of international humanitarian law (IHL). Yet, as the international community grapples with this inherently technical and complex issue, much uncertainty and unclarity remain as to how IHL specifically applies in relation to LAWS.</p>



<p>Against this backdrop, UNIDIR conducted the project “Towards a Common Understanding of the Application of IHL to Emerging Technologies in the Area of LAWS”. Building on the momentum on this topic, UNIDIR’s primary objective was to take stock of the existing state of affairs and to capture existing views, positions and approaches – across sectors and across regions – to the application of IHL to LAWS. To this end, the Institute has drafted a separate background paper that summarizes publicly available views expressed by states, scholars and other experts participating in multilateral discussions on the applicability and interpretation of IHL with respect to the development, deployment and use of LAWS.</p>



<p>To complement this research, UNIDIR conducted a series of bilateral and regional consultations between November 2024 and March 2025. Together with regional partners, consultations were held in The Hague, Brasília, Pretoria and Singapore. They were designed to provide a platform for open discussion, knowledge and information sharing, and the deepening of regional understandings of the intersection between IHL and LAWS. The consultations were held in person under the Chatham House Rule; participants included government-affiliated experts in law, policy and defence from various ministries, national agencies and authorities, as well as a select number of scholars specializing in IHL and policy.</p>






<p>Citation: <em>Yasmin Afina, &#8220;Regional Perspectives on the Application of International Humanitarian Law to Lethal Autonomous Weapons Systems&#8221;, UNIDIR, Geneva, 2025.</em></p><p>The post <a href="https://unidir.org/publication/regional-perspectives-on-the-application-of-international-humanitarian-law-to-lethal-autonomous-weapon-systems/">Regional Perspectives on the Application of International Humanitarian Law to Lethal Autonomous Weapon Systems</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI in the Military Domain: A briefing note for States</title>
		<link>https://unidir.org/publication/ai-military-domain-briefing-note-states/</link>
		
		<dc:creator><![CDATA[Mireia Mas Vivancos]]></dc:creator>
		<pubDate>Mon, 10 Mar 2025 15:15:41 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=21128</guid>

					<description><![CDATA[<p>On 24 December 2024, the United Nations (UN) General Assembly adopted Resolution A/RES/79/239 on Artificial intelligence in the military domain and its implications for international peace and security. The UN Secretary-General recently invited Member States, observer States, international and regional organizations, the International Committee of the Red Cross, civil society, industry and the scientific community<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/ai-military-domain-briefing-note-states/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/ai-military-domain-briefing-note-states/">AI in the Military Domain: A briefing note for States</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>On 24 December 2024, the United Nations (UN) General Assembly adopted Resolution A/RES/79/239 on <a href="https://unidir.org/wp-content/uploads/2025/03/UN_General_Assembly_A_RES_79_239-EN.pdf" title="">Artificial intelligence in the military domain and its implications for international peace and security</a>. The UN Secretary-General recently invited Member States, observer States, international and regional organizations, the International Committee of the Red Cross, civil society, industry and the scientific community to submit their views “on the opportunities and challenges posed to international peace and security by the application of artificial intelligence in the military domain, with specific focus on areas other than lethal autonomous weapons systems”.</p>



<p>This briefing note will contribute to a report to be submitted to the 18th session of the General Assembly and aims to support States in formulating their national views on this topic. It seeks to ensure that the resulting report is as comprehensive, diverse and geographically representative as possible. The brief includes contextual information on the topic of AI in the military domain, a set of considerations for States to refer to, and a list of suggested readings drawing on UNIDIR’s own research and selected external publications.</p>






<p>Citation: <em>Giacomo Persi Paoli and Yasmin Afina, &#8220;AI in the Military Domain: A briefing note for States&#8221;, UNIDIR, Geneva, 2025.</em></p><p>The post <a href="https://unidir.org/publication/ai-military-domain-briefing-note-states/">AI in the Military Domain: A briefing note for States</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Interpretation and Application of International Humanitarian Law in Relation to Lethal Autonomous Weapon Systems</title>
		<link>https://unidir.org/publication/the-interpretation-and-application-of-international-humanitarian-law-in-relation-to-lethal-autonomous-weapon-systems/</link>
		
		<dc:creator><![CDATA[Mireia Mas Vivancos]]></dc:creator>
		<pubDate>Thu, 06 Mar 2025 16:14:03 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=21082</guid>

					<description><![CDATA[<p>Much of the multilateral deliberations on lethal autonomous weapon systems (LAWS) over the last decade has been grounded in consideration of how international humanitarian law (IHL) is to be interpreted and applied to the development and use of these systems. The complexity of technologies in the area of LAWS challenges traditional understandings of IHL. Many<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/the-interpretation-and-application-of-international-humanitarian-law-in-relation-to-lethal-autonomous-weapon-systems/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/the-interpretation-and-application-of-international-humanitarian-law-in-relation-to-lethal-autonomous-weapon-systems/">The Interpretation and Application of International Humanitarian Law in Relation to Lethal Autonomous Weapon Systems</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Much of the multilateral deliberation on lethal autonomous weapon systems (LAWS) over the last decade has been grounded in consideration of how international humanitarian law (IHL) is to be interpreted and applied to the development and use of these systems. The complexity of technologies in the area of LAWS challenges traditional understandings of IHL. Many contributions have grappled with what limits IHL places on the development and use of LAWS and what kinds of practical measures or limits might be or are being used to ensure that LAWS are used in compliance with these rules. Core topics among the views of States, scholars and other experts are the circumstances under which LAWS are permitted to be used in attacks and the measures that are required to be taken to minimize civilian harm due to the use of LAWS in attacks. In addition, the discourse has addressed the measures that must be taken before and after any attack involving the use of LAWS to prevent violations of IHL and ensure accountability in the case of any such violations.</p>



<p>To support these ongoing discussions, UNIDIR implemented a series of activities as part of the project &#8220;Towards a Common Understanding of the Application of IHL to Emerging Technologies in the Area of LAWS&#8221;. This background paper summarizes publicly available views expressed by States, scholars and other experts participating in multilateral discussions on the applicability and interpretation of IHL with respect to the development and use of LAWS.</p>



<p>The background paper finds that, while all contributions to the discussion stem from the common starting point that IHL applies to the development and use of LAWS, divergences of both form and content persist in publicly available views. Despite the breadth of the discussions, a coherent comparison of views remains difficult to achieve and some IHL rules that govern the development and use of LAWS remain underexamined. Publicized views on measures that States can, do or should take with respect to the development and use of LAWS to avoid or minimize the effects of LAWS on civilian populations, civilians and civilian objects often do not specify whether such measures derive from an IHL principle or rule. The background paper underscores the considerations that arise in ensuring that LAWS are developed and used only in accordance with IHL and the challenges in achieving a level of certainty about the interpretation and application of IHL to these technologies.</p><p>The post <a href="https://unidir.org/publication/the-interpretation-and-application-of-international-humanitarian-law-in-relation-to-lethal-autonomous-weapon-systems/">The Interpretation and Application of International Humanitarian Law in Relation to Lethal Autonomous Weapon Systems</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Impact of Artificial Intelligence on Regional Security, Threat Perceptions and the Middle East WMD-Free Zone</title>
		<link>https://unidir.org/publication/the-impact-of-artificial-intelligence-on-regional-security-threat-perceptions-and-the-middle-east-wmd-free-zone/</link>
		
		<dc:creator><![CDATA[Mireia Mas Vivancos]]></dc:creator>
		<pubDate>Fri, 07 Feb 2025 09:50:15 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=20794</guid>

					<description><![CDATA[<p>With significant advancements in artificial intelligence (AI), many countries have been seeking to integrate these technologies into military and defence industries, including in the Middle East. In this publication, the author examines and analyzes the impact of AI on regional security, weapons of mass destruction (WMD), proliferation-related risks in the Middle East, and its potential<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/the-impact-of-artificial-intelligence-on-regional-security-threat-perceptions-and-the-middle-east-wmd-free-zone/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/the-impact-of-artificial-intelligence-on-regional-security-threat-perceptions-and-the-middle-east-wmd-free-zone/">The Impact of Artificial Intelligence on Regional Security, Threat Perceptions and the Middle East WMD-Free Zone</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>With significant advancements in artificial intelligence (AI), many countries have been seeking to integrate these technologies into military and defence industries, including in the Middle East. In this publication, the author examines the impact of AI on regional security, weapons of mass destruction (WMD), proliferation-related risks in the Middle East, and its potential influence on the initiative to establish a WMD-Free Zone in the region.</p>



<p>The author examines plausible scenarios, such as the emergence of an arms race in military applications of AI among regional states, which could either increase WMD proliferation risks in the region or, conversely, help reduce them. The paper also discusses key factors AI may introduce into the negotiations to establish a WMD-Free Zone, including urgency and the potential technical benefits of AI in arms control processes.</p>



<p>Citation: <em>Nasser bin Nasser, “The Impact of Artificial Intelligence on Regional Security, Threat Perceptions and the Middle East WMD-Free Zone”, UNIDIR, Geneva, 2025, <a href="https://www.doi.org/10.37559/MEWMDFZ/2025/ZoneAI">https://www.doi.org/10.37559/MEWMDFZ/2025/ZoneAI</a></em>.</p><p>The post <a href="https://unidir.org/publication/the-impact-of-artificial-intelligence-on-regional-security-threat-perceptions-and-the-middle-east-wmd-free-zone/">The Impact of Artificial Intelligence on Regional Security, Threat Perceptions and the Middle East WMD-Free Zone</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Exploring the AI-ICT Security Nexus</title>
		<link>https://unidir.org/publication/exploring-the-ai-ict-security-nexus/</link>
		
		<dc:creator><![CDATA[Jack Conneely]]></dc:creator>
		<pubDate>Thu, 05 Dec 2024 12:47:49 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=20287</guid>

					<description><![CDATA[<p>There is growing attention within the international community to how artificial intelligence (AI) can change how information and communications technology (ICT) activities are conducted. In multilateral discussions, Member States and other stakeholders underline that AI can have both positive and concerning applications in the ICT environment. Indeed, AI could support offensive operations by increasing perpetrators’ capabilities<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/exploring-the-ai-ict-security-nexus/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/exploring-the-ai-ict-security-nexus/">Exploring the AI-ICT Security Nexus</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>There is growing attention within the international community to how artificial intelligence (AI) can change how information and communications technology (ICT) activities are conducted. In multilateral discussions, Member States and other stakeholders underline that AI can have both positive and concerning applications in the ICT environment. Indeed, AI could support offensive operations by increasing perpetrators’ capabilities to penetrate systems and networks, as well as enhance defenders’ posture in detecting, mitigating and responding to intrusions.</p>



<p>This publication unpacks the AI-ICT security nexus and outlines, through an easy-to-read infographic, the main current applications of AI for offensive and defensive purposes. To explain AI applications in the ICT environment, the study introduces UNIDIR’s Intrusion Phases model, a framework that identifies three areas where AI can be used: outside the network perimeter, on the network perimeter, and inside the network perimeter.</p>






<p>Citation: <em>Giacomo Persi Paoli, Samuele Dominioni. “Exploring the AI-ICT Security Nexus”. Geneva, Switzerland: UNIDIR, 2024.</em></p><p>The post <a href="https://unidir.org/publication/exploring-the-ai-ict-security-nexus/">Exploring the AI-ICT Security Nexus</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Large Language Models and International Security: A Primer</title>
		<link>https://unidir.org/publication/large-language-models-and-international-security-a-primer/</link>
		
		<dc:creator><![CDATA[Jack Conneely]]></dc:creator>
		<pubDate>Wed, 06 Nov 2024 16:21:53 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=19926</guid>

					<description><![CDATA[<p>Large language models (LLMs) are AI systems best known for their ability to generate text when embedded in chatbots. The range of uses of this technology is, however, much broader, and includes applications with an impact on international security. This primer provides an overview of LLMs and their relevance and impact in the context of<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/large-language-models-and-international-security-a-primer/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/large-language-models-and-international-security-a-primer/">Large Language Models and International Security: A Primer</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Large language models (LLMs) are AI systems best known for their ability to generate text when embedded in chatbots. The range of uses of this technology is, however, much broader, and includes applications with an impact on international security.</p>



<p>This primer provides an overview of LLMs and their relevance and impact in the context of international security, through select case studies that illustrate the technology’s dual-use character. It covers emerging uses in defence (such as decision support, intelligence and wargaming), as well as areas of potential misuse by malicious actors (for example, biological weapons proliferation, cyber-attacks and disinformation).</p>



<p>The paper highlights the main areas of risk posed by LLMs, as well as limitations both in the technology itself and in how it may be leveraged or misused in the current context.</p>



<p>The primer concludes with suggested action items to mitigate risks and points to possible future directions for the integration of LLMs in conversations about AI governance.</p>



<p>Citation: <em>Ioana Puscas, Large Language Models and International Security: A Primer, UNIDIR, Geneva, 2024</em>.</p><p>The post <a href="https://unidir.org/publication/large-language-models-and-international-security-a-primer/">Large Language Models and International Security: A Primer</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Draft Guidelines for the Development of a National Strategy on AI in Security and Defence</title>
		<link>https://unidir.org/publication/draft-guidelines-for-the-development-of-a-national-strategy-on-ai-in-security-and-defence/</link>
		
		<dc:creator><![CDATA[Jack Conneely]]></dc:creator>
		<pubDate>Thu, 24 Oct 2024 07:27:10 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=19769</guid>

					<description><![CDATA[<p>As innovation in artificial intelligence (AI) proceeds at breakneck speed, states’ appetite for devising frameworks for the governance of the research, development and deployment of these technologies is at its greatest. With calls for governance solutions increasing at both the national and international levels, the number of national strategy documents that frame the development, deployment<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/draft-guidelines-for-the-development-of-a-national-strategy-on-ai-in-security-and-defence/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/draft-guidelines-for-the-development-of-a-national-strategy-on-ai-in-security-and-defence/">Draft Guidelines for the Development of a National Strategy on AI in Security and Defence</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>As innovation in artificial intelligence (AI) proceeds at breakneck speed, states’ appetite for devising frameworks for the governance of the research, development and deployment of these technologies is at its greatest. With calls for governance solutions increasing at both the national and international levels, the number of national strategy documents that frame the development, deployment and use of these technologies has started to grow across regions. </p>



<p>Yet most of these policies exclude or barely touch upon security and defence applications. Only a handful of national strategy documents have a section dedicated to this realm, and even fewer are specifically dedicated to it. This scarcity is at odds with the United Nations Secretary-General’s recommendation for Member States to “urgently develop national strategies on responsible design, development and use of artificial intelligence”, as outlined in his&nbsp;<a href="https://dppa.un.org/en/a-new-agenda-for-peace" target="_blank" rel="noreferrer noopener">New Agenda for Peace</a>.</p>



<p>Against this backdrop, UNIDIR has launched a programme of work to establish guidelines for the development, adoption, implementation and review of national strategies on AI in security and defence. The purpose of the guidelines, both procedural and substantive in nature, is to capture, anticipate and dissect the key issues, considerations and needs that each state must address as it develops (or seeks to develop), adopts, implements and reviews its national strategy on AI in security and defence. In recognition of the host of incentives stemming from the establishment of such strategies, it is hoped that these guidelines will serve as a useful tool for states and non-state stakeholders alike as they seek to address issues related to the responsible development, deployment and use of AI in security and defence.<br><br>The present draft guidelines have been released to provide states and all relevant stakeholders involved in the development, adoption, implementation and review of national strategies on AI in security and defence with an opportunity to review them and provide feedback to UNIDIR. The Institute aims to take a holistic and inclusive approach to the establishment of the guidelines; it thus seeks to capture the varying perspectives, viewpoints and approaches to this issue. We welcome feedback from stakeholders across sectors and domains.</p><p>The post <a href="https://unidir.org/publication/draft-guidelines-for-the-development-of-a-national-strategy-on-ai-in-security-and-defence/">Draft Guidelines for the Development of a National Strategy on AI in Security and Defence</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Governance of Artificial Intelligence in the Military Domain: A Multi-Stakeholder Perspective on Priority Areas</title>
		<link>https://unidir.org/publication/governance-of-artificial-intelligence-in-the-military-domain-a-multi-stakeholder-perspective-on-priority-areas/</link>
		
		<dc:creator><![CDATA[Mireia Mas Vivancos]]></dc:creator>
		<pubDate>Thu, 05 Sep 2024 15:22:41 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=19159</guid>

					<description><![CDATA[<p>In March 2024, the Roundtable for AI, Security and Ethics (RAISE) was launched in Bellagio, Italy. A multi-year collaborative initiative led by UNIDIR and in partnership with Microsoft, RAISE is intended to establish itself as the neutral, trusted and independent platform for inclusive, cross-regional and multisectoral engagement on artificial intelligence (AI) in security and defence.<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/governance-of-artificial-intelligence-in-the-military-domain-a-multi-stakeholder-perspective-on-priority-areas/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/governance-of-artificial-intelligence-in-the-military-domain-a-multi-stakeholder-perspective-on-priority-areas/">Governance of Artificial Intelligence in the Military Domain: A Multi-Stakeholder Perspective on Priority Areas</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>In March 2024, <a href="https://unidir.org/raise/" title="">the Roundtable for AI, Security and Ethics (RAISE)</a> was launched in Bellagio, Italy. A multi-year collaborative initiative led by UNIDIR and in partnership with <a href="https://www.microsoft.com/en-us/" title="">Microsoft</a>, RAISE is intended to establish itself as the neutral, trusted and independent platform for inclusive, cross-regional and multisectoral engagement on artificial intelligence (AI) in security and defence.</p>



<p>The inaugural edition of RAISE convened participants primarily from industry and the research community, along with select government representatives, in light of the upcoming second edition of the Responsible AI in the Military Domain (REAIM) Summit (Seoul, 9-10 September 2024). Its objectives were twofold:</p>



<ol class="wp-block-list">
<li>Review the current state of applications of AI in security and defence contexts, across sectors and geographies but with a particular focus on the military domain; and</li>



<li>Identify key priority areas in which to develop specific guidance and policy recommendations on identified issues.</li>
</ol>



<p>This inaugural edition of RAISE focused specifically on the military domain due to the momentum in this particular area. Through the convening and UNIDIR’s facilitation of the discussions, participants identified six key priority areas for RAISE to advance in the governance of AI in the military domain: building a knowledge base; trust building; the human element in AI uses; data practices; life cycle management; and destabilization.</p>



<p>This Policy Brief aims to lay the foundation for future work to develop recommendations around the six priority themes identified at the meeting, which participants agreed would serve as a basis for cooperation and collective action transcending geopolitical rivalry, cross-sectoral divides and competition.</p><p>The post <a href="https://unidir.org/publication/governance-of-artificial-intelligence-in-the-military-domain-a-multi-stakeholder-perspective-on-priority-areas/">Governance of Artificial Intelligence in the Military Domain: A Multi-Stakeholder Perspective on Priority Areas</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Global Kaleidoscope of Military AI Governance</title>
		<link>https://unidir.org/publication/the-global-kaleidoscope-of-military-ai-governance/</link>
		
		<dc:creator><![CDATA[Mireia Mas Vivancos]]></dc:creator>
		<pubDate>Thu, 05 Sep 2024 15:14:44 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=19152</guid>

					<description><![CDATA[<p>In the run-up to the second iteration of the Responsible AI in the Military Domain (REAIM) Summit, to be held in Seoul, Republic of Korea, on 9-10 September 2024, the Governments of the Republic of Korea and the Netherlands organized, in partnership with Chile, Costa Rica, Kenya, Singapore and Türkiye, a series of five regional<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/the-global-kaleidoscope-of-military-ai-governance/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/the-global-kaleidoscope-of-military-ai-governance/">The Global Kaleidoscope of Military AI Governance</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>In the run-up to the second iteration of the Responsible AI in the Military Domain (REAIM) Summit, to be held in Seoul, Republic of Korea, on 9-10 September 2024, the Governments of the Republic of Korea and the Netherlands organized, in partnership with Chile, Costa Rica, Kenya, Singapore and Türkiye, a series of five regional consultations on responsible artificial intelligence in the military domain.</p>



<p>This report captures UNIDIR’s main reflections on the key takeaways from the five regional consultations. The consultations enabled the dissection of local contexts, realities and approaches to the responsible development, deployment and use of AI in the military and wider security domains, including the identification of areas of nuanced convergence and divergence at the regional level.</p>



<p>Specifically, the report first discusses the reflections shared by States on the unique characteristics of AI technologies and the opportunities they provide in the military domain, as well as States’ views on the risks, challenges and implications stemming from the development, deployment and use of AI in the military and wider security domains. The report then covers six points of convergence that emerged from the consultations, along with five main points of divergence observed across and within regions.</p><p>The post <a href="https://unidir.org/publication/the-global-kaleidoscope-of-military-ai-governance/">The Global Kaleidoscope of Military AI Governance</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Gender and Lethal Autonomous Weapons Systems</title>
		<link>https://unidir.org/publication/gender-and-lethal-autonomous-weapons-systems/</link>
		
		<dc:creator><![CDATA[Asa Cusack]]></dc:creator>
		<pubDate>Mon, 26 Aug 2024 09:26:42 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=19058</guid>

					<description><![CDATA[<p>This factsheet provides an overview of the issue of biases, especially on the basis of gender, that manifest in military applications of artificial intelligence (AI) such as lethal autonomous weapons systems (LAWS). It also addresses how biases in LAWS have been discussed at relevant disarmament forums like the Group of Governmental Experts (GGE) meetings under<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/gender-and-lethal-autonomous-weapons-systems/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/gender-and-lethal-autonomous-weapons-systems/">Gender and Lethal Autonomous Weapons Systems</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>This factsheet provides an overview of the issue of biases, especially on the basis of gender, that manifest in military applications of artificial intelligence (AI) such as lethal autonomous weapons systems (LAWS).</p>



<p>It also addresses how biases in LAWS have been discussed at relevant disarmament forums, such as the Group of Governmental Experts (GGE) meetings under the Convention on Certain Conventional Weapons (CCW).</p>



<p>The factsheet further recommends areas of action for a host of stakeholders to ensure that gender biases are mitigated in military applications of AI.</p>






<p>Citation: <em>Gender and Disarmament &amp; Security and Technology Programmes (2024) &#8220;Factsheet: Gender and Lethal Autonomous Weapons Systems&#8221;, UNIDIR, Geneva, Switzerland.</em></p><p>The post <a href="https://unidir.org/publication/gender-and-lethal-autonomous-weapons-systems/">Gender and Lethal Autonomous Weapons Systems</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Confidence-Building Measures for Artificial Intelligence: A Multilateral Perspective</title>
		<link>https://unidir.org/publication/confidence-building-measures-for-artificial-intelligence-a-multilateral-perspective/</link>
		
		<dc:creator><![CDATA[Asa Cusack]]></dc:creator>
		<pubDate>Wed, 31 Jul 2024 15:31:34 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=18824</guid>

					<description><![CDATA[<p>The UNIDIR project on Confidence-Building Measures (CBMs) for Artificial Intelligence (AI) aimed to advance conversations about the objectives, format and ways forward for AI-focused CBMs at the multilateral level. The project consisted of two distinct phases. This report concludes the second phase of the project and presents a framework for conceptual and practical considerations for<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/confidence-building-measures-for-artificial-intelligence-a-multilateral-perspective/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/confidence-building-measures-for-artificial-intelligence-a-multilateral-perspective/">Confidence-Building Measures for Artificial Intelligence: A Multilateral Perspective</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>The UNIDIR project on Confidence-Building Measures (CBMs) for Artificial Intelligence (AI) aimed to advance conversations about the objectives, format and ways forward for AI-focused CBMs at the multilateral level.</p>



<p>The project consisted of two distinct phases:</p>



<ol class="wp-block-list">
<li>The first developed a comprehensive taxonomy of AI risks in the context of international security.</li>



<li>The second provided a theoretical and historical overview of CBMs, probing for viable ways to move forward and seeking input from States.</li>
</ol>



<p>This report concludes the second phase of the project and presents a framework for conceptual and practical considerations for CBMs for AI, drawing on lessons learned from other domains and from perspectives shared by a diverse group of States.</p>



<p>This study and the consultation UNIDIR convened with national representatives, which included a workshop and surveys, brought to light some key areas of agreement and shared concerns. As conversations about future CBMs begin to take shape, this publication provides a realistic assessment of current priorities and invites reflection on next steps.</p>



<p>The research output from the first phase of the project is available in the form of our earlier publication <a href="https://unidir.org/publication/ai-and-international-security-understanding-the-risks-and-paving-the-path-for-confidence-building-measures/" title="">AI and International Security: Understanding the Risks and Paving the Path for Confidence-Building Measures</a>.</p>



<p>The initial framing paper which launched the project is also available as <a href="https://unidir.org/publication/confidence-building-measures-for-artificial-intelligence-a-framing-paper/" title="">Confidence-Building Measures for Artificial Intelligence: A Framing Paper</a>.</p>






<p>Citation: <em>Ioana Puscas, Confidence-Building Measures for Artificial Intelligence: A Multilateral Perspective, UNIDIR, Geneva, 2024.</em></p><p>The post <a href="https://unidir.org/publication/confidence-building-measures-for-artificial-intelligence-a-multilateral-perspective/">Confidence-Building Measures for Artificial Intelligence: A Multilateral Perspective</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Exploring Synthetic Data for Artificial Intelligence and Autonomous Systems: A Primer</title>
		<link>https://unidir.org/publication/exploring-synthetic-data-for-artificial-intelligence-and-autonomous-systems-a-primer/</link>
		
		<dc:creator><![CDATA[UNIDIR Comms]]></dc:creator>
		<pubDate>Thu, 30 Nov 2023 15:57:20 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=15710</guid>

					<description><![CDATA[<p>Synthetic data refers to artificially created data that seeks to reproduce the characteristics of real-world datasets in order to support the training of highly complex AI systems. The availability, quality and diversity of data have been recurrent challenges for training highly complex AI and autonomous systems, and defence organizations are increasingly looking into<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/exploring-synthetic-data-for-artificial-intelligence-and-autonomous-systems-a-primer/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/exploring-synthetic-data-for-artificial-intelligence-and-autonomous-systems-a-primer/">Exploring Synthetic Data for Artificial Intelligence and Autonomous Systems: A Primer</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Synthetic data refers to artificially created data that seeks to reproduce the characteristics of real-world datasets in order to support the training of highly complex AI systems. The availability, quality and diversity of data have been recurrent challenges for training highly complex AI and autonomous systems, and defence organizations are increasingly looking into opportunities provided by synthetic data. The characteristics and potential benefits offered by synthetic data, along with proven applications of the technology in various sectors, make it a relevant topic for debates surrounding the use of AI within the context of international security.</p>



<p>This UNIDIR Primer provides an overview of the main opportunities and limitations of synthetic data in the training of AI systems. While synthetic data can be a proxy for real-world data and help shorten training cycles, among other benefits, there are also significant risks and challenges associated with its use.</p>



<p>The Primer explores existing data challenges, both technical and organizational, introduces key technical characteristics and methods of generating synthetic data, and analyzes implications of using synthetic data in the context of international security, including for autonomous systems and in the cyber realm.</p>



<p><strong>Sponsor Organizations:</strong>&nbsp;The European Union; the governments of the Czech Republic, Germany, Italy, the Netherlands and Switzerland; and Microsoft.</p>



<p>Citation: <em>Harry Deng (2023). &#8220;Exploring Synthetic Data for Artificial Intelligence and Autonomous Systems: A Primer&#8221;, UNIDIR, Geneva, Switzerland.</em></p><p>The post <a href="https://unidir.org/publication/exploring-synthetic-data-for-artificial-intelligence-and-autonomous-systems-a-primer/">Exploring Synthetic Data for Artificial Intelligence and Autonomous Systems: A Primer</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI and International Security: Understanding the Risks and Paving the Path for Confidence-Building Measures</title>
		<link>https://unidir.org/publication/ai-and-international-security-understanding-the-risks-and-paving-the-path-for-confidence-building-measures/</link>
		
		<dc:creator><![CDATA[UNIDIR Comms]]></dc:creator>
		<pubDate>Thu, 12 Oct 2023 13:08:28 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=14365</guid>

					<description><![CDATA[<p>This publication is part of the UNIDIR project on ‘Confidence-Building Measures for Artificial Intelligence’. Advances in artificial intelligence (AI) in recent years, combined with the technology’s scalability and convergence with other domains, have prompted numerous concerns about the risks of AI to global security, including risks of misuse and escalation. However, policy discussions still lack<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/ai-and-international-security-understanding-the-risks-and-paving-the-path-for-confidence-building-measures/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/ai-and-international-security-understanding-the-risks-and-paving-the-path-for-confidence-building-measures/">AI and International Security: Understanding the Risks and Paving the Path for Confidence-Building Measures</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><em>This publication is part of the UNIDIR project on ‘Confidence-Building Measures for Artificial Intelligence’.</em></p>



<p>Advances in artificial intelligence (AI) in recent years, combined with the technology’s scalability and convergence with other domains, have prompted numerous concerns about the risks of AI to global security, including risks of misuse and escalation. However, policy discussions still lack a comprehensive analysis of the technology&#8217;s risks and how categories of risks are interrelated.</p>



<p>This report provides an overview of the main categories of risks of AI in the context of international peace and security, across domains of use and applications.</p>



<p>This research concludes phase one of the UNIDIR project on CBMs for AI. It provides a basis for multi-stakeholder engagements to understand the risks and to advance discussions about CBMs, which can help promote a more transparent, safe and responsible environment for the development and use of AI.</p>



<p>Citation: <em>Ioana Puscas (2023) &#8220;AI and International Security: Understanding the Risks and Paving the Path for Confidence-Building Measures&#8221;, UNIDIR, Geneva, Switzerland</em></p><p>The post <a href="https://unidir.org/publication/ai-and-international-security-understanding-the-risks-and-paving-the-path-for-confidence-building-measures/">AI and International Security: Understanding the Risks and Paving the Path for Confidence-Building Measures</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Artificial Intelligence Beyond Weapons: Application and Impact of AI in the Military Domain</title>
		<link>https://unidir.org/publication/artificial-intelligence-beyond-weapons-application-and-impact-of-ai-in-the-military-domain/</link>
		
		<dc:creator><![CDATA[UNIDIR Comms]]></dc:creator>
		<pubDate>Wed, 11 Oct 2023 09:56:42 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/?post_type=publication&#038;p=14212</guid>

					<description><![CDATA[<p>Within the United Nations, the application of artificial intelligence (AI) in the military domain has, to date, primarily been discussed in the context of the United Nations Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS). However, the application of AI within the military domain extends beyond the issue<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/artificial-intelligence-beyond-weapons-application-and-impact-of-ai-in-the-military-domain/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/artificial-intelligence-beyond-weapons-application-and-impact-of-ai-in-the-military-domain/">Artificial Intelligence Beyond Weapons: Application and Impact of AI in the Military Domain</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Within the United Nations, the application of artificial intelligence (AI) in the military domain has, to date, primarily been discussed in the context of the United Nations Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS). However, the application of AI within the military domain extends beyond the issue of LAWS.</p>



<p>Amid discussions and debates around the opportunities and risks of AI for military purposes, as well as the governance and responsible use of these technologies, this new report from the United Nations Institute for Disarmament Research (UNIDIR) aims to increase understanding of the role of AI in the execution of military tasks beyond applications relating to the use of force and the narrow tasks of target selection and target engagement within the targeting process.</p>



<p>The report provides an overview of current and near-future AI capabilities relevant to 18 military tasks, and discusses the strengths and limitations of applying AI to these tasks.</p>






<p>Citation: <em>Sarah Grand-Clément, “Artificial Intelligence Beyond Weapons: Application and Impact of AI in the Military Domain”, UNIDIR, Geneva, 2023.</em></p><p>The post <a href="https://unidir.org/publication/artificial-intelligence-beyond-weapons-application-and-impact-of-ai-in-the-military-domain/">Artificial Intelligence Beyond Weapons: Application and Impact of AI in the Military Domain</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Proposals Related to Emerging Technologies in the Area of Lethal Autonomous Weapons Systems: A Resource Paper (updated)</title>
		<link>https://unidir.org/publication/proposals-related-to-emerging-technologies-in-the-area-of-lethal-autonomous-weapons-systems-a-resource-paper-updated/</link>
		
		<dc:creator><![CDATA[devx]]></dc:creator>
		<pubDate>Tue, 09 May 2023 22:00:00 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/publication/proposals-related-to-emerging-technologies-in-the-area-of-lethal-autonomous-weapons-systems-a-resource-paper/</guid>

					<description><![CDATA[<p>This resource paper offers a comparative analysis of the content of the different proposals related to emerging technologies in the area of lethal autonomous weapon systems (LAWS) submitted by States to the Group of Governmental Experts on LAWS up until the end of 2022.* It identifies commonality in views as well as areas that require<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/proposals-related-to-emerging-technologies-in-the-area-of-lethal-autonomous-weapons-systems-a-resource-paper-updated/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/proposals-related-to-emerging-technologies-in-the-area-of-lethal-autonomous-weapons-systems-a-resource-paper-updated/">Proposals Related to Emerging Technologies in the Area of Lethal Autonomous Weapons Systems: A Resource Paper (updated)</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>This resource paper offers a comparative analysis of the content of the different proposals related to emerging technologies in the area of lethal autonomous weapon systems (LAWS) submitted by States to the Group of Governmental Experts on LAWS up until the end of 2022.*</p>



<p>It identifies commonality in views as well as areas that require further discussion in relation to eleven thematic areas covered in the proposals and the Group’s discussions. These include:</p>



<ol class="wp-block-list">
<li>Application of International Humanitarian Law (IHL)</li>



<li>Weapons prohibitions and other regulations/restrictions</li>



<li>Application of International Human Rights Law (IHRL) and International Criminal Law (ICL)</li>



<li>Characterisation</li>



<li>General requirements regarding human-machine interaction and human control</li>



<li>Responsibility and accountability</li>



<li>Legal reviews</li>



<li>Risk mitigation</li>



<li>Ethical considerations</li>



<li>Peaceful uses of Artificial Intelligence (AI)</li>



<li>Potential benefits of autonomy in weapon systems</li>
</ol>



<p>Also available: <strong><a href="https://unidir.org/sites/default/files/2023-05/UNIDIR_Proposals_Emerging_Technologies_Lethal_Autonomous_Weapons_Systems_Annex_A_2023.pdf">Annex A</a></strong>, which includes relevant excerpts from proposals related to emerging technologies in the area of lethal autonomous weapons systems.</p>



<p><em>* This Resource Paper is an updated version of the previous document UNIDIR released in July 2022, and includes the following additional submissions to the GGE on LAWS in 2022 that were not included in the previous version: Elements for a Legally Binding Instrument to Address the Challenges Posed by Autonomy in Weapon Systems; Protocol VI; Working Paper submitted by Finland, France, Germany, the Netherlands, Norway, Spain and Sweden; Working Paper of the People’s Republic of China on LAWS, and Working Paper of the Russian Federation “Application of International Law to Lethal Autonomous Weapons Systems (LAWS)”.</em></p>



<p><strong>Sponsor Organizations:</strong> Support from UNIDIR’s core funders provides the foundation for all of the Institute’s activities. Both this paper and the original resource paper were prepared with support from the Governments of New Zealand and Switzerland. This paper was prepared by UNIDIR’s Security and Technology Programme, which is funded by the governments of Czechia, Germany, Italy, the Netherlands and Switzerland, and by Microsoft.</p>



<p>Citation: Ioana Puscas and Alisha Anand (2023) &#8220;Proposals Related to Emerging Technologies in the Area of Lethal Autonomous Weapons Systems: A Resource Paper (Updated)&#8221;, UNIDIR, Geneva, Switzerland.</p><p>The post <a href="https://unidir.org/publication/proposals-related-to-emerging-technologies-in-the-area-of-lethal-autonomous-weapons-systems-a-resource-paper-updated/">Proposals Related to Emerging Technologies in the Area of Lethal Autonomous Weapons Systems: A Resource Paper (updated)</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The 2022 Innovations Dialogue: AI Disruption, Peace and Security (Conference Report)</title>
		<link>https://unidir.org/publication/the-2022-innovations-dialogue-ai-disruption-peace-and-security-conference-report/</link>
		
		<dc:creator><![CDATA[devx]]></dc:creator>
		<pubDate>Sun, 30 Apr 2023 22:00:00 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/publication/the-2022-innovations-dialogue-ai-disruption-peace-and-security-conference-report/</guid>

					<description><![CDATA[<p>This report provides a summary of the key themes, issues and takeaways that emerged from discussions during the 2022 Innovations Dialogue. Part I of the report seeks to provide a foundational understanding of the concept of AI and its state of play. Part II examines the disruptive impact of AI on international peace and security.<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/the-2022-innovations-dialogue-ai-disruption-peace-and-security-conference-report/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/the-2022-innovations-dialogue-ai-disruption-peace-and-security-conference-report/">The 2022 Innovations Dialogue: AI Disruption, Peace and Security (Conference Report)</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>This report provides a summary of the key themes, issues and takeaways that emerged from discussions during the 2022 Innovations Dialogue.</p>



<p><strong>Part I</strong> of the report seeks to provide a foundational understanding of the concept of AI and its state of play.</p>



<p><strong>Part II</strong> examines the disruptive impact of AI on international peace and security. In particular, it discusses the risks and benefits of uses of AI in military operations and across domains of warfare as well as the opportunities and challenges of harnessing AI technologies for conflict prevention and peacebuilding.</p>



<p><strong>Part III</strong> of the report examines the path to Responsible AI. It unpacks the RAI governance approach and discusses how it is and can be operationalized. It also reflects on the value of building an RAI culture.</p>



<h3 class="wp-block-heading">HIGHLIGHTS &amp; RECORDINGS:</h3>



<ul class="wp-block-list">
<li>Read a&nbsp;brief summary of the <a href="/sites/default/files/2023-05/2022_Innovations_Dialogue_Highlights_web.pdf"><strong>Conference Highlights</strong></a></li>



<li>Watch <a href="https://youtu.be/gUqMproYlS4?t=43"><strong>all of the conference sessions</strong></a> again (via the&nbsp;<a href="https://www.youtube.com/@UNIDIR/featured">UNIDIR YouTube channel</a>&nbsp;or below)</li>
</ul>



<p><iframe title="YouTube video player" src="https://www.youtube.com/embed/gUqMproYlS4?start=43" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>



<p>Citation: <em>Wenting He and Alisha Anand (2023) &#8220;The 2022 Innovations Dialogue: AI Disruption, Peace and Security&#8221;, UNIDIR, Geneva, Switzerland.</em></p><p>The post <a href="https://unidir.org/publication/the-2022-innovations-dialogue-ai-disruption-peace-and-security-conference-report/">The 2022 Innovations Dialogue: AI Disruption, Peace and Security (Conference Report)</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Towards Responsible AI in Defence: A Mapping and Comparative Analysis of AI Principles Adopted by States</title>
		<link>https://unidir.org/publication/towards-responsible-ai-in-defence-a-mapping-and-comparative-analysis-of-ai-principles-adopted-by-states/</link>
		
		<dc:creator><![CDATA[devx]]></dc:creator>
		<pubDate>Sun, 12 Feb 2023 23:00:00 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/publication/towards-responsible-ai-in-defence-a-mapping-and-comparative-analysis-of-ai-principles-adopted-by-states/</guid>

					<description><![CDATA[<p>Continuous advances in the field of artificial intelligence (AI) and efforts to integrate AI systems in critical sectors are gradually transforming all aspects of society, including in the defence sector. Although advancements in AI present unprecedented opportunities to augment human capabilities and improve decision-making in various ways, they also present significant legal, safety, security and<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/towards-responsible-ai-in-defence-a-mapping-and-comparative-analysis-of-ai-principles-adopted-by-states/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/towards-responsible-ai-in-defence-a-mapping-and-comparative-analysis-of-ai-principles-adopted-by-states/">Towards Responsible AI in Defence: A Mapping and Comparative Analysis of AI Principles Adopted by States</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Continuous advances in the field of artificial intelligence (AI) and efforts to integrate AI systems in critical sectors are gradually transforming all aspects of society, including in the defence sector. Although advancements in AI present unprecedented opportunities to augment human capabilities and improve decision-making in various ways, they also present significant legal, safety, security and ethical concerns. Thus, to ensure that AI systems are developed and used lawfully, ethically, safely, securely and responsibly, governments and intergovernmental organisations are developing a range of normative instruments. This approach is broadly known as &#8220;Responsible AI&#8221;, or ethical or trustworthy AI. Presently, the most notable approach to Responsible AI is the development and operationalisation of responsible or ethical AI principles.</p>



<p>UNIDIR&#8217;s project Towards Responsible AI in Defence seeks to, first, build a common understanding of the key facets of responsible research, design, development, deployment, and use of AI systems. It will then examine the operationalisation of Responsible AI in the defence sector, including identifying and facilitating the exchange of good practices. The project has three main aims. First, it aims to encourage states to adopt and operationalise tools that can enable responsible behaviour in the development and use of AI systems. It also seeks to help increase transparency and foster trust among states and other key AI actors. Finally, the project aims to build a shared understanding of the key elements of Responsible AI and how they may be operationalised, which may inform the development of internationally accepted governance frameworks.</p>



<p>This research brief provides an overview of the aims of the project. It also outlines the research methodology for and preliminary findings of the project&#8217;s first phase: the development of a common taxonomy of principles and a comparative analysis of AI principles adopted by states.</p>



<p>Citation:<em> Alisha Anand and Harry Deng (2023) &#8220;Towards Responsible AI in Defence: A Mapping and Comparative Analysis of AI Principles Adopted by States&#8221;, UNIDIR, Geneva, Switzerland.</em></p><p>The post <a href="https://unidir.org/publication/towards-responsible-ai-in-defence-a-mapping-and-comparative-analysis-of-ai-principles-adopted-by-states/">Towards Responsible AI in Defence: A Mapping and Comparative Analysis of AI Principles Adopted by States</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Confidence-Building Measures for Artificial Intelligence: A Framing Paper</title>
		<link>https://unidir.org/publication/confidence-building-measures-for-artificial-intelligence-a-framing-paper/</link>
		
		<dc:creator><![CDATA[devx]]></dc:creator>
		<pubDate>Sun, 18 Dec 2022 23:00:00 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/publication/confidence-building-measures-for-artificial-intelligence-a-framing-paper/</guid>

					<description><![CDATA[<p>The increasing use of artificial intelligence (AI) in military operations and weapons systems introduces a wide range of risks, including risks of misuse and inadvertent escalation in conflict. While the international community has begun to address some of these concerns both at the national level and in regional and multilateral forums, further dedicated efforts are<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/confidence-building-measures-for-artificial-intelligence-a-framing-paper/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/confidence-building-measures-for-artificial-intelligence-a-framing-paper/">Confidence-Building Measures for Artificial Intelligence: A Framing Paper</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>The increasing use of artificial intelligence (AI) in military operations and weapons systems introduces a wide range of risks, including risks of misuse and inadvertent escalation in conflict. While the international community has begun to address some of these concerns both at the national level and in regional and multilateral forums, further dedicated efforts are needed to map and mitigate risks.&nbsp;</p>



<p>Confidence-building measures (CBMs) for AI can provide flexible options for the future development and deployment of AI-enabled systems.&nbsp;This framing paper introduces a new UNIDIR project, which aims at developing a possible roadmap for the future elaboration of CBMs for AI.</p>



<p>The first phase of the project consists of a risk-mapping analysis, unpacking risks of the technology and assessing how they may translate into risks for international peace and security. The second phase of the project will consider pathways for the elaboration of CBMs in a series of multi-stakeholder engagements.</p>



<p>Citation: <em>Ioana Puscas (2022) &#8220;Confidence-Building Measures for Artificial Intelligence: A Framing Paper&#8221;, UNIDIR, Geneva, Switzerland.</em></p><p>The post <a href="https://unidir.org/publication/confidence-building-measures-for-artificial-intelligence-a-framing-paper/">Confidence-Building Measures for Artificial Intelligence: A Framing Paper</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Does Military AI Have Gender? Understanding Bias and Promoting Ethical Approaches in Military Applications of AI</title>
		<link>https://unidir.org/publication/does-military-ai-have-gender-understanding-bias-and-promoting-ethical-approaches-in-military-applications-of-ai/</link>
		
		<dc:creator><![CDATA[devx]]></dc:creator>
		<pubDate>Mon, 06 Dec 2021 23:00:00 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/publication/does-military-ai-have-gender-understanding-bias-and-promoting-ethical-approaches-in-military-applications-of-ai/</guid>

					<description><![CDATA[<p>&#8220;Does Military AI Have Gender?&#8221; uncovers the significance of gender norms in the development and deployment of artificial intelligence (AI) for military purposes. The report addresses gender bias in data collection, algorithms and computer processing.&#160; Drawing on research in ethical AI, the report outlines avenues for countering bias and mitigating harm, including a gender-based review<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/does-military-ai-have-gender-understanding-bias-and-promoting-ethical-approaches-in-military-applications-of-ai/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/does-military-ai-have-gender-understanding-bias-and-promoting-ethical-approaches-in-military-applications-of-ai/">Does Military AI Have Gender? Understanding Bias and Promoting Ethical Approaches in Military Applications of AI</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>&#8220;Does Military AI Have Gender?&#8221; uncovers the significance of gender norms in the development and deployment of artificial intelligence (AI) for military purposes. The report addresses gender bias in data collection, algorithms and computer processing.&nbsp;</p>



<p>Drawing on research in ethical AI, the report outlines avenues for countering bias and mitigating harm, including a gender-based review of military applications of AI. In doing so, it seeks to chart a path for technology development that promotes – rather than hinders – gender equity and contributes to gender mainstreaming in the military. </p>



<p>Citation: <em>Katherine Chandler (2021) &#8220;Does Military AI Have Gender? Understanding Bias and Promoting Ethical Approaches in Military Applications of AI&#8221;, UNIDIR, Geneva, <a href="https://doi.org/10.37559/GEN/2021/04">https://doi.org/10.37559/GEN/2021/04</a>.</em></p><p>The post <a href="https://unidir.org/publication/does-military-ai-have-gender-understanding-bias-and-promoting-ethical-approaches-in-military-applications-of-ai/">Does Military AI Have Gender? Understanding Bias and Promoting Ethical Approaches in Military Applications of AI</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Black Box, Unlocked</title>
		<link>https://unidir.org/publication/the-black-box-unlocked/</link>
		
		<dc:creator><![CDATA[devx]]></dc:creator>
		<pubDate>Mon, 21 Sep 2020 22:00:00 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/publication/the-black-box-unlocked/</guid>

					<description><![CDATA[<p>Predictability and understandability are widely held to be vital characteristics of artificially intelligent systems. Put simply: AI should do what we expect it to do, and it must do so for intelligible reasons. This consideration stands at the heart of the ongoing discussion about lethal autonomous weapon systems and other forms of military AI. But<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/the-black-box-unlocked/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/the-black-box-unlocked/">The Black Box, Unlocked</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Predictability and understandability are widely held to be vital characteristics of artificially intelligent systems. Put simply: AI should do what we expect it to do, and it must do so for intelligible reasons. This consideration stands at the heart of the ongoing discussion about lethal autonomous weapon systems and other forms of military AI. But what does it mean for an intelligent system to be &#8220;predictable&#8221; and &#8220;understandable&#8221; (or, conversely, unpredictable and unintelligible)? What is the role of predictability and understandability in the development, use, and assessment of military AI? What is the appropriate level of predictability and understandability for AI weapons in any given instance of use?&nbsp; And how can these thresholds be assured?&nbsp;</p>



<p>This study provides a clear, comprehensive introduction to these questions, and proposes a range of avenues for action by which they may be addressed.</p>



<p>Citation: <em>Arthur Holland Michel (2020) &#8220;The Black Box, Unlocked&#8221;, UNIDIR, Geneva. doi: 10.37559/SecTec/20/AI1</em></p>



<p><iframe src="https://www.youtube.com/embed/1eYwUa2HF2w" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>



<p><strong>Teaser:</strong> Predictability and Understandability in Military AI</p><p>The post <a href="https://unidir.org/publication/the-black-box-unlocked/">The Black Box, Unlocked</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Modernizing Arms Control</title>
		<link>https://unidir.org/publication/modernizing-arms-control/</link>
		
		<dc:creator><![CDATA[devx]]></dc:creator>
		<pubDate>Sun, 30 Aug 2020 22:00:00 +0000</pubDate>
				<guid isPermaLink="false">https://unidir.org/publication/modernizing-arms-control/</guid>

					<description><![CDATA[<p>This report provides an initial insight into why the international security community&#160;may need to consider regulating artificial intelligence (AI) applications that fall in&#160;the digital grey zone between AI-enabled weapon systems (e.g. lethal autonomous&#160;weapon systems) and military uses of civilian AI applications (e.g. logistics, transport). It also provides an initial exploration of the familiar tools the<span class="excerpt-read-more">... <a class="btn--link" href="https://unidir.org/publication/modernizing-arms-control/">Read more</a></span></p>
<p>The post <a href="https://unidir.org/publication/modernizing-arms-control/">Modernizing Arms Control</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>This report provides an initial insight into why the international security community&nbsp;may need to consider regulating artificial intelligence (AI) applications that fall in&nbsp;the digital grey zone between AI-enabled weapon systems (e.g. lethal autonomous&nbsp;weapon systems) and military uses of civilian AI applications (e.g. logistics, transport). It also provides an initial exploration of the familiar tools the community has at its disposal for such regulation.</p>



<p><strong>Teaser:</strong> Exploring responses to the use of AI in military decision-making</p>



<p><strong>Sponsor Organizations:</strong> Germany, the Netherlands, Norway, Switzerland, Microsoft and CIFAR</p>



<p>Citation:<em> Giacomo Persi Paoli, Kerstin Vignard, David Danks and Paul Meyer (2020) &#8220;Modernizing Arms Control: Exploring Responses to the Use of AI in Military Decision-Making&#8221;, UNIDIR, Geneva, Switzerland.</em></p><p>The post <a href="https://unidir.org/publication/modernizing-arms-control/">Modernizing Arms Control</a> first appeared on <a href="https://unidir.org">UNIDIR</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
