
Conference
Autonomous Weapon Systems: Understanding Bias in Machine Learning and Artificial Intelligence

5 October 2017, New York, USA

Increasing autonomy in weapon systems is the focus of growing international attention, as are the implications of artificial intelligence for international security. The algorithms that make increasing autonomy in weapon systems possible are not immune to bias. It is therefore critical to develop a better understanding of how biases influence outcomes in learning systems. What can we learn about bias from other fields where decisions with significant human impact are already made by learning algorithms? What do we already know about detecting bias, both unintentional and intentional? How can we know in which ways algorithms are biased? Is all bias bad? And are there specific issues concerning bias that we need to be mindful of in relation to discussions at the upcoming GGE on Lethal Autonomous Weapon Systems?

Support from UNIDIR's core funders provides the foundation for all of the Institute's activities.
In addition, dedicated project funding was received from the Government of Germany.

This conference is part of the project: Autonomous Weapon Systems: Understanding Bias in Machine Learning and Artificial Intelligence.

Related Projects

The Weaponization of Increasingly Autonomous Technologies (Phase III)

Contact
Kerstin Vignard, kvignard@unog.ch, Tel. +41 (0)22 917 15 82
Tae Takahashi, ttakahashi@unog.ch, Tel. +41 (0)22 917 15 83
Fax +41 (0)22 917 01 76