Predictability and understandability are widely held to be vital qualities for lethal autonomous weapon systems (LAWS). However, many questions remain unanswered as to how these qualities can be defined, characterized, and enshrined in the development and real-world use of these technologies.
A recording of the event is now available on our YouTube channel.
What are predictability and understandability in a technical and operational sense, and what factors determine the degree to which a LAWS's actions could be understood or anticipated? How can AI-based systems be made more predictable or explainable? Conversely, to what extent is some degree of unpredictability and unexplainability likely to be inherent in any complex autonomous weapon used in the real world? What constitutes “appropriate” or “sufficient” predictability and understandability for autonomous weapons, and how could these qualities be measured and assured?
This side event to the 2021 GGE on LAWS unpacked these and other relevant considerations and pointed to concrete avenues of action for all stakeholders involved in the LAWS debate.
WHEN & WHERE
29 September 2021 – 13:30-14:45 CEST | Online
SPEAKERS
Aude Billard, Full Professor, EPFL
Arthur Holland Michel, Associate Researcher, UNIDIR
PARTICIPANTS
UNIDIR encourages the participation of representatives and experts working on or interested in issues pertaining to lethal autonomous weapon systems and other forms of military AI.
Kindly RSVP by 29 September 2021, 13:00 CEST. Registration is mandatory. Please contact sectec-unidir@un.org with any questions.