Human–AI teaming (HAT) refers to the integration of humans and AI systems as interdependent, coordinated units working together to complete tasks and achieve goals. The aim of HAT is to leverage the strengths of both humans and AI through a dynamic, collaborative and evolving distribution of tasks. The nature of these relationships depends heavily on the type of system, the nature of the AI and its intended use.
While the concept does not imply equal taskwork or responsibility between humans and AI systems, the use of such systems as teammates in military and safety-critical contexts introduces important questions about human control, responsibility and accountability.
A recording of the meeting is available on UNIDIR’s YouTube channel or below:
This UNIDIR webinar, featuring an interdisciplinary group of experts, discussed the implications of human–AI teaming for human control in the context of autonomous weapon systems and explored key questions, including:
- What does human–AI teaming mean in a military context?
- What is the value of the “teaming” metaphor, and why is it preferred to alternatives (e.g., “AI tools” rather than teammates)?
- What are the implications for system design (e.g., interface design)?
- What are the challenges for team composition and for training?
- What is the role of explainable AI models in developing appropriate levels of trust and improving human–AI teamwork?
Speakers
- Dr Jurriaan van Diggelen, Senior Research Scientist, the Netherlands Organisation for Applied Scientific Research
- Dr Mica Endsley, President, SA Technologies
- Dr Nathan J. McNeese, Dean’s Professor, Clemson University
- Prof. Mary (Missy) Cummings, Professor, George Mason University
- Dr Mennatallah El-Assady, Research Fellow, ETH AI Center
Read the speakers’ full biographies here.