AI-enabled systems depend on algorithms, but those same algorithms are susceptible to bias. Algorithmic biases take many forms, arise for a variety of reasons, and call for different standards and techniques of mitigation. This primer characterizes algorithmic biases, explains their potential relevance to decision-making by autonomous weapons systems, and raises key questions about their impacts and the possible responses to them.
Citation: Security and Technology Programme (2018) "Algorithmic Bias and the Weaponization of Increasingly Autonomous Technologies", UNIDIR, Geneva.