
RP3: The Conceptual Roots of the Criminal Responsibility Gap in Autonomous Weapon Systems

One major reason for the controversy around autonomous weapon systems (‘AWS’) is the concern that no criminal liability is possible for resulting war crimes. This article takes a comprehensive look at one factor, the cognitive element of mens rea, and how and when characteristics specific to artificial intelligence (‘AI’) can render it more difficult to assign criminal liability to the deploying commander. It takes a multidisciplinary approach, considering both technical characteristics of modern AI and realistic conditions under which AWS are used.

The article finds that modern AI primarily induces reduced perceivability through imperfect tracking of human intuition, opacity and generic reliability metrics. It also finds that AWS make it easier to willingly avoid acquiring cognition simply through inaction. It then locates the exact loci of the problem within criminal law’s spectrum of intent, finding that the epicentre of difficulty lies at the intermediate level of risk-taking, and particularly in situations of generic risk: the condition where there is awareness only of a nondescript, indeterminate probability of ‘something going wrong’.

In contrast, no-gap situations are identified higher up the ladder of intent, where there is purpose or virtual certainty, and judicious gaps lower down, where we want ‘impunity’ for justified risk-taking and genuine accidents. The article also considers the dangers of manufactured ignorance, where the risk could theoretically be known but in practice was not, owing to a prior, separate omission. It ends with recommendations to address these challenges, including reducing opacity, standardising iterative investigations and enforcing technical training.
