Performance or Explainability? A Law of Armed Conflict Perspective

Kwik J and van Engers T, ‘Performance or Explainability? A Law of Armed Conflict Perspective’ in Angelos Kornilakis and others (eds), Artificial Intelligence and Normative Challenges: International and Comparative Legal Perspectives (Springer Nature Switzerland AG 2023), https://link.springer.com/10.1007/978-3-031-41081-9_14.

Abstract

Machine learning techniques lie at the centre of many recent advancements in artificial intelligence (AI), including in weapon systems. While powerful, these techniques rely on opaque models whose internal workings are generally quite difficult to explain, which has necessitated the development of explainable AI (XAI). In the military domain, both performance and explainability are important and legally required by international humanitarian law (IHL). In practice, however, these two desiderata are in conflict, as improving explainability may involve paying an opportunity cost in performance, and vice versa. It is unclear how IHL requires States to address this dilemma. In this article, we attempt to operationalise normative IHL requirements in terms of P (performance) and X (explainability) to derive qualitative guidelines for decision-makers on this issue. We first explain the explainability-performance trade-off, what causes it, and what its consequences are. Then, we explore relevant IHL principles that include P and X as requirements, and develop four tenets derived from these principles. We demonstrate how IHL prescribes minimum values for both P and X, but that once these values are achieved, P should be prioritised over X. We conclude by formulating a general guideline and providing an example of how this would impact model choice.