RM3: Design methodologies for addressing Ethical, Legal and Societal Aspects (ELSA) of military AI applications (V2.0)


Abstract
This report, the first deliverable in WP2 (D2.1) of the ELSA Lab Defence project,
explores design methodologies for addressing the ethical, legal and societal aspects (ELSA)
of the use of military artificial intelligence (AI) applications. The methodologies mapped in
this report serve as the starting point for developing, within the project, a comprehensive
design methodology tailored to identifying ELSA issues in the use of AI technologies in the
defence domain and to providing guidance for designing military AI technologies that avoid
or minimise such issues. To map the relevant methodologies, a three-step approach is used.
First, to ensure the research results have practical benefits, the ELSA Lab has developed three
unique case studies. These use cases were selected because they generate a diverse range of
ELSA-related problems, are pertinent to Dutch defence interests and, despite their relevance,
have so far remained under-examined. The three established use cases are (1) countering
cognitive warfare using Early Warning Systems (EWS); (2) (non-lethal) autonomous robots; and
(3) military decision-support systems.
Second, based on a literature study, the most important ELSA issues regarding AI are
investigated, without a particular focus on the military domain. A total of nine values affected
by the use of AI are identified and described: dignity, human agency and autonomy,
responsibility, life and physical integrity, privacy and data protection, liberty, justice,
democratic decision-making and political participation, and peace and international security.
These values, as located in the existing literature, are then placed in the military AI context
and linked to ELSA issues.
Third, existing ELSA design methods are identified and described. Most of these methods do
not focus on defence and cannot be applied directly to the defence context, meaning that they
may need to be adjusted and further tailored to military AI applications. A total of five major
design methodologies are identified and described, all based on the concept of value sensitive
design (VSD): (1) design for privacy/privacy by design (PbD); (2) design for human agency and
responsibility, including meaningful human control (MHC) and human oversight, explainable
AI (XAI), and contestability by design; (3) other approaches to design for human agency,
including cognitive and socio-cognitive engineering, human-machine teaming and team
design pattern engineering; (4) engaging society in the design of military systems, including
Scandinavian participatory design, critical design, speculative design, social design, and (new)
participatory design; and (5) design for emerging human-technology interactions, including
coactive design and adaptive/adaptable automation. These methodologies are the core
approaches and methods for addressing, by design, ELSA concerns regarding new
technologies. This provides an overview of the most relevant approaches and allows them to
be applied to the selected use cases.
The introduction of new technology in defence offers opportunities, yet also creates risks.
Introducing AI technology raises ethical, legal and societal issues. If AI is to be deployed and
used responsibly, these and other aspects must constantly be considered in the design,
implementation and maintenance of AI-based systems. Highlighting and understanding the
design methodologies developed in different sectors allows a holistic approach to be adopted,
one that can then be tailored to the specificities of defence and linked to the case studies.