Design methodologies for addressing Ethical, Legal and Societal Aspects (ELSA) of military AI applications

Abstract

This report, which is the first Deliverable in WP2 (D2.1) of the ELSA Lab Defence project, explores
design methodologies to address ethical, legal and societal aspects (ELSA) of the use of AI in the
military domain. The methodologies mapped in this report serve as the starting point for
developing, within the project, a comprehensive design methodology tailored to identifying
ELSA issues in the use of AI technologies in the defence domain and to providing guidance for
designing military AI technologies that avoid or minimise such issues. To map relevant
methodologies, a three-step approach is used.

First, based on a literature study, the most important ELSA issues regarding AI are investigated,
without particular focus on the use of AI in the military domain. A total of six values affected by
the use of AI are identified and described (dignity, privacy, life and physical integrity, liberty,
democratic decision-making and political participation, and peace and international security).
These six values, as identified in the existing literature, are then placed in the military AI
context and linked to ELSA issues.

Second, existing ELSA design methods are identified and described. Most of these methods do
not focus on defence and cannot directly be applied to the defence context, meaning that they
may need to be adjusted and further tailored to military AI applications. A total of 11 design
methodologies are identified and described (value-sensitive design, guidance ethics, cognitive
engineering, socio-cognitive engineering, coactive design, explainable AI, meaningful human
control, team design pattern engineering, contestability-by-design, participatory design and
evaluation methods, and privacy by design and privacy by default). These 11 design
methodologies are core design approaches and methods for mapping ELSA concerning new
technologies. This provides an overview of the most relevant approaches and allows them to be
applied to selected use cases.

Third, to ground the research in real-world application, the methodologies are applied to case
studies. The use cases were selected because they generate a diverse range of ELSA-related
problems, are pertinent to Dutch defence interests and, while relevant, have so far remained
under-examined. The two primary use cases are (1) countering cognitive warfare using early
warning systems and (2) (non-lethal) autonomous robots. A third use case (a military
decision-support system) was added because it allowed additional ELSA issues to be
demonstrated and described.

The introduction of new technology in defence offers opportunities but also creates risks.
Introducing AI technology raises ethical, legal and societal issues. If AI is to be applied
responsibly, these and other aspects must be considered continuously in the design,
implementation, and maintenance of AI-based systems. Highlighting and understanding design
methodologies from different sectors allows a holistic approach to be adopted, which can then
be tailored to the specificities of defence and linked to the case studies.