
Background

ELSA and ‘HUMAN-CENTRIC AI FOR AN INCLUSIVE SOCIETY’

The ELSA concept stands for Ethical, Legal and Societal Aspects. The position paper ‘ELSA Labs for Human Centric Innovation in AI’ explains the background and value of the ELSA concept as a Quadruple Helix model. In this approach, the general public participates actively in the co-creation process of solving societal challenges, alongside businesses, government, and knowledge institutes. Although co-creation has a longer history, applying it to digital developments such as AI and data is relatively new. The aim is to ensure that all stakeholders jointly develop responsible applications of AI while addressing both human and public values.

Seven criteria and a recognition process were established for qualifying a lab as an ELSA Lab:

  1. The labs work on challenges formulated by society and aimed at achieving broader prosperity, in areas such as health care, jobs, responsible living, and safety and security.
  2. The insights acquired are collated and documented in a multi-stakeholder setup. They are validated using scientific methods where possible and made available to all the lab’s participants and the associated national AI ecosystem.
  3. Solutions are developed using one of the key enabling design methods and tested in successive improvement cycles in relevant practical situations. The approach is based on a broad definition of the design process that includes all stakeholders: end users, specialists, technicians, policymakers, and administrators.
  4. The projects focus on meaningful and human-centric solutions that are data-intensive and AI-based.
  5. The labs represent the Quadruple Helix dimensions: all four actors participate actively, jointly ensure that the activities are properly managed and coordinated, and actively involve and influence the broader social ecosystem in which they operate.
  6. The labs have an active policy of communicating their findings transparently, involving stakeholders, and engaging in a dialogue with society.
  7. ELSA Labs have a responsibility to scale up the solutions and transfer them to society.

To support and oversee the development of ELSA Labs, the Netherlands Organisation for Scientific Research (NWO) and the Netherlands AI Coalition launched an NWA call for ‘Human-centric AI for an inclusive society: Towards an ecosystem of trust’. After assessment by an independent NWO evaluation committee, five projects were approved at the end of January 2022, including this ELSA Lab. Furthermore, the ELSA Lab Defence has been awarded the NL AIC label, confirming that it operates in line with the strategic goals and quality standards of the NL AIC.

WHY A DEFENCE LAB?

AI technology is needed to deal with new challenges in both peacekeeping and warfare and to improve the efficiency, effectiveness, and security of the Dutch armed forces. We must be able to deal with misleading or false information, counter adversaries who themselves use artificial intelligence (AI), and process large amounts of data. AI therefore has a crucial role to play. The introduction of new technology in defence offers opportunities, yet also creates risks. Introducing AI technology raises ethical, legal, and societal issues. How can AI-driven systems remain under human control? How can control and dignity be maintained when machines gain autonomy? How can we ensure that we operate within all the applicable legal frameworks?

If AI is to be applied responsibly, these and other aspects must constantly be considered in the design, implementation, and maintenance of AI-based systems. To date, it is unclear which AI-based systems are acceptable from the ethical, legal, and societal points of view, and under what conditions and circumstances they would become acceptable. This can lead to excessive use of AI (for example, deploying too many systems in too many situations without keeping the possible consequences in mind) or to no use at all (for example, avoiding AI due to insufficient knowledge or fear of the consequences). Both excessive and insufficient use of AI applications can have unknown consequences in the defence domain, where the freedom and safety of society are at stake.

Additionally, it is necessary to study how society and defence personnel experience the use of military AI, how this experience develops over time, and how it changes across situations. ELSA Lab Defence will follow global technological, military, and social developments that influence perceptions of the use of AI systems, so that the lessons learned can be applied in the ELSA Lab. ELSA Lab Defence will become an independent advisory agency that makes recommendations on the ELSA aspects of military systems with AI-based technology. As these aspects are highly context-dependent and the technology is constantly developing, the lab will not provide standardised answers; instead, advisory authorities are needed that give tailored advice.

ELSA LAB DEFENCE

1. Monitors global technological, military, and societal developments that could influence attitudes towards the use of military AI-based applications.

2. Studies how society and defence personnel perceive the use of military AI, how this perception evolves over time, and how it changes in various contexts.

3. Develops a methodology for the context-dependent analysis, design, and evaluation of ethical, legal, and societal aspects of military AI-based applications. It builds upon existing methods for value-sensitive design, explainable algorithms, and human-machine teaming. These methods are adapted to the specific defence context through representative case studies, such as the use of (semi-)autonomous robots and AI-based methods against cognitive warfare.

The ELSA Lab Defence consortium is a public-private initiative addressing ethical, legal, and societal issues by developing a future-proof, independent, and consultative ecosystem for the responsible use of AI in the defence domain.