The Ethics of AI in Military Applications

Artificial Intelligence can bring great advantages to the military, but it also poses unique risks in this domain. AI can play a role throughout the combat decision cycle, from combat simulations to target identification and Lethal Autonomous Weapon Systems (LAWS). Its use raises questions of accountability and responsibility, human control, legitimacy, human dignity and human autonomy. Yet clear guidelines on the responsible development and use of AI systems are lacking. This may lead both to “over-use” (e.g., deploying AI systems in too many situations without due consideration of the consequences) and to “under-use” (e.g., not using AI at all because of a lack of knowledge or fear of the consequences). It may also lead to types of use that we come to regret at a later stage of development and will have great trouble reversing.

Special issue in Ethics and Information Technology

Together with external colleagues, researchers from the TU Delft Digital Ethics Centre will be putting together a special issue of the journal Ethics and Information Technology on Responsible AI in Military Applications. This topical collection aims to advance the debate on the responsible use of AI in the military domain, broadening the scope beyond LAWS to include all invasive uses of AI by the military. Contributions will be published in Ethics and Information Technology and will additionally serve as important input for an International Summit on Responsible Artificial Intelligence in the Military Domain, devoted to the responsible military development, deployment and use of AI, taking place in The Hague on 15 and 16 February 2023.

The editors welcome contributions, especially those that bring academic discussions closer to the political debates and policy agendas of states and that identify innovations allowing parties to reconcile disagreements. Particularly welcome are contributions that specify value-based design requirements not only for AI-enabled systems themselves but also for their social aspects (e.g., the interaction of operators with the technology) and their institutional aspects.

In addition, we welcome contributions from different disciplines, including but not limited to philosophy, law, social science, computer science, and engineering. However, given the nature and scope of the journal, we expect each contribution to clearly identify, address, or put in context at least one important normative issue (ethical, legal, or societal) in the development or use of AI in the military domain.

For practical details, see TUDelft.nl