UvA / Asser / ACIL / LACMO members Berenice Boutin, Terry Gill and Tom van Engers have been awarded an NWO grant for their four-year research project ‘Agency and Compliance by Design in Military AI Technologies.’ Focussing on a human-centred approach to military AI technologies, the project seeks to explore ethical and lawful uses of AI in the military.
The deployment of artificial intelligence (AI) technologies in the military context has the potential to greatly improve military capabilities and to offer significant strategic and tactical advantages. At the same time, the use of increasingly autonomous technologies and adaptive systems in the military context poses profound ethical, legal, and policy challenges.
The project seeks to explore the conditions and modalities that would make it possible to leverage the potential benefits of AI technologies and human-machine partnerships in the military while abiding by the rule of law and aligning with public values.
The ethical and legal implications of the potential use of AI technologies in the design of weapons systems have been on the agenda of the United Nations, governments, and non-governmental organisations for several years. In critical warfare functions, reliance on autonomous intelligent systems is highly controversial and should be carefully assessed against ethical values and legal norms. Fully autonomous weapon systems constitute a hard red line, considered by many to be irreconcilable with international humanitarian law and with public values of human dignity and accountability. Yet the question of where and how to draw this red line is not settled.
Moreover, the potential applications of AI in the military context are considerably broader than the issue of autonomous weapons alone. The capacity of AI technologies to collect and process vast amounts of information at a scale and speed beyond human cognitive abilities will likely affect many aspects of decision making across the broad spectrum of activities involved in the planning and conduct of military operations, from reconnaissance and intelligence gathering to identifying and prioritising potential targets in an operational setting.
Responsible innovation entails that decisions to develop and use AI technologies are guided by public values, in particular the rule of law, and are aimed at benefiting society. This project will examine the conditions under which, and the limits within which, autonomous technologies can responsibly be developed and deployed in the military context. It aims to proactively shape the development of technology and policy in the field.
The multidisciplinary research team will combine ethical, legal, and technical perspectives, with the goal of operationalising principles into practice. To test and refine proposed solutions, the project's methodology will include policy simulations, ensuring a constant feedback loop.
Throughout the project, research findings will provide solid input for the policy and regulation of military technologies involving AI. In particular, the research team will translate its results into policy recommendations for national and international institutions, as well as into technical standards and protocols for compliance testing and regulation.