4 November 2025 14h00-14h30
Ann-Katrien Oimann, dr. (SCGW)
Artificial intelligence (AI) is increasingly used across a wide range of domains, including the military. As we move toward more advanced, second-generation AI systems, their growing autonomy in decision-making processes raises pressing ethical and legal questions. One of the most debated concerns is the emergence of lethal autonomous weapons systems (LAWS) and the challenge of assigning moral responsibility for their actions. Some philosophers argue that the high degree of autonomy in such systems creates a so-called “responsibility gap,” in which no human agent can be held fully accountable for AI-caused harm. In recent years, the literature on responsibility gaps has expanded rapidly, reflecting the urgency and complexity of this issue. Responsibility is central to most discussions in AI ethics, but how it should be understood when applied to autonomous military systems remains contested. In this presentation, I will map the main positions in the debate on responsibility gaps in relation to LAWS, highlighting points of convergence, disagreement, and possible ways to make the discussion more coherent. The aim is to clarify what is really at stake when we speak of a “gap” in responsibility and what this means for the future governance of autonomous military technologies.
The Teams link will be communicated soon.