Do Autonomous Weapon Systems Disrupt Military Ethics?

SCGW philosophy

14 May 2024 14h00-14h30

Dr. Andrew Rebera

A great deal of military technology aims at enhancing risk asymmetry, i.e. at reducing the risks faced by the militaries that deploy it and/or increasing the risks imposed upon their enemies. For example, unpiloted aerial vehicles (UAVs) and (semi-)autonomous weapon systems (AWS) appear to make possible 'risk-free' forms of fighting, in which soldiers can kill without being exposed to any risk of physical harm, let alone death. Many scholars have found something disturbing about these 'technologies of radical risk reduction'; some have challenged the legitimacy of certain uses of them. But the question of moral disruption persists even if, as will be the case here, attention is limited to uses of technology that do not cross the boundary of acceptable risk reduction. The question persists for two main reasons. First, the traditional 'martial virtues', those moral virtues that are specifically relevant to soldiers at war, are considered less relevant to soldiers who fight without risk. Second, soldiers who are liberated from the chaos of the battlefield (because they operate semi-autonomous weapons from thousands of kilometres away) are physically, emotionally, and cognitively far better placed to engage in complex moral deliberation. For these reasons, the moral disruption to which technologies of radical risk reduction give rise has been thought to take one or both of two forms: either it is necessary to radically revise our conception of the traditional martial virtues, or, even more disruptively, to radically rethink commonplace approaches to military ethical thinking (in order to reflect the dwindling relevance of Aristotelian virtue-based approaches). Scholars endorsing these revisionary views have argued that more rigidly rule-based, deontological approaches to military ethical thinking are required. In this talk I explain why they think this is so, and argue that it is not.

