Autonomous Weapons in Humanitarian Law: Understanding the Technology, Its Compliance with the Principle of Proportionality and the Role of Utilitarianism
Autonomous machines are moving rapidly from science fiction to science fact. The defining feature of this technology is that it can operate independently of human control. Consequently, society must consider how ‘decisions’ are to be made by autonomous machines. The matter is particularly acute in circumstances where harm is inevitable no matter what course of action is taken. This dilemma has been identified in the context of autonomous vehicles driving under the regulation of domestic law, and there governments appear to be moving towards a utilitarian solution to inevitable harm. This leads one to question whether utilitarianism should be transposed into the context of autonomous weapons, which might soon operate on the battlefield under the gaze of humanitarian law. The argument here is that it should, because humanitarian law includes the core principle of ‘proportionality’, which is fundamentally a utilitarian concept – requiring that any gain derived from an attack outweighs the harm caused. However, while human soldiers are always able to come to a view on proportionality, albeit a subjective one, there is much doubt over how an autonomous weapon might determine what is proportionate. There is a very large gap between our embryonic understanding of utilitarianism in relation to autonomous vehicles manoeuvring around a city, on the one hand, and what would be required for armed robots patrolling a battlespace, on the other. Bridging this gap is fraught with difficulty, but perhaps the best starting point is to take Bentham’s expression of utilitarian mechanics and build upon it. With conscious effort and, ideally, collaboration, states could use the process of applying his classic theory to this very modern problem to raise the standard of protection offered to those caught up in conflict.