I am currently finishing an article on autonomous killer drones – military robots that can seek out, identify, and kill enemy combatants without human supervision. They don’t exist yet, but technology is inching us closer to that day. Fifty-four countries are developing military robots, and autonomy is a hot feature.
My paper argues that autonomous killer robots are illegal under international humanitarian law (IHL) because various IHL provisions require the exercise of discretion in combat, a quality robots lack.
One argument against this position is that while robots don’t possess human-like discretion, they also don’t possess human-like foibles such as temper, volatility, fear, anxiety, or vengefulness. These emotions conspire to make soldiers lose their cool in the heat of battle. It is often argued that a major advantage robots have over humans is that they can fire second.
This ability to fire second was the source of my epiphany. It occurred to me that a Monitor and Merrimack moment is looming: a time when two enemy autonomous robots first meet in combat. But what if both robots are programmed to fire second? They may approach and circle each other, waiting in vain for the other to initiate the use of force. Peace may break out, unless some human intervenes to save the day.