Killer robots already exist, and they’ve been here a very long time…


Humans will always make the final decision on whether armed robots can shoot, according to a statement by the US Department of Defense. The clarification comes amid fears about a new advanced targeting system, known as ATLAS, that will use artificial intelligence in combat vehicles to identify and engage threats. While the public may feel uneasy about so-called “killer robots”, the concept is nothing new – machine-gun-wielding “SWORDS” robots were deployed in Iraq as early as 2007.

But our relationship with military robots goes back even further than that. This is because when we say ‘robot’, what we really mean is a technology with some form of ‘autonomous’ element that allows it to perform a task without direct human intervention.

Multipurpose unmanned tactical transport (MUTT), used by the US Marine Corps

The thing is, these technologies have existed for a very long time. Way back during the Second World War, the proximity fuze was developed to explode artillery shells at a pre-determined distance from their target. This made the shells far more effective than they would otherwise have been, augmenting human decision-making and, in some cases, taking the human out of the loop completely.
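The fuze itself sensed radio reflections rather than running software, but the decision logic it embodied can be sketched in a few lines of Python. This is a purely illustrative toy model – the trigger radius, function names and flight path are invented for the example, not historical:

```python
import math

# A toy model of a proximity fuze: the shell carries its own trigger
# logic and 'decides' to detonate with no human in the loop.
# All names and values here are illustrative, not historical.

TRIGGER_RADIUS = 20.0  # hypothetical detonation distance

def should_detonate(shell_pos, target_pos):
    """Detonate once the sensed distance falls inside the trigger radius."""
    return math.dist(shell_pos, target_pos) <= TRIGGER_RADIUS

# Simulated flight path closing on a target at the origin:
for step, pos in enumerate([(0, 500), (0, 120), (0, 15)]):
    if should_detonate(pos, (0, 0)):
        print(f"Detonate at step {step}")  # -> Detonate at step 2
        break
```

The point of the sketch is how little there is to it: a single threshold comparison is enough to move the kill decision from the gunner to the shell.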

The question, then, is not so much whether we should use autonomous weapon systems in battle, for we already use them, and they take many forms. Rather, we should focus on how we use them, why we use them, and what form (if any) human intervention should take.

The birth of cybernetics

The theory of human-machine interaction has its origins in the Second World War. During the war, Norbert Wiener laid the groundwork for the theory of cybernetics in his work on the control of anti-aircraft fire. By studying the deviations between an aircraft’s predicted motion and its actual motion, Wiener and his colleague Julian Bigelow came up with the concept of the ‘feedback loop’, whereby deviations could be fed back into the system in order to correct further predictions.
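Wiener’s actual anti-aircraft predictor was a statistical filter well beyond a blog sketch, but the feedback loop idea itself is simple enough to show in code. Below is a minimal, hypothetical tracker in Python – the gain value and the data are invented for illustration:

```python
def track(observations, gain=0.5):
    """A minimal feedback loop: each prediction error is fed back
    to correct the velocity estimate used for the next prediction."""
    position, velocity = observations[0], 0.0
    predictions = []
    for observed in observations[1:]:
        predicted = position + velocity   # predict the next position
        error = observed - predicted      # deviation: actual minus predicted
        velocity += gain * error          # feed the deviation back into the system
        position = observed               # start the next step from what was observed
        predictions.append(predicted)
    return predictions

# An aircraft moving at a steady 3 units per time step:
print(track([0, 3, 6, 9, 12]))  # -> [0.0, 4.5, 8.25, 11.625]
```

Run on a target moving at a steady three units per step, the predictions close in on the true path: each error fed back makes the next prediction better, with no human adjusting the aim.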

Thus, Wiener’s theory went far beyond mere augmentation, for cybernetic technology could be used to pre-empt human decisions – removing the fallible human from the loop in order to make better, quicker decisions and render weapons systems far more effective.

Norbert Wiener, father of cybernetics

The computer and the drone

In the years since the Second World War, the computer has emerged to sit alongside cybernetic theory as a central pillar of military thinking. From the ‘smart bombs’ of the Vietnam era to cruise missiles and Reaper drones, computers both create and sustain the military-industrial complex that continues to rumble on to this day.

Indeed, it is no longer enough merely to augment the human war fighter as it was in the early days. Rather, the next phase is to remove the human completely, ‘maximising’ military outcomes while minimising the political cost associated with the loss of allied lives. This has led to the widespread use of military drones by the US and its allies. While these missions are highly controversial, in political terms they have proven far preferable to the public outcry caused by military deaths.

RAF Reaper MQ-9 drone, one of many ‘killer robots’ in use today

The human machine

One of the most contentious issues relating to drone warfare is the role of the drone pilot or ‘operator’. Like all personnel, these operators are bound by necessity and the need to ‘do a good job’. However, the terms of success are far from clear. As Laurie Calhoun observes, ‘The business of UCAV [drone] operators is to kill’.[1] In this way, their task is not so much to make a (human) decision, but rather to do the job they are employed to do. If the computer tells them to kill, is there really any reason why they shouldn’t?

A similar argument can be made with respect to the modern-day soldier. From GPS navigation to video uplinks, soldiers carry numerous devices that tie them into a vast network that monitors and controls them at every turn. With the increasingly widespread use of kill-cams and live data feeds, soldiers even run the risk of retrospective sanction.

Operators from Norwegian Naval Special Operations Command (NORNAVSOC) on exercise, 2014

This leads to an ethical conundrum. If the purpose of the soldier is to follow orders to the letter (with cameras used to ensure compliance), then why do we bother with human soldiers at all? After all, machines are far more efficient than human beings and don’t suffer from fatigue and stress in the same way as humans do. If soldiers are expected to behave in a programmatic, robotic fashion anyway, then what’s the point in shedding unnecessary allied blood?

The answer here is that the human serves as an alibi or form of ‘ethical cover’ for what is, in reality, an almost wholly mechanical, robotic act. Just as the drone operator’s job is to oversee the computer-controlled drone, so the human’s role in the DoD’s new targeting system is merely to act as ethical cover in case things go wrong.

Are they humans, or are they ‘killer robots’? Picture: Ric Feld/AP; The News Tribune, Peter Haley/AP

The robotic future

While Predator and Reaper drones may stand at the forefront of the public imagination about military autonomy and ‘killer robots’, these innovations are in themselves nothing new. They are merely the latest in a long line of developments that go back many decades.

While it may comfort some readers to imagine that autonomy and machinic automation can’t, or indeed shouldn’t, be allowed to occur, this argument really does miss the point.[2] Autonomous systems have long been embedded in the military, and indeed in our wider society, and we should prepare ourselves for the consequences.

It’s not so much a question of whether or not we use machines to help us do our jobs better, for we already use them, even if that ‘job’ is to bomb enemy targets. Rather, we should consider why we do the things we do, and why we do them in the way that we do. In many respects, we’re all robots, and have always been robots of a kind, tied up in a complex relationship with other people and the material world around us. Now may well be the perfect time to take a step back and ask ourselves why this should be the case, and whether we can really think of any other way.


[1] Laurie Calhoun, We Kill Because We Can: From Soldiering to Assassination in the Drone Age (London: Zed Books, 2015), p. 257.

[2] The Campaign to Stop Killer Robots was set up in 2012 ‘to ban fully autonomous weapons and thereby retain meaningful human control over the use of force’. However, there remains a major question over just what constitutes ‘meaningful human control’ and how (if at all) this human control is any more ‘ethical’ than a robotic decision. It is also interesting to note that of the 28 countries that have reportedly signed up to the campaign, none is in a position to actively deploy the ‘killer robots’ against which the group is campaigning.
