Whilst I was doing my PhD in the late 90s, I met a guy called Steve Wright, who used to run the Omega Foundation (who were like ‘The Lone Gunmen’ organisation from the X-Files, but for real), and who is now at the Praxis Centre at Manchester Metropolitan University, UK. He was investigating the development of new forms of automated killing and control systems, and ever since then I’ve been keeping an eye on the development of remote-controlled and increasingly automated surveillance technologies, in particular the development of robotic devices that are able not only to collect or transfer data, but to respond physically.
Two stories this week reflect the variety of developments across very different arenas, and raise all sorts of issues around the distancing of human responsibility from material – in many cases punitive or lethal – action.
The first was the news that Japanese technologists have produced a mobile remote-controlled robot that can fire a net over ‘intruders’. Until recent years such developments had been carried out largely in the area of military research by organisations like the RAND Corporation in the USA. However, particularly since the end of the Cold War, when military supply companies started to look to diversify and find new markets in a more uncertain time when the ‘military-industrial complex’ might no longer ensure their profits, there has been a gradual ‘securitization’ of civil life. One consequence of this has been that so-called ‘less-lethal’ weapons are increasingly regarded as normal for use by law enforcement, private security organisations and even individuals.
However, a further change comes when these operations are automated: an intermediary technology using sensors of some kind is placed between the person operating them and the person(s) or thing(s) being monitored. This removes the person from the consequences of their action and allows them to place the moral burden of action onto the machine. The operation is aided still more if the machine itself can be ‘humanized’, in the way that Japanese robots so often are. But a kawaii (cute) weaponized robot is still a weaponized robot.
In the skies above Afghanistan, Iraq and Gaza, however, ‘cuteness’ doesn’t matter. Remote-control military machines have been stealthily entering the front lines, colonising the vertical battlespace, with lethal consequences that have not yet been given enough consideration. This week we saw US unmanned aircraft operated by the CIA kill a total of 21 people in Pakistan – one of the few aspects of Bush-era policy that new President Obama has not (yet) promised to change.
All of these machines are still under some kind of control from human operators, but several profoundly misguided scientists are trying to create systems that are more independent, even ‘intelligent’. This week, I read about Professor Mandyam Srinivasan of the University of Queensland in Australia who, at least according to a report in The Australian, thinks it is a great idea to give missiles brains like angry bees. A mindless drone robot is one thing, but an independent robot with a tiny mind capable only of death and destruction – that is something else entirely. I can think of few things less in the spirit of social progress than this, but he’s hardly the only one thinking this way: there are billions of dollars being pumped into this kind of research around the world…