Torin Monahan sent me this interesting video from the US Air Force showing its ideas for Micro-Aerial Vehicles (MAVs) – nature-mimicking drones or independent robots intended to ‘enhance the capability of the future war-fighter’…
The estimable Professor Noel Sharkey is calling today for a debate on the use of robotic weapon systems, like the UAVs that I have been covering sporadically. He’s right of course, but we need to go much further and much faster. With increasing numbers of countries turning to remote-controlled weapon systems, and the potential deployment of more independent or even ‘intelligent’ robots in war, what we need is an international convention limiting the development and use of automated weapon systems, with clear guidelines on what lines are not to be crossed. In the case of atomic, biological and chemical weapons these kinds of conventions have had mixed success, but they have arguably worked: we have seen very few clear examples of the use of such weapons in international conflicts.
According to The Times of India, the Indian military is investing massively in the boom military industry of the moment – Unmanned Aerial Vehicles (UAVs or drones).
An IAI Heron TP UAV in flight
The initial order is apparently for coastal protection and involves the purchase of Heron UAVs from Israel Aerospace Industries, a specialist in such technologies which produces everything from large payload drones to tiny micro-UAVs like the Mosquito, which can be launched by hand and is designed for “providing real-time imagery data in restricted urban areas.” The Indian Defence Research and Development Organisation (DRDO) and Aeronautical Development Establishment (ADE) have also been developing their own drones in conjunction with IAI, the latest being the Rustom MALE (Medium-Altitude Long-Endurance) drone.
A Predator UAV equipped with a Hellfire missile (USAF)
Herons are supposedly unarmed, but armed versions were used in the 2006 invasion of Lebanon by Israeli armed forces. The ToI article also makes it clear that Indian forces will be buying more overtly aggressive drones, such as the US Predator systems that have been used to such devastating effect against Al-Qaeda and the Taliban in the Pakistan-Afghanistan frontier regions. Far from easing up on the use of these remote-control killing machines, Obama’s administration has accelerated their use. They put fewer US troops in the firing line, and can attack remote areas from which it is also very difficult to obtain an accurate independent view of their activities. However, they are alleged to have been massively inaccurate: the Pakistan government claims that only 10 out of 60 missions between January 2006 and April 2009 hit their targets, killing 14 Al-Qaeda leaders and 687 civilians – an appalling ratio of roughly 50 civilians for every intended target.
With the advent of strategic bombing and then the ICBM, the Twentieth Century saw a massive increase in the role of remote surveillance in warfare, intimately linked to the growth in destructive power and to the ability to avoid understanding the consequences in any direct or emotional way. Even with the tank and artillery, ground warfare was not so remote, but now in the Twenty-first Century we are seeing surveillance-based, remote-control warfare becoming increasingly normalised. It is not surprising to see hypocritical states like the USA and Israel intimately involved in the promotion of this form of conflict, which looks cleaner and more ‘moral’ from the point of view of the user, but which in fact simply further isolates them from the consequences of their actions. Real-time surveillance turns everyday life into a simulation, and drone-based warfare makes war into something like a game. And it’s a deadly and amoral game that increasing numbers of states, like India, are now playing.
Several years ago, during my postdoctoral work on algorithmic surveillance, I was warning of the problems associated with the development of heuristic (learning) software applied to surveillance systems. I argued that their popularity would grow as the amount of data acquired through growing numbers of cameras outstripped what could possibly be watched by human operators. Today in CSO Security and Risk, a magazine for ‘security executives’, there is a piece by Eric Eaton which summarises the current state of the art, and which makes exactly that argument as a reason why operators of surveillance systems should consider using such learning software. As Eaton says,
“Various tools have emerged that not only ‘see video better,’ but also analyze the digitized output of video cameras in real time to learn and recognize normal behavior, and detect and alert on all abnormal patterns of activity without any human input.”
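To make concrete what such a system is doing, here is a minimal sketch of the ‘learn normal, flag abnormal’ approach – not Eaton’s or any vendor’s actual product, just an illustration which assumes behaviour has already been reduced to numeric features by some hypothetical video-tracking front end:

```python
import numpy as np

# Illustrative feature vectors: each row summarises one observed event
# (e.g. speed, direction change, dwell time) produced by a hypothetical
# video-tracking pipeline.
rng = np.random.default_rng(0)
normal_events = rng.normal(loc=[1.0, 0.0, 5.0],
                           scale=[0.2, 0.1, 1.0],
                           size=(500, 3))

# 'Learning normal behaviour' here is nothing more than fitting a mean
# and covariance to everything the system has seen so far.
mu = normal_events.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal_events, rowvar=False))

def mahalanobis(event):
    """Distance of an event from the learned model of 'normal'."""
    d = np.asarray(event, dtype=float) - mu
    return float(np.sqrt(d @ cov_inv @ d))

# The alert threshold, like the model itself, encodes a judgement about
# what counts as acceptable behaviour -- it is chosen, not discovered.
THRESHOLD = 3.5

def is_abnormal(event):
    return mahalanobis(event) > THRESHOLD

print(is_abnormal([1.0, 0.05, 5.2]))  # a typical event: False
print(is_abnormal([4.0, 1.5, 0.5]))   # far from 'normal': True
```

Commercial analytics use far more elaborate models than this toy Gaussian one, but the structure is the same: whatever the training data happened to contain becomes the operational definition of ‘normal’, and everything else becomes an alert.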
Judge, jury... and executioner (IPC magazines)
The key issue here, and one which I mentioned in my post about angry robots the other day, is the automated determination of normality. As the French theorist Michalis Lianos argued in Le Nouveau Contrôle Social back in 2001, the implementation of these kinds of systems threatens to replace social negotiation with a process of automated judgement, and sometimes even the automatic execution of the consequences (which in some border control systems can be fatal – Judge Dredd as computer).
Eaton identifies the first part of this conundrum when he says that “the key to successful surveillance is learning normal behaviors”, and he believes that this will enable systems to filter out such activity and “help predict, and prevent, future threats.” He does admit that in many cases the number of actual instances of systems detecting suspicious behaviour is very small, and that the cost of systems may therefore not be economically justified. But, as usual, amongst his ‘5 musts’ for security operators there is no place for ethics, human rights (or indeed humanity outside of systems operators), or consideration of the wider social impacts of the growth in use of learning security machines.
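Eaton’s concession about rarity is worth dwelling on, because it is a base-rate problem. A back-of-the-envelope calculation – with numbers invented purely for illustration, not taken from his piece – shows why a system watching for very rare events produces alerts that are almost all false:

```python
# All figures are hypothetical, chosen only to illustrate the base-rate effect.
events_per_day = 100_000       # observations the system classifies each day
prevalence = 1 / 100_000       # fraction that are genuinely suspicious
true_positive_rate = 0.95      # the system flags 95% of real incidents
false_positive_rate = 0.01     # and wrongly flags 1% of innocent ones

real_incidents = events_per_day * prevalence            # 1 per day
true_alerts = real_incidents * true_positive_rate       # 0.95
false_alerts = (events_per_day - real_incidents) * false_positive_rate  # ~1000

precision = true_alerts / (true_alerts + false_alerts)
print(f"alerts per day: {true_alerts + false_alerts:.0f}")  # ~1001
print(f"chance an alert is real: {precision:.2%}")          # ~0.09%
```

Every one of those thousand-odd daily false alarms is a person judged abnormal by a machine, which is exactly where Lianos’s worry about automated judgement bites.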
No doubt he would say, as most developers and operators do, that this is simply a matter of how systems are used in compliance with best practice and the law. But that response ignores the possibility of biases consciously or unconsciously programmed in from the start, and the more that systems become intelligent and able to make decisions independently of human operators, the thinner this legalistic answer becomes. I’m not saying that we are about to have automatic car park CCTV cameras with guns any time soon, but it’s about time we had some forward-looking policy on the use of heuristic systems before they become as normal as Eaton suggests…
Whilst I was doing my PhD in the late 90s, I met a guy called Steve Wright, who used to run the Omega Foundation (who were like the ‘Lone Gunmen’ organisation from the X-Files, but for real), and who is now at the Praxis Centre at Leeds Metropolitan University, UK. He was investigating the development of new forms of automated killing and control systems, and ever since then I’ve been keeping an eye on the development of remote-controlled and increasingly automated surveillance technologies, and in particular the development of robotic devices that are able not only to collect or transfer data, but to respond physically.
Two stories this week reflect the variety of developments in all kinds of different arenas, and raise all sorts of issues around the distancing of human responsibility from material action – action that is, in many cases, punitive or lethal.
Japanese security robot (AP)
The first was the news that Japanese technologists have produced a mobile remote-controlled robot that can fire a net over ‘intruders’. Until recent years such developments had been carried out largely in the area of military research by organisations like the RAND Corporation in the USA. However, particularly since the end of the Cold War, when military supply companies started to diversify and find new markets in a more uncertain time when the ‘military-industrial complex’ might no longer ensure their profits, there has been a gradual ‘securitization’ of civil life. One consequence of this has been that so-called ‘less-lethal’ weapons are increasingly regarded as normal for use by law enforcement, private security organisations and even individuals.
However, a further change comes when these operations are automated: an intermediary technology using sensors of some kind is placed between the person operating them and the person(s) or thing(s) being monitored. This removes the person from the consequences of their action and allows them to place the moral burden of action onto the machine. The operation is aided still more if the machine itself can be ‘humanized’, in the way that Japanese robots so often are. But a kawaii (cute) weaponized robot is still a weaponized robot.
A 'Predator' Drone (USAF)
In the skies above Afghanistan, Iraq and Gaza, however, ‘cuteness’ doesn’t matter. Remote-control military machines have been stealthily entering the front lines, colonising the vertical battlespace, with lethal consequences that have not yet been adequately considered. This week we saw US unmanned aircraft operated by the CIA kill a total of 21 people in Pakistan, one of the few aspects of Bush-era policy that new President Obama has not (yet) promised to change.
All of these machines are still under some kind of control from human operators, but several profoundly misguided scientists are trying to create systems that are more independent, even ‘intelligent’. This week, I read about Professor Mandyam Srinivasan of the University of Queensland in Australia who, at least according to a report in The Australian, thinks it is a great idea to give missiles brains like angry bees. A mindless drone robot is one thing, but an independent robot with a tiny mind capable only of death and destruction – that is something else entirely. I can think of few things less in the spirit of social progress than this, but he’s hardly the only one thinking this way: there are billions of dollars being pumped into this kind of research around the world…