Wired’s Danger Room blog has published pictures of what may be the hitherto secret CIA drone base in Saudi Arabia, revealed as part of the confirmation hearings for John Brennan as the proposed Director of the CIA…
Obama’s drone wars in question
So much has been happening with the US drone warfare program over the last few weeks that it’s hard to keep up.
First, the United Nations Special Rapporteur for Human Rights and Counter-Terrorism instigated an inquiry into the targeted killing programs operated by the USA, largely using drones, focusing on the issue of civilian casualties. The rapporteur, Ben Emmerson, made it clear that the inquiry would pull no punches and might result in war crimes charges against the US, should evidence of such crimes be discovered.
Second, NBC television in the USA revealed a leaked Justice Department document laying out the legal justification for the targeted assassination of US citizens using drones. The full memo is also available from this link and assembles a tortuous argument about how US citizens can be killed from above by their own government if an “informed, high-level” official decides that the person has “recently” been involved in undefined “activities” threatening a violent attack against the US and “there is no evidence suggesting that he has renounced or abandoned such activities.”
And now, the Washington Post is reporting that the nomination of President Obama’s counter-terrorism guru, John Brennan, to head the CIA, has led to all sorts of revelations and difficult questions for Brennan to answer about the CIA’s targeted assassination program, including the acknowledgement of a secret drone base, at a still undisclosed location, in Saudi Arabia.
A while ago it looked as though Obama’s drone strategy was unassailable, despite increasing public knowledge via the Bureau of Investigative Journalism and criticism from groups like the International Committee for Robot Arms Control. Now the issue is going mainstream, and it’s not looking so good for what former CIA Director Leon Panetta called the ‘only game in town’.
North American military drone policy update
The USA has established organized Pacific and Atlantic surveillance UAV squadrons (of 12-24 aircraft each) for the first time, for border and sea lane monitoring. These are a variant of the Northrop-Grumman MQ-4 drones I mentioned the other day, which Japan is also buying. The order establishing the program can be found via Cryptome here. Cryptome has also published the locations of the bases from which they will fly: Ventura County naval base in California and Mayport naval base near Jacksonville in Florida.
It increasingly looks as though UAVs will continue to form the core of Obama’s military strategy, and it seems no coincidence that he has nominated John Brennan, described as the ‘architect’ of his drone policy, to be the new head of the Central Intelligence Agency.
Meanwhile, Canada is more likely to see widespread use of drones by police and the private sector before it gets any military models. It was reported just at the end of last year that the Canadian military drone program is now not likely to be in operation until 2017, and that the cost has risen to over $1Bn (Can). This doesn’t seem to have attracted anything like the attention given to the ongoing farrago surrounding the Canadian government’s attempt to purchase US Lockheed F-35 fighter jets, although admittedly that is now estimated to be something in the region of 50 times as expensive…
(Thanks to Chris Prince for keeping me updated on this!)
Where Will the Big Red Balloons Be Next?
The US Defense Advanced Research Projects Agency (DARPA) has launched a $40,000 competition, ostensibly to examine the way communication works in Web 2.0. The competition will see whether distributed teams working together online can uncover the locations of large red weather balloons moored across the USA.
The ‘DARPA Network Challenge’ “will explore the roles the Internet and social networking play in the timely communication, wide-area team-building, and urgent mobilization required to solve broad-scope, time-critical problems”.
All the headlines for this story have verged on the amused (even The Guardian); words like ‘whimsical’ and ‘wacky’ have been common. But it seems to me that this project has several underlying aims beyond those outlined in these superficial write-ups, not least of which are: first, how easily people in a culture of immediate gratification can be mobilised to state aims, and in particular to do mundane intelligence and surveillance tasks (following the failure of simple old-style rewards in the tracking down of Osama Bin Laden and other such problems); and second, the prospects for manipulating ‘open-source intelligence’ in a more convenient manner, i.e. distributing military work and leveraging (a word the military loves) a new set of assets: the online public, which is paradoxically characterised by an often extreme scepticism and paranoia but, at the same time, a general superficiality and biddability.
DARPA, of course, was one of the originators of the Internet in the first place (as it continues to remind us), but the increasingly ‘open’ nature of emergent online cultures means that the US military now has a chronic anxiety about the security threats posed not so much by overt enemies as by the general loss of control. In fact, there has been talk for a while of an ‘open-source insurgency’, a strategic notion that in one discursive twist elides terrorism and the open-source / open-access movement, and the CIA has recently bought into firms that specialize in Web 2.0 monitoring.
It seems rather reminiscent both of the post-WW2 remobilisation of US citizens in things like the 1950s ‘Skywatch’ programs (which Matt Farish from the University of Toronto has been studying) and, more specifically, of some of the brilliant novels of manipulation that emerged from that same climate, in particular Philip K. Dick’s Time Out of Joint, in which the unwitting dupe Ragle Gumm plots missile strikes for an oppressive government whilst thinking he’s winning a newspaper competition, ‘Where Will the Little Green Man Be Next?’
So, who’s going to be playing ‘Where Will the Big Red Balloons Be Next?’ then… ?

CIA buys into Web 2.0 monitoring firm
Wired online has a report that the US Central Intelligence Agency has bought a significant stake in a market research firm called Visible Technologies that specializes in monitoring new social media such as blogs, micro-blogs, forums, customer feedback sites and social networking sites (although not closed sites like Facebook, or at least that’s what they claim). This is interesting, but it isn’t surprising: much of what intelligence agencies do has always been sifting through the mass of openly available information out there, what is now called open-source intelligence. The fact is that people are putting more of themselves out there than ever before, and material that you would never have expected to be of interest to either commercial or state organisations is now there to be mined for useful data.
(Thanks, once again, to Aaron Martin for this.)
The robots are coming and now they’re angry…
Whilst I was doing my PhD in the late 90s, I met a guy called Steve Wright, who used to run the Omega Foundation (who were like the ‘Lone Gunmen’ organisation from the X-Files, but for real), and who is now at the Praxis Centre at Manchester Metropolitan University, UK. He was investigating the development of new forms of automated killing and control systems, and ever since then I’ve been keeping an eye on the development of remote-controlled and increasingly automated surveillance technologies, in particular robotic devices that are able not only to collect or transfer data but also to respond physically.
Two stories this week reflect the variety of developments across all kinds of different arenas, and raise all sorts of issues around the distancing of human responsibility from material action that is, in many cases, punitive or lethal.

The first was the news that Japanese technologists have produced a mobile remote-controlled robot that can fire a net over ‘intruders’. Until recent years such developments had been carried out largely in the area of military research by organisations like the RAND Corporation in the USA. However, particularly since the end of the Cold War, when military supply companies started to diversify and find new markets in a more uncertain time when the ‘military-industrial complex’ might no longer ensure their profits, there has been a gradual ‘securitization’ of civil life. One consequence of this is that so-called ‘less-lethal’ weapons are increasingly regarded as normal for use by law enforcement, private security organisations and even individuals.
However, a further change comes when these operations are automated: when an intermediary technology using sensors of some kind is placed between the person operating it and the person(s) or thing(s) being monitored. This removes the operator from the consequences of their actions and allows them to place the moral burden of action onto the machine. The effect is aided still more if the machine itself can be ‘humanized’, in the way that Japanese robots so often are. But a kawaii (cute) weaponized robot is still a weaponized robot.

In the skies above Afghanistan, Iraq and Gaza, however, ‘cuteness’ doesn’t matter. Remote-controlled military machines have been stealthily entering the front lines, colonising the vertical battlespace, with lethal consequences that have not yet been adequately considered. This week we saw US unmanned aircraft operated by the CIA kill a total of 21 people in Pakistan, one of the few aspects of Bush-era policy that new President Obama has not (yet) promised to change.
All of these machines are still under some kind of control by human operators, but several profoundly misguided scientists are trying to create systems that are more independent, even ‘intelligent’. This week I read about Professor Mandyam Srinivasan of the University of Queensland in Australia who, at least according to a report in The Australian, thinks it is a great idea to give missiles brains like angry bees. A mindless drone robot is one thing, but an independent robot with a tiny mind capable only of death and destruction is something else entirely. I can think of few things less in the spirit of social progress than this, but he’s hardly the only one thinking this way: billions of dollars are being pumped into this kind of research around the world…