New UAVs in Afghanistan

The USAF continues to use the Afghanistan / Pakistan conflict as a test bed for new military surveillance technologies and robotic weapons. The latest thing is apparently the RQ-170, codenamed Sentinel, which is a radar-evading UAV or drone aircraft.

This picture of the aircraft was apparently shot near Kandahar…

The Sentinel (source unknown)

It seems that as this conflict drags on, more and more of these things will get wheeled out. Indeed, the conflict's only purpose seems to have become the field-testing of all the black-project technologies that the US security-industrial complex has been churning out. It wasn’t long after the Predator drone emerged that we saw a weaponized version. It is unclear whether any such version of the Sentinel exists yet, but no doubt there will be one soon enough. The increasing reliance on remote-controlled and robotic weapons seems to be a new article of faith amongst the world’s wealthier militaries.

Controlling Robotic Weapons

I’m delighted to be informed by Professor Noel Sharkey that I have been invited to become the first member of the Advisory Board of the International Committee for Robot Arms Control (ICRAC). ICRAC aims to help prevent the unfettered spread of automated weapons systems and to produce an international convention, or some other kind of binding agreement, to control their use. I’ve been tracking the development of robotic surveillance (and killing) systems for quite a while now and I think this campaign is absolutely essential. This recent piece in The Times of London covers some of the issues quite well. There is a lot of work to do to persuade governments to control what many militaries think will be ‘essential’ to warfare in the coming century, but the landmines campaign is a good example of what can be achieved – this time, ideally, before robotic weapons become too common.

More military robots…

A story in the Daily Mail shows two new military robot surveillance devices developed for the UK Ministry of Defence’s Defence Equipment and Support (DE&S) organisation. The first is a throwable rolling robot equipped with multiple sensors, which can be chucked like a hand-grenade and then operated by remote control. The second is another Micro-(Unmanned) Aerial Vehicle (Micro-UAV or MAV), a tiny helicopter carrying a surveillance camera. Rolling surveillance robots have been around for a while now (like the Rotundus GroundBot from Sweden), but this toughened version seems to be unique. The helicopter MAV doesn’t seem to be particularly new; indeed it looks, at least from the pictures, pretty similar to the one controversially bought by Staffordshire police in Britain – which is made by MicroDrones of Germany.

The proliferation of such devices in both military and civil use is at present pretty much unchecked and unnoticed by legislators. Media coverage seems to be limited to ‘hey, cool!’, and yes, they are pretty cool as pieces of technology, but use in humanitarian contexts (for example, rolling robots getting pictures of a partially-collapsed building, or MAVs flying over a disaster zone) is a whole lot different from warfare, which is a whole lot different again from civilian law enforcement, commercial espionage or simple voyeurism. As surveillance devices get increasingly small, mobile and independent, we face a whole new set of problems for privacy, and despite the fact that we warned regulators about these problems back in 2006 in our Report on the Surveillance Society, little government thought seems to have been devoted to these and other new technologies of surveillance.

The use of robots in war is of course something else I have become very interested in, especially as these flying and rolling sensor-platforms are increasingly independent in their operation and, like the US Predator drones employed in Afghanistan and Pakistan or the MAARS battlefield robot made by Qinetiq / Foster-Miller, become weapons platforms too. This is an urgent but still largely unnoticed international human rights and arms control issue, and one in which the new International Committee for Robot Arms Control (in which I am now getting involved) will hopefully play a leading role.

Manchester Airport trials virtual strip-search system

Rapiscan image (BBC)

You would think that after 4 years of trials at Heathrow, British airports would by now be able to work out whether or not they could, and more importantly should, use the various body scanners now available. However, Manchester Airport is holding another trial, starting now, at its Terminal 2. At least it will give the public a chance to say what they think. The scans are remote, i.e. the officer observing the images is not on the airport floor, which prevents the kind of scenario we mentioned in our Report on the Surveillance Society of lewd remarks being directed at passengers. Personally, I am rather less concerned about this abstract view of my body being seen briefly as I pass through an airport than I am about my financial details and personal life being traded between private companies, or about being under constant video surveillance in ordinary public space in the city. However, the images, although ghostly, are detailed enough that genitals, deformities, medical implants and so on can be seen, and if this story is to be believed, there is no provision for women’s images to be seen only by a woman and men’s only by a man. This will make the scans entirely unacceptable to some people, in particular members of certain religious groups. But they are, at least for now, voluntary, in that passengers can refuse and have a traditional pat-down search instead.

However, this technology won’t be staying in the airports for long. I reported back in July on stories that terahertz wave scanning could soon be made to fit into portable cameras. That raises a whole different set of social, political and ethical questions…

(Thanks to Simon Reilly for sending me the link)

Automation and Imagination

Peter Tu writes over on Collective Imagination that greater automation might prevent racism and provide for greater privacy in visual surveillance, offering what he calls ‘race-blind glasses’. The argument is not a new one at all; indeed, the point about racism is almost the same proposition advanced by Professor Gary Marx in the mid-90s about the prospects for face-recognition. Unfortunately, Dr Tu does several of the usual things: first, he argues that ‘the genie is out of the bottle’ on visual surveillance, as if technological development of any kind were an unstoppable linear force that cannot be controlled by human politics; and second, he seems to treat technologies as somehow separate from the social context in which they are created and used – when of course technologies are profoundly social. Although he is more cautious than some, this still leads to the rather over-optimistic conclusion, the same one that has been advanced for over a century now, that technology will solve – or at least make up for – social problems. I’d like to think so. Unfortunately, empirical evidence suggests that the reality will not be so simple. The example Dr Tu gives on the site is a simple binary system: a monitor shows humans as white pixels on a black background, with a line representing the edge of a station platform. It doesn’t matter who the people are, or their race or intent – if they transgress the line, the alarm sounds and the situation can be dealt with. This is what Michalis Lianos refers to as an Automated Socio-Technical Environment (ASTE). Of course, such simple systems are profoundly stupid in a way that the term ‘artificial intelligence’ disguises, and the binary can hinder as much as it can help in many situations. More complex recognition systems are needed if one wants to tell one person from another or identify ‘intent’, and it is here that all those human social problems return with a vengeance.
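The platform-edge system really is that simple, which a few lines of illustrative Python make plain (the function and the toy frame here are hypothetical, not taken from any real surveillance product): the rule knows nothing about who a person is, only where a white pixel sits relative to a line.

```python
# Minimal sketch of the binary 'line-crossing' system described above -
# an Automated Socio-Technical Environment in Lianos's sense.
# A frame is a grid of 0s (background) and 1s (people as white pixels);
# the platform edge is a single row index. Names are illustrative only.

def platform_edge_alarm(frame, edge_row):
    """Return True if any 'person' pixel lies on or beyond the platform edge.

    frame: list of rows, each a list of 0/1 pixel values.
    edge_row: index of the first row past the line (towards the track).
    """
    for y, row in enumerate(frame):
        if y >= edge_row and any(row):
            return True
    return False

# A 5x5 frame: one person safely behind the line, one past it.
frame = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],   # person behind the line
    [0, 0, 0, 0, 0],
    [0, 0, 0, 1, 0],   # person past the line -> alarm
    [0, 0, 0, 0, 0],
]
print(platform_edge_alarm(frame, edge_row=3))       # True
print(platform_edge_alarm(frame[:3], edge_row=3))   # False: nobody past the line
```

The 'race-blindness' and the stupidity are two sides of the same coin: everything about the person except their position has been discarded before the rule is applied.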
Research on face-recognition systems, for example, has shown that prejudices can get embedded within programs as much as priorities; in other words, the politics of identification and recognition (and all the messiness that this entails) shifts into the code, where it is almost impossible for non-programmers (and often even programmers themselves) to see. And what better justification for the expression of racism can there be than that a suspect has been unarguably ‘recognised’ by a machine? ‘Nothing to do with me, son, the computer says you’re guilty…’ Meanwhile, the idea that ‘intent’ can in any way be determined by superficial visualisation is supported by very little convincing evidence, and yet techno-optimist (and apparently socio-pessimist) researchers push ahead with promoting the idea that computer-aided analysis of ‘microexpressions’ will help tell a terrorist from a tourist. And don’t get me started on MRI…

I hope our genuine human ‘collective imagination’ can do rather better than this.

MAVs

Torin Monahan sent me this interesting video from the US Air Force showing ideas on Micro-Aerial Vehicles (MAVs) – nature-mimicking drones or independent robots that are intended to ‘enhance the capability of the future war-fighter’…

I’ve called for a convention on the use of robotic weapons, and Professor Noel Sharkey and a couple of colleagues have now set up the International Committee for Robot Arms Control (ICRAC). This video just illustrates why such weapons need to be controlled as soon as possible, before these kinds of things become widespread.

India plans ‘world class’ electronic surveillance for Commonwealth Games

The Times of India reports on the Indian government’s plans to implement comprehensive surveillance for the 2010 Commonwealth Games. One aim seems to be to create the kind of ‘island security’ with which we have become so familiar at these kinds of mega-events: vehicle checkpoints with automatic licence-plate recording and recognition; x-ray machines and other scanners for vehicles (and perhaps people too). They will also massively expand CCTV systems, not just in the actual Games area but throughout the city of Delhi. There are also, as usual, plans to use more experimental surveillance and control techniques (as with the use of sub-lethal sonic weapons in Pittsburgh the other day), in this case a drone surveillance airship “capable of taking and transmitting high-density visual images of the entire city.”

However, this is not just about the temporary security of the Games. As with many other such mega-events, the Indian government appears to be planning to use the Delhi Games as a kind of Trojan Horse for the rolling out of similar, more permanent measures in big cities across the country. The Times article claims that the Ministry of Home Affairs intends to expand the measures, noting that “soon the same model is planned to be replicated across the country,” and in particular, on the use of airships, that “similar airships would be launched in other big and vulnerable cities like Mumbai, Bangalore, Kolkata and Chennai.” And there will be an infrastructure too: apparently “the IB [Intelligence Bureau] is silently working to create a command center to monitor all-India intelligence and surveillance.”

Of course the threat of ‘terror groups’ is the justification, and there’s no doubt there is a threat to Indian cities from such groups, particularly those based in Pakistan. However, the Indian public shouldn’t assume that anything done in the name of ‘anti-terrorism’ will: 1. actually work (in the sense of preventing terrorism); or 2. even be used for those purposes. The same thing happened in the UK during the early 1990s, when the threat of the Provisional IRA was the justification, and before most people in Britain had even noticed, a massive (and, it seems, ever-expanding) patchwork of CCTV camera systems had been created, joined by further repressive measures even before 9/11. And did this massive number of cameras stop London being attacked by terrorists? No, it didn’t: 7/7 still happened. But of course we had lots of good pictures after the event for the media… and the cameras are very expensive and don’t even do much to stop ordinary crime, as a recent meta-study has shown. What would be more effective is peace and co-operation with Pakistan, a move away from both chauvinistic Hindu and Muslim nationalisms and extremisms, which only generate resentment and hatred, and old-fashioned targeted intelligence work on the very few people who are actually planning terrorism – not mass surveillance and the gradual erosion of the civil liberties of the entire population, based on state fears that some of them might be guilty.

Finally, this is about globalization. The whole way this is promoted by the Indian government is as if there is some international competition to install as much CCTV and security as possible. But the global spread of the surveillance standards and expectations of the rich western elite is a self-fulfilling logic that benefits only the massive global security-industrial complex.

Another day, another ‘intelligent’ surveillance system…

Yet another so-called ‘intelligent’ surveillance system has been announced. This one comes from Spain and is designed to detect abnormal behaviour on and around pedestrian crossings.

Comparison between the reasoning models of the artificial system and a theoretical human monitor in a traffic-based setting. (Credit: ORETO research group / SINC)

The article in Science Daily dryly notes that it could be used “to penalise incorrect behaviour”… Now, I know there’s nothing intrinsically terribly wrong with movement-detection systems, but the trend towards the automation of fines and punishment, and indeed of everyday life and interaction more broadly, is surely not one that we should be encouraging. I’ve seen these kinds of systems in demonstrations (most recently at the research labs of Japan Railways, of which more later…) but, despite their undoubtedly impressive capabilities and worthwhile potential, they leave me with a sinking feeling, and a kind of mourning for the further loss of little bits of humanity. Maybe that’s just a personal emotion, but I don’t think we take enough account of both the generation and loss of emotions in response to increasing surveillance and control.
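At bottom, such a system reduces ‘incorrect behaviour’ to hard-coded rules like the following sketch (a hypothetical illustration, not the actual ORETO classifier), which is exactly what makes automated penalisation troubling: the rule has no room for context or judgement.

```python
# Illustrative sketch of rule-based 'incorrect behaviour' detection of the
# kind the article describes. The event fields and the single rule here are
# hypothetical, invented for illustration only.

def classify_crossing(event):
    """Label a pedestrian-crossing event as 'normal' or 'incorrect'.

    event: dict with 'signal' ('walk' or 'dont_walk') and
           'in_crossing' (True if the pedestrian is inside the marked zone).
    """
    if event["signal"] == "dont_walk" and event["in_crossing"]:
        return "incorrect"   # the behaviour a system could then 'penalise'
    return "normal"

print(classify_crossing({"signal": "dont_walk", "in_crossing": True}))  # incorrect
print(classify_crossing({"signal": "walk", "in_crossing": True}))       # normal
```

A person helping a child across against the light and a reckless jaywalker trigger exactly the same label; whatever reasoning the human monitor would apply is simply absent from the rule.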

Further Reference: David Vallejo, Javier Albusac, Luis Jiménez, Carlos González and Juan Moreno (2009) ‘A cognitive surveillance system for detecting incorrect traffic behaviors’, Expert Systems with Applications 36 (7): 10503–10511.

Moon protest highlights wider border surveillance issues

The mass mooning of the US balloon camera owned by Sierra Nevada Corporation went ahead, but the irony was that the system had already been disabled by the weather. Apparently a large thunderstorm caused a gash in the fabric of the balloon last week, which, if nothing else, should prove rather more effective than the protest in making sure that the US government does not invest in it.

However, the wider issue of US surveillance of the border with Canada remains (not to mention that of the Mexican border, already a major concern), and whilst this particular technology, and the appropriately ridiculous protest, have attracted most of the attention in the media, let’s not forget that camera towers have been erected and the USA is flying UAVs along the border. Surely President Obama should realise that the paranoid policies of his predecessor do nothing apart from damage relationships (and trade) with a close neighbour?

Time for an international convention on robotic weapons

The estimable Professor Noel Sharkey is calling today for a debate on the use of robotic weapon systems, like the UAVs that I have been covering sporadically. He’s right, of course, but we need to go much further and much faster. With increasing numbers of countries turning to remote-controlled weapon systems, and the potential deployment of more independent or even ‘intelligent’ robots in war, what we need is an international convention to limit the development and use of automated weapon systems, with clear guidelines on what lines are not to be crossed. In the case of atomic, biological and chemical weapons, these kinds of conventions have had mixed success, but we have had very few clear examples of the use of such weapons in international conflicts.