Increasing ‘popularity’ of drones

National Public Radio (NPR) in the US broadcast an interesting short piece on the spread of drones or Unmanned Aerial Vehicles (UAVs). Drones of all sizes are now increasingly used by national and local states – though I think ‘popular’ is the wrong word, as most people have no idea that they are so widespread, or even that they are operating at all. There’s also a longer piece by Barbara Ehrenreich on the more general issue of robotic warfare on the Guardian’s Comment is Free site today.

According to my sources in the UK, Reaper drones have just started flying out of RAF Northolt over London, in preparation for the 2012 London Olympics.

Helping robots find their way in the city

Many approaches to developing cities as automated environments – whether for robotics, augmented reality or ubiquitous computing – tend to take as their premise the addition of items, generally computing devices, to the environment. Thus, for example, RFID chips can be embedded in buildings and objects, which could (and in some cases already do) communicate with each other and with mobile devices to form networks enabling all kinds of location-based services, mobile commerce and, of course, surveillance.

But for robots in the city, such a complex network of communication is not strictly necessary. Cities already contain many relatively stable points by which such artificial entities can orient themselves, though not all of them are obvious. One recent Japanese paper, mentioned on Boing Boing, advocates the use of manhole covers, which tend to be static, metallic, quite distinctive and relatively long-lasting – all useful qualities for establishing location. The shapes of manhole covers could be recorded and used as location-finding data, with no need for embedded chips and the like.
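In practice this amounts to landmark matching: extract a shape signature from a camera image of a cover, then look it up in a database of geotagged covers. Here is a minimal sketch of the lookup step only – the descriptors, database entries and threshold are all invented for illustration, not taken from the paper:

```python
import math

# Hypothetical database of known manhole covers: a shape descriptor
# (e.g. a few normalised moment values extracted from an image) mapped
# to the surveyed position of that cover. All values are made up.
COVER_DB = [
    {"descriptor": (0.21, 0.03, 0.007), "lat": 35.6895, "lon": 139.6917},
    {"descriptor": (0.18, 0.05, 0.012), "lat": 35.6890, "lon": 139.6925},
]

def distance(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def locate(observed, max_distance=0.05):
    """Return the position of the best-matching cover, or None if nothing is close."""
    best = min(COVER_DB, key=lambda c: distance(c["descriptor"], observed))
    if distance(best["descriptor"], observed) <= max_distance:
        return (best["lat"], best["lon"])
    return None

print(locate((0.20, 0.04, 0.008)))  # -> (35.6895, 139.6917)
```

A real system would of course need a robust, viewpoint-invariant descriptor and a spatial index rather than a linear scan, but the point stands: the covers themselves do all the infrastructural work.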

It isn’t mentioned in the article, but I wonder whether such data could also be used by other inhabitants of the city with limited sensory capabilities: could one equip sensorially-impaired people with devices that read the same data, helping them to navigate the city more effectively? On the less positive side, I also wonder whether such data would prove to be highly desirable information for use in urban warfare…

Facebook face-recognition

Reports are that US users can now use an automated face-recognition function to tag people in photos posted to the site. To make it clear, this is not the already dubious practice of someone else tagging you in a photo, but an automated service: upload a picture and the system will identify faces and suggest tags on its own.

As a Facebook engineer is quoted as saying:

“Now if you upload pictures from your cousin’s wedding, we’ll group together pictures of the bride and suggest her name… Instead of typing her name 64 times, all you’ll need to do is click ‘Save’ to tag all of your cousin’s pictures at once.”
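Behind a feature like this presumably sits some form of face-embedding clustering: faces that are close together in a learned feature space get grouped as one suggested person. Facebook has not published its pipeline, so the following is only a toy illustration of the grouping idea, with invented embeddings and an invented similarity threshold:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def group_faces(embeddings, threshold=0.8):
    """Greedily cluster embeddings; each cluster becomes one 'suggested' person."""
    clusters = []  # each cluster is a list of indices into `embeddings`
    for i, emb in enumerate(embeddings):
        for cluster in clusters:
            # compare against the cluster's first member, its exemplar
            if cosine_similarity(emb, embeddings[cluster[0]]) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Toy embeddings: two photos of the bride, one of someone else.
faces = [(0.90, 0.10, 0.20), (0.88, 0.12, 0.21), (0.10, 0.90, 0.30)]
print(group_faces(faces))  # -> [[0, 1], [2]]
```

Tag the first photo once and every photo in the same cluster inherits the suggestion – which is exactly how one click can label a person across 64 pictures at a stroke.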

Once again, just as with Facebook Places, the privacy implications of this do not appear to have been thought through (or more likely just disregarded) and it’s notable that this has not yet been extended to Canada, where the federal Privacy Commissioner has made it very clear that Facebook cannot unilaterally override privacy laws.

Let’s see how this one plays out, and how much, once again, Facebook has to retrofit privacy settings…

Controlling Robotic Weapons

I’m delighted to be informed by Professor Noel Sharkey that I have been invited to become the first member of the Advisory Board of the International Committee for Robot Arms Control (ICRAC). ICRAC aims to help prevent the unfettered spread of automated weapons systems and to produce an international convention, or some other kind of binding agreement, to control their use. I’ve been tracking the development of robotic surveillance (and killing) systems for quite a while now, and I think this campaign is absolutely essential. A recent piece in The Times of London goes into some of the issues quite well. There is a lot of work to do here to persuade governments to control what many militaries think will be ‘essential’ to warfare in the coming century, but the landmines campaign is a good example of what can be done – this time, ideally, before robotic weapons become too common.

US Predator drones in Pakistan

Although the US military has been operating its Predator Unmanned Aerial Vehicles (UAVs) (both surveillance and weaponized versions) in both Pakistan and Afghanistan for some time now, the Pakistani government is now, for the first time, reported to be accepting their use as an official part of its own military’s operations in the South Waziristan region. This is the area where it has long been known that some of the most important Taliban and Al-Qaeda groups have been ‘hiding’ – but hiding in pretty much plain sight. More on military drones here and here.

More military robots…

A story in the Daily Mail shows two new military robot surveillance devices developed for the UK Ministry of Defence’s Defence Equipment and Support (DE&S) organisation. The first is a throwable rolling robot equipped with multiple sensors, which can be chucked like a hand grenade and then operated by remote control. The second is another Micro-(Unmanned) Aerial Vehicle (Micro-UAV or MAV), a tiny helicopter which carries a surveillance camera. There have been rolling surveillance robots around for a while now (like the Rotundus GroundBot from Sweden), but this toughened version seems to be unique. The helicopter MAV doesn’t seem to be particularly new; indeed it looks, at least from the pictures, pretty similar to the one controversially bought by Staffordshire police in Britain – which is made by Microdrones of Germany.

The proliferation of such devices in both military and civil use is going pretty much unchecked and unnoticed by legislators at present. Media coverage seems limited to ‘hey, cool!’ – and yes, they are pretty cool as pieces of technology – but use in humanitarian contexts (for example, rolling robots sending back pictures from a partially-collapsed building, or MAVs flying over a disaster zone) is a whole lot different from warfare, which is a whole lot different again from civilian law enforcement, commercial espionage or simple voyeurism. As surveillance devices become increasingly small, mobile and independent, we face a whole new set of problems for privacy, and despite the fact that we warned regulators about these problems back in 2006 in our Report on the Surveillance Society, little government thought seems to have been devoted to these and other new technologies of surveillance.

The use of robots in war is of course something else I have become very interested in, especially as these flying and rolling sensor-platforms become increasingly independent in their operation and, like the US Predator drones employed in Afghanistan and Pakistan or the MAARS battlefield robot made by QinetiQ / Foster-Miller, become weapons platforms too. This is an urgent but still largely unnoticed international human rights and arms control issue, and one that the new International Committee for Robot Arms Control (in which I am now getting involved) will hopefully play a leading role in addressing.

Automation and Imagination

Peter Tu writes over on Collective Imagination that greater automation might prevent racism and provide for greater privacy in visual surveillance, providing what he calls ‘race-blind glasses’. The argument is not a new one at all; indeed, the point about racism is almost the same proposition advanced by Professor Gary Marx in the mid-90s about the prospects for face-recognition. Unfortunately, Dr Tu does several of the usual things: he argues that ‘the genie is out of the bottle’ on visual surveillance, as if technological development of any kind were an unstoppable linear force that cannot be controlled by human politics; and he seems to assume that technologies are somehow separate from the social context in which they are created and used – when of course technologies are profoundly social. Although he is more cautious than some, this still leads to the rather over-optimistic conclusion, the same one that has been advanced for over a century now, that technology will solve – or at least make up for – social problems. I’d like to think so. Unfortunately, empirical evidence suggests that the reality will not be so simple.

The example Dr Tu gives on the site is a simple binary system: a monitor shows humans as white pixels on a black background, with a line representing the edge of a station platform. It doesn’t matter who the people are, or their race or intent – if they transgress the line, the alarm sounds and the situation can be dealt with (a toy version is sketched below). This is what Michalis Lianos refers to as an Automated Socio-Technical Environment (ASTE). Of course, such simple systems are profoundly stupid in a way that the term ‘artificial intelligence’ disguises, and the binary can hinder as much as it can help in many situations.

More complex recognition systems are needed if one wants to tell one person from another or to identify ‘intent’, and it is here that all those human social problems return with a vengeance. Research on face-recognition systems, for example, has shown that prejudices can get embedded within programs as much as priorities; in other words, the politics of identification and recognition (and all the messiness that this entails) shifts into the code, where it is almost impossible for non-programmers (and often even programmers themselves) to see. And what better justification for the expression of racism can there be than that a suspect has been unarguably ‘recognised’ by a machine? ‘Nothing to do with me, son, the computer says you’re guilty…’ The idea that ‘intent’ can in any way be determined by superficial visualisation is supported by very little convincing evidence, and yet techno-optimist (and apparently socio-pessimist) researchers push ahead with promoting the idea that computer-aided analysis of ‘microexpressions’ will help tell a terrorist from a tourist. And don’t get me started on MRI…
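For concreteness, here is a toy version of the station-platform example – the frame size, edge position and ‘people’ are all invented, and this is obviously nothing like a deployed system:

```python
import numpy as np

EDGE_ROW = 5  # row index of the platform edge; rows below it are the track

def platform_alarm(frame: np.ndarray, edge_row: int = EDGE_ROW) -> bool:
    """Alarm if any 'person' pixel (value 1) lies beyond the platform edge.
    Note what the check ignores: who the person is, and why they crossed."""
    return bool(frame[edge_row:, :].any())

frame = np.zeros((8, 10), dtype=np.uint8)
frame[2, 3] = 1                # someone safely on the platform
print(platform_alarm(frame))   # False
frame[6, 4] = 1                # someone over the edge
print(platform_alarm(frame))   # True
```

The system is, precisely as claimed, blind to race and identity – and equally blind to the fallen passenger, the maintenance worker and the fleeing suspect, which is where the pressure for ‘smarter’ recognition, and all its politics, comes back in.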

I hope our genuine human ‘collective imagination’ can do rather better than this.

Another day, another ‘intelligent’ surveillance system…

Yet another so-called ‘intelligent’ surveillance system has been announced. This one comes from Spain and is designed to detect abnormal behaviour on and around pedestrian crossings.

Comparison between the reasoning models of the artificial system and a theoretical human monitor in a traffic-based setting. (Credit: ORETO research group / SINC)

The article in Science Daily dryly notes that it could be used “to penalise incorrect behaviour”… Now, I know there’s nothing intrinsically terribly wrong with movement-detection systems, but the trend towards the automation of fines and punishment – and indeed of everyday life and interaction more broadly – is surely not one that we should be encouraging. I’ve seen these kinds of systems working in demonstrations (most recently at the research labs of Japan Railways, of which more later…) but, despite their undoubtedly impressive capabilities and worthwhile potential, they leave me with a sinking feeling, and a kind of mourning for the further loss of little bits of humanity. Maybe that’s just a personal emotion, but I don’t think we take enough account of both the generation and loss of emotions in response to increasing surveillance and control.
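Judging only from the article’s description, the ‘reasoning’ layer probably amounts to rule-based checks over tracked objects. The following is a guess at that idea in miniature – the class, rules and labels are my own invention, not the ORETO group’s actual model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    object_type: str   # "pedestrian" or "vehicle"
    in_crossing: bool  # currently inside the crossing area
    light_state: str   # state of the signal facing this object: "red" or "green"

def incorrect_behaviour(t: Track) -> Optional[str]:
    """Return a description of the infraction, or None if behaviour is 'correct'."""
    if t.object_type == "pedestrian" and t.in_crossing and t.light_state == "red":
        return "pedestrian crossing against the light"
    if t.object_type == "vehicle" and t.in_crossing and t.light_state == "red":
        return "vehicle entering the crossing on red"
    return None

print(incorrect_behaviour(Track("pedestrian", True, "red")))  # flagged
print(incorrect_behaviour(Track("vehicle", False, "green")))  # None
```

Notice how quickly ‘incorrect’ hardens into a string that a fine can be attached to – the automation of judgement is built into the data model itself.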

Further Reference: David Vallejo, Javier Albusac, Luis Jiménez, Carlos González and Juan Moreno (2009) ‘A cognitive surveillance system for detecting incorrect traffic behaviors,’ Expert Systems with Applications 36 (7): 10503-10511

Time for an international convention on robotic weapons

The estimable Professor Noel Sharkey is calling today for a debate on the use of robotic weapon systems, like the UAVs that I have been covering sporadically. He’s right of course, but we need to go much further and much faster. With increasing numbers of countries turning to remote-controlled weapon systems, and the potential deployment of more independent or even ‘intelligent’ robots in war, what we need is an international convention which will limit the development and use of automated weapon systems, with clear guidelines on which lines are not to be crossed. In the case of atomic, biological and chemical weapons these kinds of conventions have had mixed success, but we have had very few clear examples of the use of such weapons in international conflicts.

Surveillance and the Recession

In the editorial of the latest issue of Surveillance & Society, I speculated that the global recession would lead to surveillance and security coming up against the demands of capital to flow (i.e. as margins get squeezed, things like complex border controls and expensive monitoring equipment become more obvious costs). This was prompted by news that in the UK some local authorities were laying off the staff employed to monitor cameras and leaving their control rooms empty.

However, an article in the Boston Globe today suggests otherwise. The piece in the business section claims that – at least in its area of coverage – the recession is proving to be good business for surveillance firms, especially high-tech ones, basically because both crime and the costs of dealing with it loom comparatively larger in lean periods. The article doesn’t entirely contradict my reasoning: organisations in the USA are also starting to question the costs of human monitoring within the organisation, but instead they are installing automated software monitoring or outsourcing the task to more sophisticated control rooms run by security companies elsewhere.

Shouting cameras in the UK (The Register)

They also note that human patrols are in some cases being replaced (or at least could be replaced – it’s unclear exactly how much of the article is PR for the companies involved and how much is factual reporting) by ‘video patrols’, i.e. remote monitoring combined with reassuring (or instructive) disembodied voices from speakers attached to cameras. Now, we’ve seen this before in the UK as part of New Labour’s rather ridiculous ‘Respect Zones’ plan, but the calming voice of authority from a camera – now what famous novel does that sound like? And if it’s not quite Nineteen Eighty-Four, it is at least reminiscent of the admonishing voices of the ‘verbal morality statute’ machines in that odd (but actually rather effective) combination of action movie and Philip K. Dickian future, Demolition Man. The point is that this is what Bruce Schneier has called ‘security theater’: it doesn’t provide any real security, merely the impression that there might be.

How long will it be before people – not least criminals – start to get cynical about the disembodied voice of authority? That cynicism has the potential to undermine more general confidence in CCTV and in technological solutions to crime and the fear of crime, and could end up increasing both.