Surveillance devices get smaller… but it’s privacy that vanishes.

I’ve been blogging for a while about miniaturization and the ‘vanishing’ of surveillance devices. This disappearance occurs in many ways, one of which is the incorporation of high-tech surveillance features into objects and devices we are already used to, or their reduction to a size and form factor that feels relatively familiar. Two examples coincidentally arrived in my inbox over the last week.

The first was the news that the US Navy has awarded a development contract for binoculars that incorporate three-dimensional face-recognition technology from StereoVision Inc (who may well be the bunch of California-based face-recognition people I met at a biometrics industry show a few years back). This supposedly gets round the problems that standard two-dimensional face recognition has in dealing with unpredictably mobile crowds of people in natural light (AKA ‘the real world’). The issue I’m highlighting here, however, is that we don’t expect binoculars to be equipped with face recognition. Binoculars are not entirely socially acceptable items as it is, and already convey implications of creepy voyeurism when used in urban or domestic situations, but this is something else entirely.

Small terahertz wave scanner being tested by the NYPD in January (NYPD)

The second is the extraordinarily rapid ongoing progress towards working handheld terahertz wave technology (a far more effective form of scanning than either the backscatter x-ray or millimeter wave systems used in the bulky bodyscanners currently deployed at airports). Just four years ago, I noted the theoretical proof that this was possible, and last month it was revealed that police in New York were testing handheld terahertz wave scanners (Thruvision from Digital Barriers), which of course people were likening to Star Trek’s tricorders. The idea that the police could perform a virtual strip search on the street without even having to ask is, again, a pretty major change, but it’s also the case that the basic technology can be incorporated into standard video camera systems – potentially everyone with a mobile phone camera could be doing this in a few years.

I’m not a technological determinist, but in the context of societies in which suspicion, publicity and exposure are becoming increasingly socially normative, I have to ask what these technologies and many others like them imply for conventional responses based on ‘privacy’. Privacy by design is pretty much a joke when the sole purpose of such devices is to breach privacy. And control by privacy regulators is based on the ability to know that one is actually under surveillance – when everything can potentially be performing some kind of highly advanced surveillance, how is one to tell, let alone select which of the constant breaches of privacy is worth challenging? So, do we simply ban the use of certain forms of surveillance technology in public places? How would this be enforced, given that any conventional form factor might or might not contain such technology? And would this simply result in an even more intense asymmetry of the gaze, where the military and the police have such devices but people are prevented from using them? Do we rely on camouflage, spoofing and disabling techniques and technologies against those who might be seeking to expose us? You can bet the state will not be happy if these become widespread – just look at the police reaction to existing sousveillance and cop-watching initiatives…

Make like a Dandy Highwayman to beat Face Recognition Software

Spoofing biometrics has become a mini-industry, as one would expect now that the technologies of recognition are becoming more pervasive. And not all of these methods are high-tech. Tsutomu Matsumoto’s low-tech ‘gummy fingerprint’ approach to beating fingerprint recognition is already quite well-known, for example. I’ve also seen him demonstrate very effective iris-scan spoofing using cardboard irises.

Facial recognition would seem the most obvious target for such spoofing given that it is likely to be the system most used in public or other open spaces. And one of the most ingenious systems I have seen recently involves a few very simple tips. Inspired by the increasing hostility of legal systems to masks and head coverings, CV Dazzle claims to be an ‘open-source’ camouflage system for defeating computer vision.

Among the interesting findings of the project, which started as part of the Interactive Telecommunications Program at NYU, is that the more complex, high-fashion, disguise-type attempts to beat facial recognition did not work as well as the simpler flat camouflage approaches. The solution suggested thus involves many of the same principles as earlier forms of camouflage: breaking up surface patterns and disguising surface topography. It uses startling make-up techniques which look a bit like 80s New Romantic face painting as deployed by Adam and the Ants – hence the title of this post! The system concentrates especially on key areas of the face that are essential to most facial recognition software, such as the area around the bridge of the nose, the cheekbones and eye-socket depth.
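To make the principle concrete, here is a toy sketch (the regions, weights and threshold are all invented for illustration – this is not any real recognition algorithm, and certainly not CV Dazzle’s): a matcher that leans heavily on a few key facial regions can be pushed below its detection threshold by painting a high-contrast pattern over just one of them.

```python
# Toy model of region-weighted face matching (hypothetical, for illustration).
# A "face" is a dict of region name -> brightness pattern.
TEMPLATE = {
    "nose_bridge": [0.9, 0.8, 0.9],
    "eyes":        [0.2, 0.1, 0.2],
    "cheeks":      [0.6, 0.6, 0.5],
}

# Illustrative weights: the detector leans on the nose bridge and eyes.
WEIGHTS = {"nose_bridge": 0.5, "eyes": 0.35, "cheeks": 0.15}

def similarity(face):
    """Weighted similarity between a face and the stored template (0..1)."""
    score = 0.0
    for region, weight in WEIGHTS.items():
        t, f = TEMPLATE[region], face[region]
        diff = sum(abs(a - b) for a, b in zip(t, f)) / len(t)
        score += weight * (1.0 - diff)
    return score

def dazzle(face, region):
    """Paint a high-contrast pattern over one region (invert its values)."""
    painted = dict(face)
    painted[region] = [1.0 - v for v in face[region]]
    return painted

THRESHOLD = 0.8
plain = {k: list(v) for k, v in TEMPLATE.items()}
print(similarity(plain) >= THRESHOLD)                         # True: detected
print(similarity(dazzle(plain, "nose_bridge")) >= THRESHOLD)  # False: missed
```

The point of the toy is the asymmetry: disrupting one heavily weighted region costs the attacker a stripe of make-up but costs the matcher most of its score.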

Results from the CV Dazzle project

So, will we see a revival of the Dandy Highwayman look as a strategy of counter-surveillance? Or more likely, will social embarrassment and the desire to seem ‘normal’ mean that video surveillance operators have a relatively easy life?

Adam Ant in the early 80s

Do we need to be concerned about a new iPhone face-recognition app?

The Huffington Post has got itself in a twist about a new iPhone face-recognition app, Recognizr, which it claims will enable someone to take a person’s picture and instantly gain access to all their social networking details. Except that isn’t quite the case. As one (largely ignored) commenter points out, it isn’t an open system – the original story (linked in the HP one) says that you have to opt in to the system, uploading your photo, along with the social networking accounts you want linked, to the developer’s own database. So only those who have decided they want to be part of this system can be recognised and linked. It’s only a rather small step from existing methods of social networking, and could perhaps be considered the face-recognition equivalent of giving out a business card. I would agree there is potential for all kinds of development from this, but it isn’t (yet) a stalker’s or a marketer’s dream.
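The opt-in design described above can be sketched in a few lines (class and method names here are hypothetical, not Recognizr’s actual API): recognition only works for people who have enrolled themselves, so a stranger’s photo simply returns nothing.

```python
# Sketch of an opt-in recognition registry (hypothetical design, not
# Recognizr's real implementation): only enrolled users can be matched.

class OptInRegistry:
    """Maps a face signature to a profile, but only for enrolled users."""

    def __init__(self):
        self._profiles = {}

    def enrol(self, face_signature, social_links):
        # The user uploads their own photo and chooses which accounts to link.
        self._profiles[face_signature] = social_links

    def recognise(self, face_signature):
        # Anyone not in the database is simply unknown.
        return self._profiles.get(face_signature)

registry = OptInRegistry()
registry.enrol("alice-sig", {"twitter": "@alice", "blog": "alice.example"})

print(registry.recognise("alice-sig"))   # enrolled: profile returned
print(registry.recognise("bob-sig"))     # never opted in: None
```

The privacy question is thus less about the lookup itself than about who controls the database and how easily ‘opt-in’ might later become the default.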

You can find the Swedish developer, The Astonishing Tribe (err, TAT!), here, and the source story, which is just slightly more circumspect, from Popular Science, here.

(Thanks to David Lyon for the link to the HP story).

Automation and Imagination

Peter Tu writes over on Collective Imagination that greater automation might prevent racism and provide for greater privacy in visual surveillance, providing what he calls ‘race-blind glasses’. The argument is not a new one at all; indeed, the argument about racism is almost the same proposition advanced by Professor Gary Marx in the mid-90s about the prospects for face recognition. Unfortunately, Dr Tu does several of the usual things: first, he argues that ‘the genie is out of the bottle’ on visual surveillance, as if technological development of any kind were an unstoppable linear force that cannot be controlled by human politics; and second, he seems to think that technologies are somehow separate from the social context in which they are created and used – when of course technologies are profoundly social. Although he is more cautious than some, this still leads to the rather over-optimistic conclusion, the same one that has been advanced for over a century now, that technology will solve – or at least make up for – social problems. I’d like to think so. Unfortunately, empirical evidence suggests that the reality will not be so simple.

The example Dr Tu gives on the site is a simple binary system – a monitor shows humans as white pixels on a black background, with a line representing the edge of a station platform. It doesn’t matter who the people are, or their race or intent – if they transgress the line, the alarm sounds and the situation can be dealt with. This is what Michalis Lianos refers to as an Automated Socio-Technical Environment (ASTE). Of course, such simple systems are profoundly stupid in a way that the term ‘artificial intelligence’ disguises, and the binary can hinder as much as it can help in many situations. More complex recognition systems are needed if one wants to tell one person from another or identify ‘intent’, and it is here that all those human social problems return with a vengeance.
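Dr Tu’s binary system really is as simple as it sounds, and can be sketched in a few lines (a purely illustrative toy, not his actual code): the alarm depends on nothing but pixel positions, which is exactly what makes it both ‘race-blind’ and, in Lianos’s sense, profoundly stupid.

```python
# Toy sketch of the binary line-crossing system described above
# (illustrative only): people are white pixels (1s) on a black frame (0s),
# a column index marks the platform edge, and the alarm depends on nothing
# but whether any pixel crosses it - identity and intent never enter in.

PLATFORM_EDGE = 4  # column index of the safety line

def alarm(frame):
    """True if any 'person' pixel lies on or beyond the platform edge."""
    return any(
        pixel == 1 and col >= PLATFORM_EDGE
        for row in frame
        for col, pixel in enumerate(row)
    )

safe_frame = [
    [0, 1, 0, 0, 0, 0],   # someone standing well back
    [1, 0, 0, 0, 0, 0],
]
risky_frame = [
    [0, 0, 0, 0, 1, 0],   # someone over the line
    [0, 0, 0, 0, 0, 0],
]

print(alarm(safe_frame))    # False
print(alarm(risky_frame))   # True
```

Note what the sketch cannot do: everything beyond the threshold test – telling one person from another, inferring ‘intent’ – requires exactly the richer recognition systems in which the social problems reappear.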
Research on face-recognition systems, for example, has shown that prejudices can get embedded within programs as much as priorities; in other words, the politics of identification and recognition (and all the messiness that this entails) shifts into the code, where it is almost impossible for non-programmers (and often even programmers themselves) to see. And what better justification for the expression of racism can there be than that a suspect has been unarguably ‘recognised’ by a machine? ‘Nothing to do with me, son, the computer says you’re guilty…’ And the idea that ‘intent’ can be in any way determined by superficial visualisation is supported by very little convincing evidence, and yet techno-optimist (and apparently socio-pessimist) researchers push ahead with promoting the idea that computer-aided analysis of ‘microexpressions’ will help tell a terrorist from a tourist. And don’t get me started on MRI…

I hope our genuine human ‘collective imagination’ can do rather better than this.

Why Japan is a surveillance society

We met yesterday with members of the Campaign Against Surveillance Society (AKA Kanshi-No!), a small but active organisation formed in 2002 in response to the Japanese government’s jyuminkihondaichou network system (Residents’ Registry Network System, or juki-net) plans and the simultaneous introduction of police video surveillance cameras in Kabukicho in Tokyo. We had a long and detailed discussion which would be impossible to reproduce in full here, but I did get much more of a sense of what in particular is seen as objectionable about past and current Japanese government actions in this area.

The main thrust of the argument was to do with the top-down imposition of new forms of control on Japanese society. This, they argued, was the product of the longtime ruling Liberal Democratic Party’s neo-liberal turn and has thus been some time in the making. It is not a post-9/11 phenomenon, although they were also clear that the G-8 summit held in Hokkaido in 2008 used many of the same forms of ‘community action’ in the name of preventing terrorism as are used in the name of anzen anshin (safety) or bohan (security) from crime in Japanese cities every day.

However, they argued that while this might be a product of neo-liberalism, the forms of community security were drawn from, or influenced by, a much older style of governance: the Edo-period mutual surveillance and control of the goningumi (five-family groups). (This is actually remarkably similar to the argument that I, David Lyon and Kiyoshi Abe made in our paper in Urban Studies in 2007!) Thus the mini-patoka and wan-wan patrol initiatives in Arakawa-ku were seen as as much a part of an imposed state ordering process as the more obviously externally-derived CCTV-based form of urban governance going on in Shinjuku.

Underlying all this was the creation of an infrastructure for the surveillance society: juki-net. They were certainly aware of the way that juki-net had been limited from the original plans, and indeed they regarded these limits as the major success of the popular campaign against the system. However, they argued that the 11-digit unique number now assigned to every citizen was the most important element of the plans, and that this remained and could therefore serve as the foundation for future expansion and linking of government databases. They pointed to the way that the passport system had already been connected.

Kanshi-no! were also concerned, in this context, about the development of plans for experimental facial recognition systems to be used in Tokyo (at a location as yet unrevealed). This would imply the development of a national database of facial images, and a further extension of the personal information held by central government on individuals.

So was this all in the name of puraibashi (privacy), or some wider social concern, or something else? Certainly, privacy was mentioned, but not as much as one would expect in an interview with a British activist group on the same issues. I asked in particular about the decline of trust and community. The argument here was that community and any lingering sense of social trust had already been destroyed, and that CCTV cameras and other surveillance measures were not responsible in themselves. However, from an outside perspective it does seem that there is more of a sense of social assurance and community, even in Tokyo, than there is in the UK. I do wonder sometimes, when people (from any country) refer back to some time when some idealised ‘trust’ or ‘community’ existed, when exactly it was! Rather than a particular time, it seems to be a current that either asserts itself or is suppressed or co-opted into the aims of more powerful concerns in particular times and places.

I asked at one point what immediate change or new laws Kanshi-no! would want, and the answer was quite simple: no new laws, just for the state to respect the constitution, which they said already made both CCTV cameras and juki-net illegal (although of course the Supreme Court recently disagreed).

(Thank you to the two members of Kanshi-no! for their time and patience with my questions)