Automation and Imagination

Peter Tu writes over on Collective Imagination that greater automation might prevent racism and provide for greater privacy in visual surveillance, offering what he calls ‘race-blind glasses’. The argument is not a new one at all; indeed, the claim about racism is almost the same proposition advanced by Professor Gary Marx in the mid-90s about the prospects for face recognition. Unfortunately, Dr Tu does several of the usual things: first, he argues that ‘the genie is out of the bottle’ on visual surveillance, as if technological development of any kind were an unstoppable linear force that cannot be controlled by human politics; and secondly, he seems to treat technologies as somehow separate from the social context in which they are created and used – when of course technologies are profoundly social. Although he is more cautious than some, this still leads to the rather over-optimistic conclusion, the same one that has been advanced for over a century now, that technology will solve – or at least make up for – social problems. I’d like to think so. Unfortunately, empirical evidence suggests that the reality will not be so simple.

The example Dr Tu gives on the site is of a simple binary system: a monitor shows humans as white pixels on a black background, with a line representing the edge of a station platform. It doesn’t matter who the people are, or their race or intent – if they transgress the line, the alarm sounds and the situation can be dealt with. This is what Michalis Lianos refers to as an Automated Socio-Technical Environment (ASTE). Of course, these simple systems are profoundly stupid in a way that the term ‘artificial intelligence’ disguises, and in many situations the binary can hinder as much as it can help.

More complex recognition systems are needed if one wants to tell one person from another or to identify ‘intent’, and it is here that all those human social problems return with a vengeance. Research on face-recognition systems, for example, has shown that prejudices can get embedded within programs as much as priorities; in other words, the politics of identification and recognition (and all the messiness that this entails) shifts into the code, where it is almost impossible for non-programmers (and often even programmers themselves) to see it. And what better justification for the expression of racism can there be than that a suspect has been unarguably ‘recognised’ by a machine? ‘Nothing to do with me, son, the computer says you’re guilty…’ As for the idea that ‘intent’ can be determined in any way by superficial visualisation, it is supported by very little evidence, none of it convincing, and yet techno-optimist (and apparently socio-pessimist) researchers push ahead with promoting the idea that computer-aided analysis of ‘microexpressions’ will help tell a terrorist from a tourist. And don’t get me started on MRI…
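Incidentally, the platform-edge example really is as crude as it sounds. Here is a minimal sketch of such a binary line-crossing detector – the frame format and edge position are invented purely for illustration, and no real product is implied:

```python
# Minimal sketch of the binary ASTE described above: people appear as white
# pixels (1) on a black background (0), and the only 'decision' the system
# makes is whether any pixel has crossed the platform-edge line.

EDGE_COLUMN = 12  # hypothetical column marking the platform edge

def edge_transgressed(frame):
    """Return True if any 'person' pixel lies beyond the platform edge.

    `frame` is a list of rows, each a list of 0/1 pixel values.
    """
    return any(
        pixel == 1
        for row in frame
        for pixel in row[EDGE_COLUMN:]
    )

# Example: a 3x16 frame with a single pixel over the line triggers the alarm,
# regardless of who that pixel 'is' - precisely the system's even-handedness
# and its stupidity.
frame = [[0] * 16 for _ in range(3)]
frame[1][14] = 1
if edge_transgressed(frame):
    print("ALARM: platform edge transgressed")
```

Everything such a system ‘knows’ is which side of a line a pixel is on; the politics only begins when we ask it to know more than that.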

I hope our genuine human ‘collective imagination’ can do rather better than this.

Another day, another ‘intelligent’ surveillance system…

Yet another so-called ‘intelligent’ surveillance system has been announced. This one comes from Spain and is designed to detect abnormal behaviour on and around pedestrian crossings.

Comparison between the reasoning models of the artificial system and a theoretical human monitor in a traffic-based setting. (Credit: ORETO research group / SINC)

The article in Science Daily dryly notes that it could be used “to penalise incorrect behaviour”… Now, I know there’s nothing intrinsically terribly wrong with movement-detection systems, but the trend towards the automation of fines and punishment, and indeed of everyday life and interaction more broadly, is surely not one that we should be encouraging. I’ve seen these kinds of systems work in demonstrations (most recently at the research labs of Japan Railways, more of which later…) but, despite their undoubtedly impressive capabilities and worthwhile potential, they leave me with a sinking feeling, and a kind of mourning for the further loss of little bits of humanity. Maybe that’s just a personal emotion, but I don’t think we take enough account of both the generation and loss of emotions in response to increasing surveillance and control.

Further Reference: David Vallejo, Javier Albusac, Luis Jiménez, Carlos González and Juan Moreno (2009) ‘A cognitive surveillance system for detecting incorrect traffic behaviors’, Expert Systems with Applications 36(7): 10503-10511.

FBI data warehouse revealed by EFF

Tenacious FoI and ‘institutional discovery’ work both in and out of the US courts by the Electronic Frontier Foundation has resulted in the FBI releasing lots of information about its enormous dataveillance program, based around the Investigative Data Warehouse (IDW). 

The clear and comprehensible report is available from EFF here, but the basic messages are that:

  • the FBI now has a data warehouse with over a billion unique documents, or seven times as many as are contained in the Library of Congress;
  • it is using content management and datamining software to connect, cross-reference and analyse data from over fifty previously separate datasets included in the warehouse – these include, by the way, the entire US-VISIT database, the No-Fly list and other controversial post-9/11 systems;
  • the IDW will be used for both link and pattern analysis using technology connected to the Foreign Terrorist Tracking Task Force (FTTTF) program, in other words Knowledge Discovery in Databases (KDD) software, which, by connecting people, groups and places, will generate entirely ‘new’ data and project links forward in time as predictions.

EFF conclude that datamining is the future for the IDW. This is true, but I would also say that it was the past and is the present too. Datamining is not new for the US intelligence services; indeed, many of the techniques we now call datamining were developed by the National Security Agency (NSA). There would be no point in the FBI just warehousing vast numbers of documents without techniques for analysing and connecting them. KDD may well be more recent for the FBI, and this Philip K. Dickian ‘pre-crime’ is most certainly the future in more ways than one…
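For readers who haven’t come across it, the ‘link analysis’ at the heart of KDD software rests on a very simple move: cross-referencing records so that connections appear which exist in no single document. A toy sketch – with invented records and a crude two-hop ‘prediction’ rule, nothing remotely like the FBI’s actual software – looks like this:

```python
# Toy sketch of shared-attribute link analysis. All records and names are
# invented; a real system works over billions of documents, not a dictionary.

from collections import defaultdict
from itertools import combinations

# Hypothetical records: person -> attributes extracted from documents
records = {
    "Person A": {"phone:555-0101", "flight:XY123"},
    "Person B": {"phone:555-0101", "address:12 Elm St"},
    "Person C": {"address:12 Elm St"},
}

# Invert the records: attribute -> the people who share it
by_attribute = defaultdict(set)
for person, attributes in records.items():
    for attribute in attributes:
        by_attribute[attribute].add(person)

# 'New' data: links that appear in no single document, only in the cross-reference
links = defaultdict(set)
for attribute, people in by_attribute.items():
    for a, b in combinations(sorted(people), 2):
        links[a].add(b)
        links[b].add(a)
links = dict(links)  # freeze as a plain dict before reading it back

# Naive 'prediction': anyone two hops away is flagged as a possible future associate
for person, direct in links.items():
    indirect = set().union(*(links[other] for other in direct)) - direct - {person}
    if indirect:
        print(f"{person}: linked to {sorted(direct)}, predicted link to {sorted(indirect)}")
```

The worry, of course, is not that this is clever but that it is so easy: every shared phone number or address becomes a ‘link’, and every link a potential suspicion.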

There is a lot that interests me here (and indeed, I am currently trying to write a piece about the socio-technical history of these massive intelligence data analysis systems), but one issue is whether this complex operation will ‘work’ or whether it will throw up so many random and worthless ‘connections’ (the ‘six degrees of Kevin Bacon’ syndrome) that it will actually slow down or damage investigations into real criminal activities. That all depends on the architecture of the system, and that is something we know little about, although there are a few hints in the EFF report…

(thanks to Rosamunde van Brakel for the link)

Tracking disease spread on the Internet

Internet disease tracking using interactive maps or mash-ups seems to be one of the more constructive uses of the surveillance potential that comes with the combination of easy-to-use digital mapping and online communications. Both Computer World and The Guardian tech blog reported a few days back how Google, following on from its tracking of previous flu epidemics, is experimenting with tracking swine flu cases in Mexico.

Google Flu Trends mapping system for Mexico

However, other web-crawler-based systems also exist for tracking the spread of disease (or indeed of potentially almost anything), as The Guardian reported on Wednesday. Leading the way is HealthMap, which comes complete with Twitter feeds and suchlike.

Swine Flu mapping from Healthmap.com

As the latter report makes clear, however, this is not all just good news; there are many problems with using web-crawlers to provide ‘reliable’ data, not least because the signal-to-noise ratio on the Internet is so low. The other problem is that although these systems might appear current or even ‘predictive’ by virtue of their speed and interactivity, they are of course actually always already in the past, as they are compilations of reports, many of which may already be dated before they are uploaded to the ‘net. Better real-time reporting from individuals may be possible with mobile reports, but these could lack the filter of expert medical knowledge and may lead to further degradation in the reliability of the data. Can you have both more reliability and speed / predictability with systems like this? That’s the big question…
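To make the ‘always already in the past’ point concrete, here is a toy illustration – the reports, dates and crude keyword filter are all invented – of what a crawler-based tracker is actually compiling:

```python
# Toy illustration: each point on a 'real-time' disease map is a report, and
# reports trail the events they describe. All data here is invented.

from datetime import date

reports = [
    {"text": "Suspected swine flu cluster reported", "event": date(2009, 4, 20), "published": date(2009, 4, 24)},
    {"text": "Flu vaccine stocks discussed in parliament", "event": date(2009, 4, 23), "published": date(2009, 4, 23)},
    {"text": "New swine flu cases confirmed in capital", "event": date(2009, 4, 22), "published": date(2009, 4, 27)},
]

KEYWORD = "swine flu"  # crude keyword matching: one source of the noise problem

for report in reports:
    if KEYWORD in report["text"].lower():
        lag = (report["published"] - report["event"]).days
        print(f"{report['published']}: {report['text']!r} (events {lag} day(s) earlier)")
```

Even with perfect filtering, the map is a compilation of yesterday’s reports; the ‘prediction’ is only ever an extrapolation from what has already been written down.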

(Thanks to Seda Gurses for pointing out the CW article to me!)

New report on facial recognition out now

There is an excellent new report on facial recognition now available for free download. The report is written by my one-time co-author on the subject, Lucas Introna of Lancaster University, and new Surveillance & Society advisory board member, Helen Nissenbaum of New York University.

The report is aimed primarily at people who are developing policy on, thinking of commissioning, or even using facial recognition, and it therefore concentrates on the practical questions (does it work? what are its limitations?); however, it does not neglect the moral and political issues of both overt and covert use. What is quite interesting for me is how little the technical problems with the systems have changed since Lucas and I wrote our piece back in 2004; the ability of facial recognition to work in real-world situations, as opposed to controlled environments, still appears limited by environmental and systemic variables like lighting, the size of the gallery of faces and so on.
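A rough back-of-the-envelope illustration (mine, not the report’s, and the false-match rate is invented) of why the size of the gallery matters: even a tiny per-comparison false-match rate compounds quickly as the number of enrolled faces grows.

```python
# Sketch: probability of at least one false match in a one-to-many search,
# assuming independent comparisons. The false-match rate is purely illustrative.

FALSE_MATCH_RATE = 0.001  # hypothetical per-comparison false-match rate

for gallery_size in [100, 1_000, 10_000, 100_000]:
    p_any_false_match = 1 - (1 - FALSE_MATCH_RATE) ** gallery_size
    print(f"gallery of {gallery_size:>7}: P(at least one false match) = {p_any_false_match:.2f}")
```

On that crude arithmetic, false matches become routine rather than exceptional at large gallery sizes – one reason why gallery size appears among the limiting variables.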

The report is probably the best non-technical summary available and is perfect for non-specialists who want to understand the state of the art in facial recognition and the range of issues associated with the technology. Very much recommended.

Global CCTV datamining project revealed

As a result of an annual report on datamining sent to the US Congress by the Office of the Director of National Intelligence, a research project, Video Analysis and Content Extraction (VACE), has been revealed. The program is aiming to produce a computer system that will be able to search and analyse video images, especially “surveillance-camera data from countries other than the United States”, to identify “well-established patterns of clearly suspicious behavior.”

Conducted by the Office of Incisive Analysis, part of the Intelligence Advanced Research Projects Activity (IARPA), the program has apparently been running since 2001, and is merely one of several post-9/11 research projects aiming to create advanced dataveillance systems to analyse data from global sources. How the USA would obtain the information is not specified…

One could spend a long time listing all the DARPA and IARPA projects that are running, many of which are speculative and come to nothing. The report also mentions the curious Project Reynard that I have mentioned before, which aims to analyse the behaviours of avatars in online gaming environments with the aim of detecting ‘suspicious behaviours’. Reynard is apparently achieving some successful results, but we have no real idea at what stage VACE is, and the report only states that some elements are being tested with real world data. This implies that there is nowhere near a complete system. Nevertheless the mentality behind these projects is worrying. It is hardly the first time that the USA has tried to create what Paul Edwards called a ‘closed world’ and these utopian projects which effectively try to know the whole world in some way (like ECHELON, or the FBI’s proposed Server in the Sky) are an ongoing US state obsession.

What is most worrying is the particular idea that ‘suspicious patterns of behaviour’ can be identified through constant surveillance and automated analysis – that our behaviour, and indeed our thoughts, are no longer our own business. Because it is thoughts, and anticipating action, that are the ultimate goal. One can see this, at a finer grain, in programs like Project Hostile Intent, a Department of Homeland Security initiative to analyse ‘microexpressions’, supposedly preconscious facial movements. The EU is not immune from such incredibly intrusive proposals: so-called ‘spy in the cabin’ cameras and microphones in the back of every seat have been proposed by the EU-funded SAFEE project, which is supported by a large consortium of security corporations. The European Commission has already hinted that it might try to ‘require’ airlines to use the system when developed.

No doubt too, because of the close (and largely secret and unaccountable) co-operation of the EU and USA on security issues, all the images and recordings would find their way into these proposed databases, and their inhuman agents would check them over to make sure we are all passive, good humans with correct behaviours, expressions and thoughts, whether we are in the real or the virtual world…

Surveillance and the Recession

In the editorial of the latest issue of Surveillance & Society, I speculated that the global recession would lead to surveillance and security coming up against the demands of capital to flow (i.e. as margins get squeezed, things like complex border controls and expensive monitoring equipment become more obvious costs). This was prompted by news that in the UK, some Local Authorities were laying off staff employed to monitor cameras and leaving their control rooms empty.

However, an article in the Boston Globe today says otherwise. The piece in the business section claims that – at least in its area of coverage – the recession is proving to be good business for surveillance firms, especially high-tech ones. The reasons are basically that both crime and the costs of dealing with it become comparatively larger in lean periods. The article doesn’t entirely contradict my reasoning: organisations in the USA are also starting to wonder about the costs of human monitoring within the organisation, but instead they are installing automated software monitoring or are outsourcing the monitoring to more sophisticated control rooms provided by security companies elsewhere.

Shouting cameras in the UK (The Register)

They also note that human patrols are in some cases being replaced (or at least that they can be replaced – it’s unclear exactly how much of the article is PR for the companies involved and how much is factual reporting) by ‘video patrols’, i.e. remote monitoring combined with reassuring (or instructive) disembodied voices from speakers attached to cameras. Now, we’ve seen this before in the UK as part of New Labour’s rather ridiculous ‘Respect Zones’ plan, but the calming voice of authority from a camera – what famous novel does that sound like? Actually, if it’s not Nineteen Eighty-Four, it is also rather reminiscent of the ubiquitous voice of Edgar Friendly in that odd (but actually rather effective) combination of action movie and Philip K. Dickian future, Demolition Man. The point is that this is what Bruce Schneier has called ‘security theater’. It doesn’t provide any real security, merely the impression that there might be.

How long will it be before people – not least criminals – start to get cynical about the disembodied voice of authority? This then has the potential to undermine more general confidence in CCTV and technological solutions to crime and fear of crime, and could end by increasing both.

Corrupting automated surveillance

OK, so automated surveillance systems are always right, aren’t they? I mean, they wouldn’t allow systems to be put into place that didn’t work, would they?

Image from t-redspeed system (KRIA)

That was probably the attitude of many Italians who were supposedly caught jumping red lights by a new T-redspeed looped-camera system manufactured by KRIA. However, the BBC is reporting today that the system had been rigged by shortening the traffic light sequence, and that hundreds of officials were involved in the scam that earned them a great deal of money.

Now, the advocates of automated surveillance will say that there was nothing wrong with the technology itself, and that may be true in this case, but technologies exist within social systems and, unless you remove people altogether or develop heuristic systems – both of which have their own ethical and practical problems – these kinds of things are always going to happen. It’s something those involved in assessing technologies for public use should think about, but in this case it seems they had thought about it, and their only thought was how much cash they could make…

Learning Surveillance Systems

Several years ago, during my postdoctoral work on algorithmic surveillance, I was warning of the problems associated with the development of heuristic (learning) software applied to surveillance systems. I argued that their popularity would grow as the amount of data acquired through growing numbers of cameras outstripped what human operators could possibly watch. Today in CSO Security and Risk, a magazine for ‘security executives’, there is a piece by Eric Eaton which summarises the current state of the art, and which makes exactly that argument as a reason why operators of surveillance systems should consider using such learning software. As Eaton says,

“Various tools have emerged that not only ‘see video better,’ but also analyze the digitized output of video cameras in real time to learn and recognize normal behavior, and detect and alert on all abnormal patterns of activity without any human input.”

Judge, jury... and executioner (IPC magazines)

The key issue here, and one which I mentioned in my post about angry robots the other day, is the automated determination of normality. As the French theorist Michalis Lianos argued in Le Nouveau Contrôle Social back in 2001, the implementation of these kinds of systems threatens to replace social negotiation with a process of judgement, and sometimes even with the automatic execution of the consequences (which in some border control systems can be fatal – Judge Dredd as computer).

Eaton identifies the first part of this conundrum when he says that “the key to successful surveillance is learning normal behaviors”, and he believes that this will enable systems to filter out such activity and “help predict, and prevent, future threats.” He does admit that in many cases the number of actual instances of systems detecting suspicious behaviour is very small, and that the cost of the systems may therefore not be economically justified, but, as usual, there is amongst his ‘5 musts’ for security operators no place for ethics, human rights (or indeed humanity outside of systems operators) or consideration of the wider social impacts of the growth in the use of learning security machines.
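To see how thin a notion machine-learned ‘normality’ really is, here is a minimal sketch of the baseline-and-threshold logic that such tools ultimately rest on – the counts and threshold are invented, and commercial systems are of course far more elaborate, but the underlying move is the same:

```python
# Minimal sketch of 'learning normal behaviour': whatever the recorded past
# looked like becomes 'normal', and deviation from it becomes 'suspicious'.
# The counts and threshold are invented for illustration.

from statistics import mean, stdev

# Hypothetical history: people counted per hour in a monitored space
history = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]

baseline = mean(history)
spread = stdev(history)
THRESHOLD = 3  # flag anything more than 3 standard deviations from the baseline

def is_abnormal(count):
    """'Abnormal' here means nothing more than 'unlike the recorded past'."""
    return abs(count - baseline) > THRESHOLD * spread

for count in [14, 29, 2]:
    print(f"{count} people/hour -> {'ABNORMAL' if is_abnormal(count) else 'normal'}")
```

The judgement of normality is entirely a product of whatever happened to be recorded as the baseline – which is precisely where Lianos’s point about the loss of social negotiation bites.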

No doubt he would say, as most developers and operators do, that this is simply a matter of how systems are used in compliance with best practice and the law, which in itself ignores the possibility of biases already consciously or unconsciously programmed in; but the more that systems become intelligent and are able to make decisions independently of human operators, the thinner this legalistic response becomes. I’m not saying that we are about to have automatic car-park CCTV cameras with guns any time soon, but it’s about time we had some forward-looking policy on the use of heuristic systems before they become as normal as Eaton suggests…

Google and Wikipedia

A nice rant in The Register from Encyclopedia Britannica president Jorge Cauz, who claims that Google deliberately prioritizes Wikipedia entries. The article by Cade Metz goes on to produce a pretty convincing argument to back this up, and the excuse Google offers – that its search algorithms just do their job – is clearly bogus. As Metz reminds us, “those mindless Google algorithms aren’t controlled by mindless Google algorithms. They’re controlled by Google.” This truism is something many people tend to forget when they think about automated systems… And this is, of course, the company that we all use, and which is now trading on that trust to try to persuade us to give it all of our files with its new ‘Gdrive’, part of a cloud-computing initiative that is supposed to see personal computing become simply software.

And it’s not as if Wikipedia is a sound source of information. Take a look at the entry on surveillance – it’s a disjointed mess that shows evidence of all sorts of axe-grinding, self-promotion and personal pathologies, along with some increasingly buried attempts at co-ordination and making sense of it all. I generally tell my students to avoid it. The myth of Web 2.0 is that an infinite number of monkeys with typewriters might be able to produce the complete works of Shakespeare but, in practice, a small number of apes with computers can’t even produce a coherent definition of surveillance. This doesn’t mean I am in favour of the proposals to flag revisions to Wikipedia for approval by editors. Wikis are what they are. All that people, including those who run Wikipedia, need to do is be aware of that and not think that Wikipedia is anything more than it is. Especially Google’s programmers.