Peter Tu writes over on Collective Imagination that greater automation might prevent racism and provide for greater privacy in visual surveillance, offering what he calls ‘race-blind glasses’. The argument is not a new one at all; indeed, the claim about racism is almost the same proposition advanced by Professor Gary Marx in the mid-90s about the prospects for face-recognition.

Unfortunately, Dr Tu does several of the usual things. First, he argues that ‘the genie is out of the bottle’ on visual surveillance, as if technological development of any kind were an unstoppable linear force that cannot be controlled by human politics. Second, he seems to treat technologies as somehow separate from the social context in which they are created and used – when of course technologies are profoundly social. Although he is more cautious than some, this still leads to the rather over-optimistic conclusion, the same one that has been advanced for over a century now, that technology will solve – or at least make up for – social problems. I’d like to think so. Unfortunately, empirical evidence suggests that the reality will not be so simple.

The example Dr Tu gives on the site is a simple binary system: a monitor shows humans as white pixels on a black background, with a line representing the edge of a station platform. It doesn’t matter who the people are, or their race or intent – if they transgress the line, the alarm sounds and the situation can be dealt with. This is what Michalis Lianos refers to as an Automated Socio-Technical Environment (ASTE); a toy sketch of such a system appears at the end of this post.

Of course, these simple systems are profoundly stupid in a way that the term ‘artificial intelligence’ disguises, and the binary can hinder as much as it can help in many situations. More complex recognition systems are needed if one wants to tell one person from another or identify ‘intent’, and it is here that all those human social problems return with a vengeance. Research on face-recognition systems, for example, has shown that prejudices can get embedded within programs as much as priorities can; in other words, the politics of identification and recognition (and all the messiness that this entails) shifts into the code, where it is almost impossible for non-programmers (and often even programmers themselves) to see it. And what better justification for the expression of racism can there be than that a suspect has been unarguably ‘recognised’ by a machine? ‘Nothing to do with me, son, the computer says you’re guilty…’

As for the idea that ‘intent’ can be determined in any way by superficial visualisation, the evidence for it is thin and far from convincing, and yet techno-optimist (and apparently socio-pessimist) researchers push ahead with promoting the idea that computer-aided analysis of ‘microexpressions’ will help tell a terrorist from a tourist. And don’t get me started on MRI…
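For concreteness, here is a toy sketch of the kind of ‘tripwire’ system described above. It is purely illustrative: the frame size, the position of the platform-edge line, and the function name are all invented for the example, not taken from Dr Tu’s work. The point is how little the system asks – any white pixel past the line is treated the same, which is both its appeal as a ‘race-blind’ mechanism and the source of its stupidity.

```python
# Toy illustration of a binary 'tripwire' ASTE: people appear as white (True)
# pixels in a binary foreground mask, and the alarm fires whenever any
# foreground pixel falls beyond the platform-edge line. All parameters here
# are invented for the sketch.

import numpy as np

def platform_edge_alarm(mask: np.ndarray, edge_row: int) -> bool:
    """Return True if any foreground pixel lies beyond the platform edge.

    mask     -- 2-D boolean array, True where the camera sees a person
    edge_row -- row index of the line marking the platform edge;
                everything at or below it counts as the track side
    """
    # The system is 'blind' by construction: it only asks whether any white
    # pixel has crossed the line, and nothing else about who or why.
    return bool(mask[edge_row:, :].any())

if __name__ == "__main__":
    frame = np.zeros((120, 160), dtype=bool)          # empty scene
    print(platform_edge_alarm(frame, edge_row=100))   # False: no alarm

    frame[105, 80] = True                             # a 'person' pixel past the edge
    print(platform_edge_alarm(frame, edge_row=100))   # True: alarm sounds
```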
I hope our genuine human ‘collective imagination’ can do rather better than this.