OK, so automated surveillance systems are always right, aren’t they? I mean, they wouldn’t allow systems to be put into place that didn’t work, would they?
Image from T-Redspeed system (KRIA)
That was probably the attitude of many Italians who were supposedly caught jumping red lights by the new T-Redspeed looped-camera system manufactured by KRIA. However, the BBC is reporting today that the system had been rigged by shortening the traffic-light sequence, and that hundreds of officials were involved in a scam that earned them a great deal of money.
Now, the advocates of automated surveillance will say that there was nothing wrong with the technology itself, and that may be true in this case. But technologies exist within social systems and, unless you try to remove people altogether or replace them with heuristic systems (both of which have their own ethical and practical problems), these kinds of things are always going to happen. It's something those involved in assessing technologies for public use should think about, but in this case it seems they had thought about it, and their only thought was how much cash they could make…
Several years ago, during my postdoctoral work on algorithmic surveillance, I was warning of the problems associated with the development of heuristic (learning) software applied to surveillance systems. I argued that their popularity would grow as the amount of data acquired through growing numbers of cameras outstripped what human operators could possibly watch. Today in CSO Security and Risk, a magazine for 'security executives', there is a piece by Eric Eaton which summarises the current state of the art, and which makes exactly that argument as a reason why operators of surveillance systems should consider using such learning software. As Eaton says,
"Various tools have emerged that not only 'see video better,' but also analyze the digitized output of video cameras in real time to learn and recognize normal behavior, and detect and alert on all abnormal patterns of activity without any human input."
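To make the idea concrete, here is a deliberately minimal sketch, not drawn from Eaton's piece or from any real product, of what 'learning normal behaviour' can amount to in its simplest statistical form: record event counts during a training period, treat their spread as the baseline, and flag anything that strays too far from it. The counts, threshold and the is_abnormal helper are all hypothetical.

```python
# A minimal, hypothetical sketch of the "learn normal, flag abnormal" idea:
# an hour's motion-event count is "normal" if it falls within a few standard
# deviations of what was seen during a training period. Real video analytics
# are far more elaborate; this only illustrates the principle that "normal"
# is whatever the training data happened to contain.

import statistics

# Hypothetical history: motion events per hour recorded over a "normal" period.
training_counts = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13, 17, 12]

mean = statistics.mean(training_counts)
stdev = statistics.stdev(training_counts)

def is_abnormal(count: int, threshold: float = 3.0) -> bool:
    """Flag an hour whose event count deviates strongly from the learned baseline."""
    return abs(count - mean) > threshold * stdev

# New observations: a quiet hour, a typical hour and a busy one.
for count in (2, 14, 40):
    label = "ALERT" if is_abnormal(count) else "ok"
    print(f"{count:>3} events/hour -> {label}")
```

What even this toy version makes obvious is that 'normal' is nothing more than whatever happened during the training period, which is exactly the problem of the automated determination of normality discussed below.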
Judge, jury... and executioner (IPC Magazines)
The key issue here, and one which I mentioned in my post about angry robots the other day, is the automated determination of normality. As the French theorist Michalis Lianos argued in Le Nouveau Contrôle Social back in 2001, the implementation of these kinds of systems threatens to replace social negotiation with a process of judgement, and sometimes even the automatic imposition of consequences (which in some border-control systems can be fatal: Judge Dredd as computer).
Eaton identifies the first part of this conundrum when he says that "the key to successful surveillance is learning normal behaviors", and he believes that this will enable systems to filter out such activity and "help predict, and prevent, future threats." He does admit that in many cases the number of actual instances of systems detecting suspicious behaviour is very small, and that the cost of such systems may therefore not be economically justified. But, as usual, there is no place among his '5 musts' for security operators for ethics, human rights (or indeed humanity outside of systems operators), or any consideration of the wider social impacts of the growing use of learning security machines.
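A hypothetical back-of-the-envelope calculation shows why that rarity matters: if genuinely suspicious behaviour is very infrequent, even a detector with flattering accuracy figures produces far more false alerts than real ones, and every alert costs operator time. All the numbers below are invented purely for illustration.

```python
# Hypothetical illustration of why rare events make automated flagging costly:
# with made-up rates, the overwhelming majority of alerts are false alarms.

events_per_day = 100_000          # observed behaviours (hypothetical)
suspicious_rate = 1 / 50_000      # how rarely anything genuinely suspicious occurs
true_positive_rate = 0.95         # detector catches most real incidents
false_positive_rate = 0.01        # and wrongly flags 1% of normal behaviour

suspicious = events_per_day * suspicious_rate
normal = events_per_day - suspicious

true_alerts = suspicious * true_positive_rate
false_alerts = normal * false_positive_rate

print(f"True alerts per day:  {true_alerts:.1f}")
print(f"False alerts per day: {false_alerts:.1f}")
print(f"Share of alerts that are genuine: {true_alerts / (true_alerts + false_alerts):.1%}")
```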
No doubt he would say, as most developers and operators do, that this is simply a matter of how systems are used in compliance with best practice and the law. That response in itself ignores the possibility of consciously or unconsciously programmed-in biases, and the more that systems become intelligent and able to make decisions independently of human operators, the thinner this legalistic answer becomes. I'm not saying that we are about to have automatic car-park CCTV cameras with guns any time soon, but it's about time we had some forward-looking policy on the use of heuristic systems before they become as normal as Eaton suggests…