Learning Surveillance Systems

Several years ago, during my postdoctoral work on algorithmic surveillance, I was warning of the problems associated with the development of heuristic (learning) software applied to surveillance systems. I argued that its popularity would grow as the amount of data acquired through growing numbers of cameras outstripped what human operators could possibly watch. Today in CSO Security and Risk, a magazine for 'security executives', there is a piece by Eric Eaton which summarises the current state of the art, and which makes exactly that argument as a reason why operators of surveillance systems should consider using such learning software. As Eaton says,

"Various tools have emerged that not only 'see video better,' but also analyze the digitized output of video cameras in real time to learn and recognize normal behavior, and detect and alert on all abnormal patterns of activity without any human input."

Judge, jury... and executioner (IPC magazines)

The key issue here, and one which I mentioned in my post about angry robots the other day, is the automated determination of normality. As the French theorist Michalis Lianos argued in Le Nouveau Contrôle Social back in 2001, the implementation of these kinds of systems threatens to replace social negotiation with a process of automated judgement, and sometimes even with the automated execution of its consequences (which in some border-control systems can be fatal: Judge Dredd as computer).

Eaton identifies the first part of this conundrum when he says that "the key to successful surveillance is learning normal behaviors", and he believes that this will enable systems to filter out such activity and "help predict, and prevent, future threats." He does admit that in many cases the number of actual instances of systems detecting suspicious behaviour is very small, and that the cost of such systems may therefore not be economically justified. But, as usual, there is no place to be found amongst his '5 musts' for security operators for ethics, human rights (or indeed humanity beyond the systems operators themselves), or consideration of the wider social impacts of the growth in use of learning security machines.
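It is worth being concrete about what "learning normal behaviors" actually means in practice: at its core it is statistical anomaly detection. Below is a minimal sketch in Python of the idea, under my own assumptions; the feature names, numbers, and threshold are purely illustrative, not Eaton's or any vendor's. The system fits a baseline from observed activity and flags whatever deviates too far from it.

```python
import numpy as np

# Hypothetical feature vectors summarising camera activity per time window
# (e.g. motion magnitude, number of moving objects, mean dwell time).
# All values here are illustrative assumptions, not from any real system.
rng = np.random.default_rng(0)
normal_activity = rng.normal(loc=[5.0, 2.0, 30.0], scale=[1.0, 0.5, 5.0],
                             size=(1000, 3))

# "Learning normal behaviour": fit a simple per-feature Gaussian baseline.
mean = normal_activity.mean(axis=0)
std = normal_activity.std(axis=0)

def is_anomalous(window, threshold=4.0):
    """Flag a window whose z-score exceeds the threshold on any feature."""
    z = np.abs((window - mean) / std)
    return bool((z > threshold).any())

# A window resembling the training data passes; an outlier is flagged.
print(is_anomalous(np.array([5.2, 1.8, 28.0])))   # False: "normal"
print(is_anomalous(np.array([25.0, 9.0, 120.0]))) # True: "abnormal"
```

Notice what the sketch makes plain: "abnormal" here is a purely statistical judgement about distance from a learned baseline. The model has no concept of why something deviates, which is exactly the problem with the automated determination of normality described above.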

No doubt he would say, as most developers and operators do, that this is simply a matter of how systems are used in compliance with best practice and the law. That response already ignores the possibility of biases being programmed in, consciously or unconsciously; and the more that systems become intelligent and able to make decisions independently of human operators, the thinner this legalistic answer becomes. I'm not saying that we are about to have automatic car-park CCTV cameras with guns any time soon, but it's about time we had some forward-looking policy on the use of heuristic systems before they become as normal as Eaton suggests…

Author: David

I'm David Murakami Wood. I live on Wolfe Island, in Ontario, and am Canada Research Chair (Tier II) in Surveillance Studies and an Associate Professor at Queen's University, Kingston.
