‘Friendly’ Surveillance and Intelligent Socks

I missed putting this up last week, but MIT’s Technology Review blogs had a good summary of a talk by Intel’s Justin Rattner, who was arguing for a new era of more ‘friendly’ surveillance. By this he means an emphasis on ubiquitous computing and sensing technologies, or what the Europeans call ‘ambient intelligence’, for personal and personalized assistance and support. He is quoted in the piece as saying: “Future devices will constantly learn about you, your habits, how you go about your life, your friends. They’ll know where you’re going, they’ll anticipate, they’ll know your likes and dislikes.” Rattner himself was wearing some new ‘intelligent socks’ (well, sensors in his socks) during the talk, which can sense whether the wearer has fallen or made some other unexpected movement. Of course, the problem with this, quite apart from the question of whether we want even our socks to anticipate our movements and more, is that the constant stream of data needed to inform these intelligent systems has to go somewhere, and that ‘somewhere’ is ‘the cloud’. In other words, the most intimate data about you, whatever level of security is in place, would be out there, far more accessible than the forms of biomedical information currently held by, for example, our doctors.

Another day, another ‘intelligent’ surveillance system…

Yet another so-called ‘intelligent’ surveillance system has been announced. This one comes from Spain and is designed to detect abnormal behaviour on and around pedestrian crossings.

Comparison between the reasoning models of the artificial system and a theoretical human monitor in a traffic-based setting. (Credit: ORETO research group / SINC)

The article in Science Daily dryly notes that it could be used “to penalise incorrect behaviour”… Now, I know there’s nothing intrinsically wrong with movement detection systems, but the trend towards the automation of fines and punishment, and indeed of everyday life and interaction more broadly, is surely not one that we should be encouraging. I’ve seen these kinds of systems at work in demonstrations (most recently at the research labs of Japan Railways, more of which later…) but, despite their undoubtedly impressive capabilities and worthwhile potential, they leave me with a sinking feeling, a kind of mourning for the further loss of little bits of humanity. Maybe that’s just a personal emotion, but I don’t think we take enough account of either the generation or the loss of emotions in response to increasing surveillance and control.

Further Reference: David Vallejo, Javier Albusac, Luis Jiménez, Carlos González and Juan Moreno (2009) ‘A cognitive surveillance system for detecting incorrect traffic behaviors’, Expert Systems with Applications 36 (7): 10503–10511.