There was an interesting article on the Guardian website this weekend, which highlights what seems to me neither the genuinely socially revolutionary nor the threatening aspects of the ‘Internet of Things’ and smart everything, but the general lack of inspiration in so much of what developers are presenting as visions. Why does the Internet of Things so frequently look so banal and so… crap?
There seems to be a pervasive failure of imagination in many popular portrayals of the future, as if imagining the future is always an exercise in nostalgia. The future really ain’t what it used to be, back in the day when energy was going to be too cheap to meter, when we wouldn’t need to work and everything menial would be done by robots, when we’d all have our own personal helicopter (or even spaceship) and, of course, when there would be an end to war. The breakdown of that post-WW2 optimism, and with it the faith in either (actually existing) capitalism or communism to deliver, hasn’t been replaced by revolutionary fervour or brave new visions, but by pathetic ideas like toothbrushes that tell us how well we’ve cleaned our teeth. The future is being created by an unholy combination of committees of marketing hacks and security wonks, and we need to take it back…
I’ve been thinking a lot recently about why it is that implanted tracking devices have never really taken off in humans. Just a few years ago, there were all kinds of people laying out rather teleological versions of technological trajectories that led inevitably to mass human implantation – and not just the US Christian right, who saw RFID as the fulfilment of biblical prophecy.
I think there are many reasons, including negative public reaction (implants really are a step too far, even for the ‘nothing to hide, nothing to fear’ crowd) and the fact that a lot of the promotion of human RFID implants was actually the PR work of one very loud company (Verichip) and did not have much basis in either social reality or market research. But the other major reason is to do with other technological developments, particularly in wearable computing and sensor networks. In most cases, implants solve a problem that doesn’t exist: that people would want to remove a tracking device that might be there for very good (though I am not saying indisputably good) reasons, usually medical ones. And where there are no good reasons, there’s probably no case for tracking at all.
So devices like this – temporary, printed or stick-on and removable – are far more likely to become the solution to any actual problem of tracking or monitoring for medical reasons. And the relative ease with which they can be removed by the wearer does at least mean that there is some room for negotiation and consent at more than just one point in the process. Of course, such removable, wearable tracking devices are still not free of ethical and political considerations – and some may argue that the very appearance of consent actually hints at the generation of a greater conformity and self-surveillance – but the issues are of a slightly different nature to those raised by implanted devices.
There is a fascinating little piece on Bldg Blog about ‘security geotextiles’ and other actual and speculative surveillance systems that are built into, underlie or encompass whole landscapes. The argument seems to support what I have been writing and speaking about recently on ‘vanishing surveillance’ (I’ll be speaking about it again in Copenhagen at the first Negotiating (In)visibilities conference in February): the way in which, as surveillance spreads and becomes more intense, moving towards ubiquitous, pervasive or ambient surveillance, its material manifestations have a tendency to disappear.
The piece starts with a standard kind of alarmism, and the assumption that such things are malevolent is an understandable first impression, perhaps not surprising given that the piece is inspired by yet another security tech development – this time a concealed perimeter surveillance system from the Israeli firm GMax. Perhaps if it had begun with urban ubiquitous sensory systems in a universal design context, it might have taken a very different direction. However, what’s particularly interesting about the piece is that it doesn’t stop there, but highlights the possibilities for resistance and subversion using the very same ubiquitous technologies.
But whether hegemonic or subversive, the overall trajectory that the post outlines, of a move towards a machine-readable world – indeed a world reconfigured for machines – is pretty much indisputable…
I missed putting this up last week, but MIT’s Technology Review blogs had a good summary of a talk by Intel’s Justin Rattner, who was arguing for a new era of more ‘friendly’ surveillance. By this he means an emphasis on ubiquitous computing and sensing technologies, or what the Europeans call ‘ambient intelligence’, for personal and personalized assistance and support. He is quoted in the piece as saying, “Future devices will constantly learn about you, your habits, how you go about your life, your friends. They’ll know where you’re going, they’ll anticipate, they’ll know your likes and dislikes.” Rattner himself was wearing some new ‘intelligent socks’ (well, sensors in his socks) during the talk, which can sense whether the wearer has fallen or experienced some other unexpected movement. Of course, the problem with this, apart from the issue of whether we want even our socks to anticipate our movements and more, is that the constant stream of data needed to inform these intelligent systems has to go somewhere, and that ‘somewhere’ is ‘the cloud’ – i.e. the most intimate data about you, whatever level of security is in place, would be out there, and far more accessible than the forms of biomedical information currently held by, for example, our doctors.
There’s an amusing article with a serious point to it by the ever-acerbic Charlie Brooker on The Guardian website, on the potential social transformations of so-called ‘augmented reality’ technologies. The idea that ‘augmented reality’ will inevitably diminish or dehumanise as much as it adds or extends is one that has been made many times before, but usually in regard to the ‘subject’, i.e. the person experiencing the augmented reality. What Brooker’s satirical article is saying is that the humanity potentially diminished in these systems is that of ‘others’, who may be effectively hidden by the information that the person using AR desires, and perhaps even deliberately so. I can see this. I think it’s actually a real possibility, and the humour of Brooker’s approach shouldn’t disguise the fact that he’s an incredibly perceptive commentator.