I’m far from the only academic studying smart cities and big data-driven urbanism. One of the people who has most inspired my work (in many ways) over the years is Rob Kitchin – sometimes I even spell his name right! Rob has this fantastic new book, The Data Revolution, coming out in September from Sage, and very helpfully he has put the bibliography, and a lot of other material, online. This is the way scholarship should be. Too many of us still guard our ‘secret’ sources and keep our work-in-progress close to our chests. But if we want people to read what we do, think, and take action, then more open scholarship is the way to go.
On the ‘Right to Be Forgotten’
Viktor Mayer-Schönberger argued today both that there’s really not a lot new in the European Court of Justice’s decision ordering Google to adjust its search results to accommodate the right to privacy of one individual, and that compliance won’t really be a problem, because Google already handles loads of copyright removal requests very quickly. The decision has nonetheless sparked some rather silly comments all over the media, usually from the neoliberal and libertarian right, claiming that this is a kind of censorship or that it will open the door to states being able to control search results.
I think it’s vital to remember that there’s an obvious difference between personal privacy, corporate copyright and state secrecy. It isn’t helpful to conflate these as if each somehow sets a precedent for the others (and I should be clear that Mayer-Schönberger is not doing this; he’s merely pointing out the ease with which Google already accommodates copyright takedown notices, to show that complying with this ruling is neither hard nor expensive for them). State attempts to remove things the state finds inconvenient are not the same as the protection of personal privacy, and neither is the same as copyright. This decision is not a precedent for censorship by governments or control by corporations, and we should guard very strongly against any attempts to use it in this way.
Google’s algorithms already do a whole range of work that we don’t see, and to suggest that they are (or were) open, free and neutral, and will now be ‘biased’ or ‘censored’ after this decision, is only testament to how much we rely on Google unthinkingly. This is where I start to part company with Mayer-Schönberger: in his dismissal of the importance of this case as just the same as a records-deletion request in any other medium. It isn’t; it’s much more significant.
You are still perfectly free to make the effort to consult public records about the successful complainant in the case (or anyone else) in the ways you always have. The case was not brought against those holding the information or even those making it public. What the case sought to argue, and what the court’s verdict implies, is that there are good social reasons to limit the kind of comprehensive and effortless search that Google and other search engines provide when it comes to the personal history of private individuals – not to allow one thing that is over and done with to continue to define the public perception of a person anywhere in the world, potentially for the rest of their life (and beyond). Something being public is not the same as something being easily and instantaneously available to everyone forever. In essence, it provides a kind of analogue, for personal data, of the right to privacy in public places. And it also recognizes that the existence and potential of any information technology should not be what defines society; rather, social priorities should set limits on how information technologies are used.
Personally, I believe that this is a good thing. However, as the politics of information play out over the next few years, I also have no doubt that it’s something that will come up again and again in courts across the world…
PS: I first wrote about this back in 2011 here – I think I can still stand behind what I thought then!
The computer did it…
It seems that ‘the computer did it’ is becoming as much a cliché in the early twenty-first century as ‘the butler did it’ was 100 years ago. There’s an interesting link by Cory Doctorow on bOING bOING to a blog post by one Pete Ashton about the already infamous ‘Keep Calm and Rape A Lot’ T-Shirts being sold through Amazon’s marketplace.

Only the explanation given is incomplete in important ways. This is not to encourage people to attack Pete, who, as his post explains, is in no way connected to or responsible for the T-shirts or the company that produces them. But ‘it was an algorithm that did this and the company didn’t know what was being produced until it was ordered’ is inadequate as an explanation. Here’s why.
1. This was not simply a product of computer generation, nor do algorithms just spring fully formed from nature. All algorithms are written by humans (or by other programs, which are in turn produced by humans), and the use of an algorithm does not remove the need to check what the algorithm is (capable of) generating.
2. The algorithm for generating these T-shirt slogans drew on a specific list of verbs (621 of them, in fact). Even if these were generated by selecting all the four- or five-letter words from a dictionary of some sort, it’s not that hard to check a list of 621 verbs for words that will be offensive.
3. The words following the verb were even less random than this. In fact, they are specifically A LOT, HER, IN, IT, ME, NOT, OFF, ON, OUT and US. Several people have checked this. There are some very interesting words missing, notably HIM. This list is clearly a human selection, and its choices reflect, if not deliberate misogyny, at the very least a patriarchal culture. As the sketch below shows, both generating such slogans and vetting the word lists are trivial.
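Here is a minimal Python sketch of how such a generator might work, and how cheaply it could have been vetted. To be clear, the verb list and blocklist are stand-ins of my own invention; only the ten object words come from the reported case:

```python
# Hypothetical reconstruction of a 'Keep Calm' slogan generator.
# VERBS stands in for the real 621-verb list; OBJECTS is the actual
# list of words reported to follow the verb.
VERBS = ["DANCE", "SING", "LAUGH", "RAPE"]          # illustrative only
OBJECTS = ["A LOT", "HER", "IN", "IT", "ME", "NOT",
           "OFF", "ON", "OUT", "US"]

# Checking 621 verbs against even a short blocklist is a one-line test,
# which is the point: 'the computer did it' is no excuse.
BLOCKLIST = {"RAPE", "KILL", "PUNCH"}               # illustrative only

def slogans():
    for verb in VERBS:
        if verb in BLOCKLIST:
            continue  # the vetting step the company never bothered with
        for obj in OBJECTS:
            yield f"KEEP CALM AND {verb} {obj}"

for slogan in slogans():
    print(slogan)
```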
Algorithms, as cultural products, are always political. They are never neutral, even when they appear to be doing entirely unremarkable things. The politics of algorithms may be entirely banal most of the time, but in some cases, as in this one, it becomes accidentally visible. T-shirts may be a minor issue, but what’s much more important is not to accept ‘the computer did it’ as an infallible explanation when it comes to rather more consequential things: all the way from insurance and credit rating, through police stop-and-search and no-fly lists, to assassination by drone. Otherwise, before we know it, the opportunity to question the politics will be buried in code and cabling.
The Unbearable Shallowness of Technology Articles… or, what Facebook Graph Search really means.
Wired has a feature article about Facebook’s new search tool. The big problem with it is its vomit-inducing fawning over Facebook’s tech staff. In trying to make this some kind of human-interest story – well, actually the piece starts off with Mark Zuckerberg’s dog, you see, he is human after all – of heroic tech folk battling indomitable odds to create something amazing – what in science fiction criticism would be called an Edisonade – it almost completely muffles what a piece like this should be foregrounding: what this system is, what it has been programmed to do and where it’s going.
And this is what Graph Search does, very simply: it is a search engine that enables complex, natural-language interrogation of data primarily, but not exclusively, from Facebook. So instead of trying to second-guess what Google might understand when you want to search for something, you would simply be able to tell it what you want. And because this is primarily ‘social’ – that is, about connection – and because you should already have given up enough information to Facebook for it to ‘graph’ you, so that it knows you, the results should supposedly be the kind of things you really wanted from your query. Supposedly. An FB developer in the article describes this as “a happiness-inducing experience” and further says, “We’re trying to facilitate good things.” However, what this ‘happiness’ means, just like what ‘friendship’ means in the FB context, and what “good” means, just like the use of ‘evil’ in Google’s motto, is rather different from how we might understand such terms outside these contexts.
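To make the mechanics concrete, here is a toy sketch of what answering such a query might involve. Nothing here reflects Facebook’s actual implementation – the data model, names and matching logic are all my own invention – but it illustrates the principle: the ‘natural language’ query decomposes into structured filters over profile data that users have already handed over.

```python
# A toy social graph: a handful of profiles with the kind of attributes
# users volunteer to Facebook. Entirely invented for illustration.
people = {
    "alice": {"gender": "female", "status": "single", "city": "Boston",
              "likes": {"jazz", "running"}},
    "bea":   {"gender": "female", "status": "married", "city": "Boston",
              "likes": {"jazz"}},
    "carol": {"gender": "female", "status": "single", "city": "Oslo",
              "likes": {"indie"}},
}

def graph_search(my_city, gender=None, status=None, like=None):
    """Resolve a parsed query by filtering profiles, predicate by predicate."""
    for name, profile in people.items():
        if gender and profile["gender"] != gender:
            continue
        if status and profile["status"] != status:
            continue
        if profile["city"] != my_city:   # crude stand-in for 'near me'
            continue
        if like and like not in profile["likes"]:
            continue
        yield name

# "single women who live near me who like jazz"
print(list(graph_search("Boston", gender="female", status="single", like="jazz")))
# -> ['alice']
```

Notice that nothing here is ‘search’ in the Google sense: it is simply a filter over data the users themselves supplied, which is exactly what makes examples like the one below possible.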
In the article, one example demonstrated by the developer is as follows:
[He] then tried a dating query — “single women who live near me.” A group of young women appeared onscreen, with snippets of personal information and a way to friend or message them. “You can then add whatever you want, let’s say those who like a certain type of music,” [he] said. The set of results were even age-appropriate for the person posing the query.
So when Mark Zuckerberg is quoted in the article saying that Graph Search is “taking Facebook back to its roots”, he seems to mean creeping on girls, as was, let us not forget, the main intention of the early Harvard version. Doesn’t this generate exactly the concern that the notorious ‘Girls Around Me’ app encountered? As the title of my favourite tumblr site has it, this isn’t happiness. Or it’s the happiness of the predator, the pervert and the psychopath.
But more fundamentally, this isn’t about privacy, or even online stalking. In fact, in many ways, both are side-issues here. This is about control and access: control over my information and how I access other information, not just on Facebook but in general. To me, the plans outlined for Graph Search look worrying, even leaving aside my idea of what would constitute happiness, because they have nothing to do with how I use Facebook or how I would want to use it. I don’t use Facebook as my gateway to the Web, and I am never going to. As Eli Pariser pointed out in The Filter Bubble a couple of years back, that would both limit my experience of the Web (and increasingly, therefore, of my communications more broadly) and give one organisation far too much power over both that experience and the future of the Web. But this does seem to be how Facebook wants it to be. Further, I suspect that, just like Bill Gates before him with his .NET initiative and other schemes, and just like the walled-garden, locked-in hardware that Apple produces, Zuckerberg is more interested in Facebook colonizing the entire online experience, layering itself so entirely, tightly and intimately over the online world that the difference between that world and Facebook would seem all but invisible to the casual user.
These developments are dramatic enough in themselves. Never mind fluffy stories of heroic techies and their canine sidekicks.
Make like a Dandy Highwayman to beat Face Recognition Software
Spoofing biometrics has become a mini-industry, as one would expect now that the technologies of recognition are becoming more pervasive. And not all of these methods are high-tech. Tsutomu Matsumoto’s low-tech ‘gummy fingerprint‘ approach to beating fingerprint recognition is already quite well known, for example. I’ve also seen him demonstrate very effective iris-scan spoofing using cardboard irises.
Facial recognition would seem the most obvious target for such spoofing given that it is likely to be the system most used in public or other open spaces. And one of the most ingenious systems I have seen recently involves a few very simple tips. Inspired by the increasing hostility of legal systems to masks and head coverings, CV Dazzle claims to be an ‘open-source’ camouflage system for defeating computer vision.
Among the interesting findings of the project, which started as part of the Interactive Telecommunications Program at NYU, is that complex, high-fashion, disguise-type attempts to beat facial recognition did not work as well as simpler, flat camouflage approaches. The suggested solution thus involves many of the same principles as earlier forms of camouflage: breaking up surface patterns and disguising surface topography. It uses startling make-up techniques which look a bit like 80s New Romantic face painting as deployed by Adam and the Ants – hence the title of this post! The system concentrates especially on the areas of the face on which most facial recognition software depends, such as the bridge of the nose, the cheekbones and the depth of the eye sockets.
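If you want to test whether a given make-up pattern actually defeats detection, one rough-and-ready check is to run a photograph through a standard face detector and count what it finds. This sketch is my own, not part of the CV Dazzle project; it uses OpenCV’s stock Viola-Jones cascade, which keys on exactly the kind of light-and-dark patterns around the eyes and nose bridge that the camouflage is designed to break up (the image filenames are hypothetical):

```python
import cv2

# Stock Viola-Jones frontal-face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def faces_found(image_path):
    """Return how many face-like regions the detector sees in an image."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# If the camouflage works, the detector should find fewer (ideally zero) faces.
print("plain:", faces_found("plain.jpg"))      # an undisguised photograph
print("dazzled:", faces_found("dazzled.jpg"))  # the same face, made up
```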

So, will we see a revival of the Dandy Highwayman look as a strategy of counter-surveillance? Or more likely, will social embarrassment and the desire to seem ‘normal’ mean that video surveillance operators have a relatively easy life?
