Automation and Imagination

Peter Tu writes over on Collective Imagination that greater automation might prevent racism and provide for greater privacy in visual surveillance, offering what he calls ‘race-blind glasses’. The argument is not a new one at all: the claim about racism is almost the same proposition advanced by Professor Gary Marx in the mid-90s about the prospects for face-recognition. Unfortunately, Dr Tu does several of the usual things. First, he argues that ‘the genie is out of the bottle’ on visual surveillance, as if technological development of any kind is an unstoppable linear force that cannot be controlled by human politics. Secondly, he seems to think that technologies are somehow separate from the social context in which they are created and used, when of course technologies are profoundly social. Although he is more cautious than some, this still leads to the rather over-optimistic conclusion, the same one that has been advanced for over a century now, that technology will solve, or at least make up for, social problems. I’d like to think so. Unfortunately, empirical evidence suggests that the reality will not be so simple.

The example Dr Tu gives on the site is a simple binary system: a monitor shows humans as white pixels on a black background, and a line represents the edge of a station platform. It doesn’t matter who the people are, or their race or intent; if they transgress the line, the alarm sounds and the situation can be dealt with. This is what Michalis Lianos refers to as an Automated Socio-Technical Environment (ASTE). Of course, these simple systems are profoundly stupid in a way that the term ‘artificial intelligence’ disguises, and the binary can hinder as much as it can help in many situations. More complex recognition systems are needed if one wants to tell one person from another or identify ‘intent’, and it is here that all those human social problems return with a vengeance.
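The binary logic of such a system really can be written in a few lines. This is a hypothetical sketch of the kind of pixel-and-line alarm described above, not Dr Tu’s actual software:

```python
# A minimal sketch of a binary ASTE-style alarm: people appear as
# white pixels (1s) on a black background (0s), and a fixed column
# marks the platform edge. Anyone past the line triggers the alarm,
# regardless of who they are -- which is exactly the point.

def platform_alarm(frame, edge_column):
    """frame: 2D grid of 0s (background) and 1s ('people').
    Returns True if any person-pixel lies beyond the edge line."""
    for row in frame:
        # Any white pixel at or past the edge column sounds the alarm.
        if any(pixel == 1 for pixel in row[edge_column:]):
            return True
    return False

safe_frame = [
    [0, 1, 0, 0, 0],   # people well back from the edge (column 3)
    [1, 1, 0, 0, 0],
]
risky_frame = [
    [0, 0, 0, 1, 0],   # someone past the edge line
    [0, 0, 0, 0, 0],
]

print(platform_alarm(safe_frame, edge_column=3))   # False
print(platform_alarm(risky_frame, edge_column=3))  # True
```

The stupidity is visible at a glance: the system knows nothing except whether a pixel crossed a column, which is precisely why it is ‘race-blind’ and precisely why it cannot scale to recognition or ‘intent’.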
Research on face-recognition systems, for example, has shown that prejudices can get embedded within programs as much as priorities: the politics of identification and recognition (and all the messiness that this entails) shifts into the code, where it is almost impossible for non-programmers (and often even programmers themselves) to see. And what better justification for the expression of racism can there be than that a suspect has been unarguably ‘recognised’ by a machine? ‘Nothing to do with me, son, the computer says you’re guilty…’ Meanwhile, the idea that ‘intent’ can in any way be determined by superficial visualisation is supported by very little convincing evidence, and yet techno-optimist (and apparently socio-pessimist) researchers push ahead with promoting the idea that computer-aided analysis of ‘microexpressions’ will help tell a terrorist from a tourist. And don’t get me started on MRI…
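How politics ‘shifts into the code’ can be shown with a deliberately toy example; no real system is this simple, and the numbers are invented, but the mechanism is real:

```python
# A toy illustration of bias buried in code. A face-recognition
# decision ultimately reduces to a similarity score compared against
# a threshold. If the model is poorly calibrated for one demographic
# group (say, because of unrepresentative training data) and inflates
# their scores against a watchlist, the same innocent behaviour
# produces different verdicts -- and the political choice is hidden
# inside a single constant that almost nobody will ever inspect.

MATCH_THRESHOLD = 0.8  # who chose this number, and on what grounds?

def flag_as_suspect(similarity_score):
    """Returns True if the score counts as a watchlist 'match'."""
    return similarity_score >= MATCH_THRESHOLD

# Two equally innocent people; the model happens to score
# the second (from a poorly represented group) higher:
print(flag_as_suspect(0.72))  # False
print(flag_as_suspect(0.83))  # True: same innocence, different verdict
```

Nothing in the output reveals that the disparity exists, which is exactly why the machine’s ‘recognition’ feels unarguable.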

I hope our genuine human ‘collective imagination’ can do rather better than this.

Facebook forced to grow up by Canadians

Well, Facebook has finally been forced to grow up and develop a sensible approach to personal data. Previously, as I have documented elsewhere, the US-based social networking site had pretty much assumed ownership of all personal data in perpetuity. However, it has now promised to develop new privacy and consent rules, and ways of allowing site users to choose which data they will allow to be shared with third parties.

So why the sudden change of heart? Well, it’s all down to those pesky Canucks. Yes, where the USA couldn’t be bothered and the EU didn’t even try, the Canadian Privacy Commissioner, Jennifer Stoddart, declared Facebook to be in violation of Canada’s privacy laws. And it turns out that, in complying, it was just easier for Facebook to make wholesale changes for all customers than to try to apply different rules to different jurisdictions.

This suggests an interesting new phenomenon. Instead of transnational corporations always being able to seek out a country with the lowest standards as a basis for compliance on issues like privacy and data protection, a nation with higher standards and an activist regulator has shown itself able to force such a company to adjust its global operations to its much higher standard. This is good news for net users worldwide.

However, we shouldn’t rejoice too much: as Google and Yahoo have shown in the case of China, in the absence of any meaningful internal ethical standards, a big enough market can still impose distinct and separate policies that are far more harmful to the interests of individual users in those nations.

Locational Privacy

The Electronic Frontier Foundation has a very good little report on locational privacy, “the ability of an individual to move in public space with the expectation that under normal circumstances their location will not be systematically and secretly recorded for later use.”

As usual for EFF, it is written in clear, understandable language and is free-to-access and download.

* I’m going to be away in the mountains for a couple of days, so there won’t be any more posts here until Sunday at the earliest… Next week is a slow one here in Japan, as it is O-bon, the Buddhist festival of the dead, when many people go back to their family homes and offices are generally closed for some or all of the week. I won’t be doing much in the way of interviewing, but I still have quite a few interviews and visits from the last two weeks to write up.

US cameras to see the whole of the moon…

There’s been a story developing for a while now on the US-Canadian border. This used to be one of the most casual and friendly of borders, indeed there are families stretched across both sides and in many places the border meant only slight differences in the price of some goods…

But no more. There might be a new president, but Obama seems to be allowing the Bush-era plans for strengthening the border with Canada to continue. There are now CCTV towers being erected, Unmanned Aerial Vehicles (UAVs) patrolling, and new, much stricter passport regulations and customs and immigration checks. As usual, this seems to be being done with a kind of macho indifference to the opinions of the Canadians, which is making the US actions doubly unpopular.

If this seems like some kind of sci-fi nightmare, then the most crazy, Philip K. Dick-style element is to be found on the Michigan-Ontario border at Port Huron, where the Sierra Nevada Corporation, a US military aerospace company, has launched a tethered balloon camera (the company calls it an MAA, or ‘medium altitude airship’) pointed at the town of Sarnia across the border. This isn’t even an official scheme; it’s a private company trying to sell this insanity to the Department of Homeland Security. Naturally, the Mayor and citizens of Sarnia are angry about this international violation of their privacy, and many on both sides of the border see this intensified security as an attack on the trust that exists between Americans and Canadians.

So what are Sarnians doing? They are giving the cameras something to look at, that’s what. More specifically they are planning to drop their pants for a mass ‘moon the balloon’, which in these days of ever more insane surveillance schemes seems just about the only possible response.

‘X-ray vision’ may not be so far away…

Fascinating and disturbing news from the MIT Technology Review blog: a team of researchers appears to have cracked the problem of how to produce cheap, effective Terahertz Wave (TW) cameras and receivers. TW radiation is found between infrared and microwave radiation, and produces what we called in A Report on the Surveillance Society a ‘virtual strip search’: it penetrates under layers of clothing but not much further, and can thus produce images of the body ‘stripped’ of clothing. Thus far, such devices have been used on an experimental basis in some airports and not really any further afield.

This is largely because the way TW radiation has been detected up until now has basically been a bit of a kludge, a side-effect of another process. This has meant that TW equipment has been generally quite large and non-portable (amongst other things).

However, Michel Dyakonov of the University of Montpellier II in France has followed up theoretical work he did in the 1990s, with a new larger team, to show that tiny (nanoscale) ‘field effect transistors’ can both produce and detect TW, although they are still not quite sure exactly how. The details are in Technology Review, but the crucial things for those interested in surveillance are that:

  1. the output is ‘good enough for video’; and
  2. ‘they can be built into arrays using standard silicon CMOS technology’ which means small, cheap (and highly portable) equipment. This could be an add-on to standard video cameras.

I’m getting a genie-out-of-the-bottle feeling with this, but is it really as damaging to personal privacy as it feels? Does this really ‘reveal’ anything truly important? Or will it become something to which we rapidly become accustomed, and indeed with which we quickly get bored? In some cultures, especially those that regard covering the body and modesty as being god-given, this is clearly going to present massive challenges to social and moral norms. It seems to me that there is also an immediate conflict with current constitutional and legal rights in several jurisdictions, not least the US Fourth Amendment right not to be subjected to warrantless searches and Article Eight of the European Convention on the right to privacy.

But it seems that unless such a technology, or at least particular commercial implementations of it, is banned, we’re about to cross another Rubicon almost before we’ve noticed it has happened. Ironically, bans on technology can only really be effective in states where intensive surveillance and state control of behaviour are practised. In other places, I am not sure banning can be effective even if it were desirable: in reality, a ban simply means reserving the use of the technology to criminals, large corporations which can afford to flout laws, and the state.

Contact Point goes live

The controversial new central database of all children in the UK has gone live today for the North-west of England, and will gradually be rolled out across the UK. The £224M ‘Contact Point’, one of the main planks of the ‘Every Child Matters’ initiative, will be accessible to around 390,000 police, social workers and other relevant professionals. It is mainly being promoted as a time-saving initiative, allowing quicker and more informed intervention in the case of vulnerable children, which we all hope it achieves, although this of course depends on the correct information being on the database in the first place. However, the Joseph Rowntree Reform Trust review, Database State, rated the system as ‘red’ for danger in terms of privacy:

“because of the privacy concerns and the legal issues with maintaining sensitive data with no effective opt-out, and because the security is inadequate (having been designed as an afterthought), and because it provides a mechanism for registering all children that complements the National Identity Register.”

Phorm philling

UK satirical magazine, Private Eye, this week brings the ludicrous Stop Phoul Play website to my attention. This is a corporate spin site devoted entirely to defending BT’s underhand and intrusive ‘Phorm’ online advertising technology against what it calls ‘privacy pirates’ who they claim are either being paid or pushed to damage BT.

Those listed as ‘privacy pirates’ include the excellent investigative IT journal, The Register, the Open Rights Group and the brilliant Foundation for Information Policy Research (FIPR), along with numerous bloggers and contributors to web forums. Now, it may be that some corporations with rival technologies would like Phorm to fail, just as Microsoft probably enjoys it a great deal every time Google takes a PR hit (or vice-versa), but to suggest that everyone who makes a criticism of Phorm is secretly part of some conspiracy against BT is, frankly, either stupid or paranoid.

And there are very good reasons for being critical of Phorm: the trojan-like manner of its operation, and the way in which it has been tested without the consent of users. As Private Eye also reminds us, Phorm has landed the UK government in legal trouble with the EU. It hardly needs a conspiracy to make people justifiably annoyed.

This is one of the weirder exercises in PR I have seen, not least because its paranoia and promotion of conspiracies can only be damaging to BT. Thus it is no surprise to find that, according to The Register, it is the product of the fevered imagination of Patrick Robertson, whose previous clients include the lovely General Pinochet and former Tory MP and convicted liar, Jonathan Aitken. So go take a look at Stop Phoul Play (while it still exists…) – it really is quite insane.

Behind the cameras

While the vast majority of those monitoring CCTV screens are probably decent people who stick within the legal and ethical guidelines (such as they are), it is worth remembering that pervasive surveillance offers unprecedented opportunities to perverts, stalkers and sex offenders. This is not just a matter of secret cameras set up by weirdo voyeurs; it includes the people who work with CCTV. Clive Norris and collaborators noted this back in the 1990s in their work on British control rooms, when they reported on operators making private tapes of women they saw in the street. Yesterday, The Daily Telegraph reported on a case in the US where two FBI agents spied on girls changing for a charity fashion show for the underprivileged. They have been charged with criminal violation of privacy, which I am glad to see is a crime in the US. But don’t forget that behind the cameras, if there is anyone these days, is a human being, and that human being has as many flaws and secret desires as anyone else.

Is sousveillance the answer?

Marina Hyde in the Guardian last week wrote a very interesting piece on the ongoing fallout from the death of Ian Tomlinson at the G20 protests in London. She argued that the appearance of mobile telephone camera footage, which revealed more about the way the police treated the passerby, showed that this kind of inverse surveillance (or what Steve Mann calls ‘sousveillance’) was the way to fight the increase of surveillance in British society.

I’ve been suggesting this as one possible strategy for many years too; however, what Hyde didn’t really deal with is the other side of the coin: the fact that the authorities in Britain already know that this is a potential response and are trying to cut down on the use of photographic equipment in public places. Anti-terrorism laws already make it illegal to photograph members of the armed forces, and the new Counter-Terrorism Act contains a provision allowing the police to issue an order preventing photography in particular circumstances. Further, taking an interest in surveillance cameras is now itself regarded by police as suspicious.

The bigger issue here is the fight for control of the means of visibility, and the legitimate production of images. The British state is slowly trying to restrict the definition of what is considered to be ‘normal’ behaviour with regards to video and photography. In the new normality, state video is for the public good, but video by the public is potential terrorism; police photographing demonstrators is important for public order, but demonstrators photographing police is gathering material potentially of use in the preparation of a terrorist act.

However, I am not 100% in favour of the proliferation of cameras, whoever is wielding them. I think it’s essential that we, at this moment in time, turn our cameras on an overintrusive and controlling state. However, a society in which we all constantly film each other is not one in which I would feel comfortable living either. A mutual surveillance society in which cameras substitute for richer social interactions and social negotiation is still a surveillance society, and still a society of diminished privacy and dignity. I worry that sousveillance, rather than leading to a reduction in the intrusiveness of the state, will merely generate more cameras and more watchers, and help reinforce a new normality of being constantly observed and recorded.