Spain vs. Google or Freedom of Expression vs. the Right to Be Forgotten

Several outlets are reporting today on an interesting clash between the Spanish courts and Google. The argument is over whether Google should carry articles that Spanish citizens have challenged as breaching their privacy. The Spanish data protection commissioner has won a court injunction over the publication of material that is being challenged under privacy legislation.

Clearly there are two main issues here. One is the specific question of whether Google, as a search engine, can be considered a publisher or is, as it claims, simply an intermediary which publishes nothing and only links to items published by others. This is important for Google as a business and for those who use it.

But the other, deeper question of what is going on here is more interesting: a struggle between two kinds of rights. The right to freedom of expression, to be able to say what one likes, is a longstanding one in democracies; however, it is almost nowhere absolute. The problem in a search-engine-enabled information age is that the exceptions, which relate to the (un)truth of published allegations (questions of libel and false accusation), to privacy and to several other values, are increasingly challenged by the ability of people in one jurisdiction to access the same (libellous, untrue or privacy-destructive) information from outside that jurisdiction via the Internet.

In Spain, the question has apparently increasingly been framed in terms of a new ‘right to be forgotten’ or ‘right to delete’. This is not entirely new – certainly police records in many countries have elements that are time-limited, but these kinds of official individually beneficial forgettings are increasingly hard to maintain when information is ‘out there’ proliferating, being copied, reposted and so on.

This makes an interesting contrast with the Wikileaks affair. When it comes to the state and corporations, questions of privacy and individual rights should not be used even analogically. The state may assert ‘secrecy’, but the state has no ‘right of privacy’. Secrecy is an instrumental concept relating to questions of risk. Corporations may assert ‘confidentiality’, but this is a question of law and custom relating to the regulation of the economy, not to ‘rights’.

Privacy is a right that can only be attached to (usually) human beings in their unofficial thoughts, activities and existence. And the question of forgetting is really a spatio-temporal extension of the concept of privacy necessary in an information society. Because the nature of information and communication has changed, privacy has to be considered over space and through time in a way that was not really necessary (or at least not for so many people so much of the time) previously.

This is where Google’s position comes back into play. Its insistence on neutrality is premised on a libertarian notion of information (described by Erik Davis some time ago as a kind of gnostic American macho libertarianism that pervades US thinking on the Internet). But if this is ‘freedom of information’ as usually understood in democratic societies, it does have limits and an extreme political interpretation of such freedom cannot apply. Should Google therefore abandon the pretence of neutrality and play a role in helping ‘us’ forget things that are untrue, hurtful and private to individuals?

The alternative is challenging: the idea that not acting is a morally ‘neutral’ position is clearly incorrect, because inaction presages a new global norm of information flow premised on not forgetting, and on the collapse of differing jurisdictional norms of privacy. In this world, whilst privacy may not be dead, the law can no longer be relied on to enforce it, and other methods, from simple personal data management to more ‘outlaw’ technological means of enforcement, will increasingly be the standard for those who wish to maintain privacy. This suggests that money and/or technical expertise will be what allow one to be forgotten, and those without either will be unable to have meaningful privacy except insofar as they are uninteresting or unnoticed.

Facebook face-recognition

Reports say that US users can now use an automated face-recognition function to tag people in photos posted to the site. To be clear, this is not the already dubious practice of someone else tagging you in a photo, but an automated service: upload a picture and the system will search around, identifying and tagging.

As a Facebook engineer is quoted as saying:

“Now if you upload pictures from your cousin’s wedding, we’ll group together pictures of the bride and suggest her name… Instead of typing her name 64 times, all you’ll need to do is click ‘Save’ to tag all of your cousin’s pictures at once.”

Once again, just as with Facebook Places, the privacy implications of this do not appear to have been thought through (or more likely just disregarded) and it’s notable that this has not yet been extended to Canada, where the federal Privacy Commissioner has made it very clear that Facebook cannot unilaterally override privacy laws.

Let’s see how this one plays out, and how much, once again, Facebook has to retrofit privacy settings…

Czech Republic operating illegal ‘gay’ screening

The Czech Republic is violating the European Convention on Human Rights by using a controversial and highly privacy-invasive method of screening those seeking asylum on grounds of being persecuted for their sexual orientation.

A BBC report (via BoingBoing) says that the country’s interior ministry has been criticised by the EU Agency for Fundamental Rights for using a ‘penile plethysmograph’ on such claimants.

This so-called ‘phallometric test’ uses sensors attached to the penis which measure blood flow when different images are shown. The evidence from such tests is not recognised by courts in many countries due to its many problems, including a lack of standardization and the highly subjective interpretation of results.

Spying on Your Neighbours

One of the characteristics one would expect in a ‘surveillance society’ is that surveillance would come to be seen as a more ‘normal’ reaction to problems at all levels of society. So we start to see instructive stories about surveillance in all kinds of unexpected places. The ‘Home and Garden’ section of the Seattle Times newspaper carries a very interesting report this week on the use of relatively low-cost surveillance systems (some involving digital movement detection) by ordinary middle-class homeowners to monitor their property and, more specifically, to catch their neighbours doing very unneighbourly things, such as tossing dog faeces into their gardens or trying to peek in through their windows.

In most of these cases, it seems the surveillance is primarily about defending property and based around specific observed anti-social behaviours. So, is this just a question of the legitimate defence of property rights and privacy (the legal view) or is this any kind of a social problem? I think it is certainly more complicated than just being a question of individuals empowering themselves with technology to do the right thing.

There is a big unvoiced problem behind all of this, which is the decline of civility, neighbourliness and trust. It seems that most of the problems are interpersonal ones and would ideally be resolved not through the secret gathering of information to inform a police investigation, public prosecution or private legal action, but through communication with the neighbours concerned. Richard Sennett, Jane Jacobs and many others have observed that we live increasingly in a ‘society of strangers’. The turn to surveillance not as a last resort but as a ‘natural’ first option would seem to me not only a recognition of this, but a contribution to the wider problem. We don’t trust our neighbours so we watch them. But in watching them we diminish any remaining trust we had in them, and certainly they lose any trust they had in us.

This adds up. It is social, not just interpersonal. It means people accept the diminution of general rights of privacy in public spaces, and it justifies the intrusion of all kinds of agencies into the lives of individuals and groups. This is only encouraged by government campaigns to watch out for suspicious activity, by corporate pleas to be permanently on guard against ‘identity thieves’ and ‘hackers’, and of course by celebrity magazines and websites that encourage a voyeuristic interest in the intimate lives of others.

Article 12: Waking Up in a Surveillance Society

I’m in a film! Article 12: Waking Up in a Surveillance Society is a really essential new documentary made by Junco Films, now doing the rounds of international film festivals. According to the Leeds Film Festival, where it will be shown next:

“Article 12 presents an urgent and incisive deconstruction of the current state of privacy, the rights and desires of individuals and governments, and the increasing use of surveillance. The film adopts the twelfth article of the Universal Declaration of Human Rights to chart privacy issues worldwide, arguing that without this right no other human right can truly be exercised. It assembles leading academics and cultural analysts including Noam Chomsky, AC Grayling and Amy Goodman to highlight the devastating potency of surveillance, the dangers of complicity, and the growing movement fighting for this crucial right.”
Showings will be on Fri 12th Nov, 2010 at 20:15 in the Howard Assembly Room and on Tue 16th Nov, 2010 at 17:00 in Leeds Town Hall 2. The Tuesday showing will feature a discussion involving some of the contributors including AC Grayling (not me, although I was asked – it’s a bit too far to go!).
Future showings will include the Geneva International Human Rights Film Festival in March 2011 and hopefully Hotdocs in Toronto. If anyone else is interested in showing this film as part of an event, I’d be happy to contact the makers…

The city where the cameras never sleep… New York, New York

The Gothamist blog has a brief report on the massive upgrading and expansion of the video surveillance system in the New York public transit system. Like Chicago, which I’ve mentioned several times here, the cameras in New York are really just collection devices feeding an evolving suite of video analytic software that can track suspects or vehicles in real-time, or search through old footage to find multiple occurrences of particular distinctive objects or people.
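A toy sketch, with entirely invented frame tags and signatures, of the kind of search this implies: once detections from every camera are indexed, finding every occurrence of one distinctive object across old footage becomes a trivial query (real systems derive these signatures from trained detectors, not hand-written labels):

```python
# Hypothetical illustration only: an index of which "signatures" a detector
# claimed to see in each archived frame. All names here are invented.
archive = {
    "cam1_t0930": {"red-van", "pedestrian"},
    "cam3_t0935": {"red-van"},
    "cam7_t1102": {"pedestrian", "cyclist"},
    "cam2_t1140": {"red-van", "cyclist"},
}

def find_occurrences(archive, signature):
    """Return every frame in which the given signature was detected."""
    return sorted(f for f, sigs in archive.items() if signature in sigs)

print(find_occurrences(archive, "red-van"))
# -> ['cam1_t0930', 'cam2_t1140', 'cam3_t0935']
```

The hard part, of course, is the detection itself; but once detections are stored, retrospective search across months of footage costs almost nothing.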

The other notable thing is that the new camera system simply overlays the old: there is no attempt to connect the older cameras, which are not compatible and have far poorer image quality. As cameras and software get cheaper, this looks like the option many urban authorities will pursue. Cities like London, which pioneered widespread video surveillance but whose disconnected mosaic of incompatible systems has started to look increasingly ineffective and out-of-date, could deal with this not through expensive and unreliable fixes but simply by installing an entirely new integrated algorithmic system on top of, or alongside, the old ones. Technological fallibility and incompatibility can no longer be relied on as protections for the privacy rights of citizens in public spaces.

Backdoors for Spies in Mobile Devices

There’s been a lot of controversy this summer about the threats made to several large western mobile technology providers, mainly by Asian and Middle-Eastern governments, to ban their products and services unless they made it easier for internal intelligence services and political police to access the accounts of users. The arguments actually started back in 2008 in India, when the country’s Home Ministry demanded access to all communications made through Research in Motion’s (RIM) famous BlackBerry smartphone, which was starting to spread rapidly in the country’s business community. Not much came of this beyond RIM agreeing in principle to the demand.

Then over this summer the issue flared up again, both in India and most strongly in the United Arab Emirates (UAE) and Saudi Arabia. RIM’s data servers were located outside those countries, and the UAE’s Telecommunications Regulatory Authority (TRA) said that RIM was providing an illegal service which was “causing serious social, judicial and national security repercussions”. Both countries have notorious internal police and employ torture against political opponents.

RIM initially defended its encrypted services and its commitment to the privacy of its users in a full statement issued at the beginning of August. However, it soon caved in when it realised that this could cause a cascade of bans across the Middle East, India and beyond, and promised to place a data server in both nations; now India is once again increasing the pressure on RIM to do the same for its internal security services. So instead of a cascade of bans, we now have a massive increase in corporate-facilitated state surveillance. It’s Google and China all over again, but RIM put up even less of a fight.

However, a lot of people in these increasingly intrusive and often authoritarian regimes are not happy with the new accord between states and technology providers, and this discontent may yet prove more powerful than what states want. In Iran, Isa Saharkhiz, a leading dissident journalist and member of the anti-government Green Movement, is suing another manufacturer, Nokia Siemens Networks (NSN), in a US court for providing the Iranian regime with the means to monitor its mobile networks. NSN have washed their hands of this, saying it isn’t their fault what the Iranian government does with the technology, and insist that they have to provide “a lawful interception capability”, comparing this to the United States and Europe and claiming that the standardisation of their devices means that “it is unrealistic to demand… that wireless communications systems based on global technology standards be sold without that capability.”

There is an interesting point buried in all of this: the same backdoors built into western communications systems (and long before 9/11 came along, too) are now being exploited by countries with even fewer scruples about using this information to unjustly imprison and torture political opponents. But the companies concerned still have moral choices to make; they have a Corporate Social Responsibility (CSR) which is not simply a superficial agreement with anyone who shouts ‘security’ but a duty to their customers and to the human community. Whatever they say, they are making a conscious choice to make it easier for violent and oppressive regimes to operate. This cannot be shrugged off by blaming it on ‘standards’ (especially in an era of the supposed personal service and ‘mass customization’ of which the very same companies boast). And if they are going to claim adherence to ‘standards’, what about those most important standards of all, stated clearly in the Universal Declaration of Human Rights? Article 12 states: “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence,” and Article 19: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

City of Leon to install mass public iris-scanning

The City of Leon in Mexico, if a report in Fast Company is to be believed, is going ahead with a scheme that goes far further than any other urban surveillance project yet in existence. They are already installing scanners that, according to their manufacturer, Global Rainmakers Inc., an until-recently secretive company with ties to US military operations, can read the irises of up to 50 people per minute.

Now, we have to be careful here. Gizmodo, as usual, has gone way over the top with reports of ‘the end of privacy’ (which, if you believed their stories, has already happened as many times as the apocalypse has for Seventh-day Adventists…) and talk of the ‘entire city’ being covered and of ‘real-world’ operations (i.e. in uncontrolled settings). In fact, if you read the Fast Company report, and indeed the actual description of the products from the company, the claims are far more limited (and likely to be exaggerated anyway). There is no indication that the proposed iris scanners will work in uncontrolled settings. When the company talks about the scanners working ‘on the fly’, it means that they will work when someone is basically looking at the scanner, or near enough, whilst no more than 2 metres away (in the most advanced and expensive model; significantly less for most of them) and moving at no more than 1.5 metres per second (again, slower for the lower-range devices). All the examples on the company website show ‘pinch points’ (walls, fences, gates etc.) being used to channel those being identified towards the scanner. In other words, the scanners would not necessarily work in wide public streets or squares, and certainly not when people were moving freely.

So is this what is being proposed? Well, the company has announced two phases of the partnership with Leon – and we have as yet no word from Leon itself. Phase I will cover settings in which one might expect levels of access control to be high: prisons, police stations etc. Phase II will supposedly cover “mass transit, medical centers and banks, among other [unnamed] public and private locations”. It is also worth noting that compulsory enrolment in the scheme is limited to convicted criminals, with all other enrolment on an entirely voluntary basis.

I am not saying that this is not highly concerning – it is. But we need to be careful of all kinds of things here. First of all, the Fast Company report is pure corporate PR, and the dreams of Rainmakers’ CEO, Jeff Carter (basically, world domination and ‘the end of fraud’ – ha ha ha, as if…), are the same kind of macho bulltoffee that one would expect from any thrusting executive in a newish company in a highly competitive marketplace. Secondly, there’s a whole lot of space here for both technological failure and resistance. The current government of Leon may well find that the adverse publicity from this loses rather than gains them votes, and that in itself could see the end of the scheme, or its being limited to Phase I. In addition, without this being part of wider national networks, there may in the end be little real incentive for anyone to enrol voluntarily. Why would banks in Leon require this form of identification but not those in Mexico City or Toluca, for example? Will the city authorities force everyone who uses public transport to undergo an iris scan (which would make the ‘voluntary’ enrolment a sham)? This could all end up being as insignificant as the Mexican companies offering RFID implants as a supposed antidote to kidnapping, it could be the start of a seismic shift in the nature of urban space, or it could be a messy mixture.

I hope my colleagues in Mexico are paying attention though – and I will try to keep updated on what’s really going on beyond the corporate PR.

Facebook Places: opt-out now or everyone knows where you are?

Facebook Places… what to say? Most of the criticism writes itself because we have been here before with just about every new ‘feature’ that Facebook introduces, and they seem to have learned absolutely nothing from any of the previous criticisms of the way in which they introduce their new apps and the control users have over them. Basically, Facebook Places is just like Google Latitude, but:

1. instead of having to opt-in to it, you are automatically included unless you opt out; and (here’s the really creepy part),
2. instead of just you being able to tell your ‘friends’ where you are, unless you do turn it off, anyone who is your friend can tell anyone else (regardless of their relationship to you) where you are, automatically.

Luckily we know how to turn it off, thanks to Bill Cammack (via Boingboing).

When, if ever, will Facebook realise that ‘opt-out’ is an entirely unethical way of dealing with users? It lacks the key element of active consent. You cannot be assumed to want to give up your privacy because you fail to turn off whatever new app Facebook has suddenly decided to introduce without your prior knowledge. Facebook is basically a giant scam for collecting as much networked personal data as it can, to which it will eventually, whatever it says now, work out how to ‘add value’ (i.e. exploit or sell), whether its users like it or not. And surely this is now the ideal time for an open-source, genuinely consensual social networking system that isn’t beholden to some group of immature, ethically-challenged rich kids like Zuckerberg et al.?

Surveillance, Coercion, Privacy and the Census

There’s been a huge furore here in Canada about the current government’s decision to abolish the long-form census. I’ve been following the debate, more interested in what the proponents and opponents have been saying about privacy and surveillance than in intervening. But it’s about time I got off the fence, so here’s my two cents’ worth. It may come out as an op-ed piece in one of the papers soon, I don’t know…

Sense about the Census:

Why the Long-form Census debate really matters.

The debate about the scrapping of the long-form census is in danger of being unhelpfully polarized. The result can only benefit the current government to the long-term detriment of the Canadian people. On the one hand, some of those campaigning for the reinstatement of the survey have dismissed issues of surveillance and privacy. On the other hand, supporters of its abolition have referred to ‘privacy’ and ‘coercion’ as if these words in themselves were reason enough to cut the survey. But the whole way in which privacy has been discussed is a red herring. We need to reaffirm a commitment to privacy alongside other collective social values not in opposition to them. We need privacy and we need the census.

First, coercion. The long-form census is undoubtedly a form of coercive state surveillance. One only has to glance at the recent history of state data collection and its role in discrimination and mass murder to see that one can be far too blasé about the possibility of states misusing statistics. Examples abound, from the Holocaust to the genocide in Rwanda, and there is no reason to suppose that this could never happen again. In fact, technology makes discrimination easier and more comprehensive: with sophisticated data-mining techniques, inferences can be made about individuals and groups from disparate and seemingly harmless personal data.
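A minimal sketch of how ‘harmless’ data combines, using entirely invented records: a de-identified dataset and a public one that share a few quasi-identifiers (postcode, birth date, sex) can be joined to re-attach names to ‘anonymous’ records – the classic re-identification result:

```python
# Invented example data: a de-identified dataset with a sensitive attribute...
health_records = [
    {"postcode": "K7L 3N6", "dob": "1975-03-14", "sex": "F", "diagnosis": "asthma"},
    {"postcode": "K7L 3N6", "dob": "1982-11-02", "sex": "M", "diagnosis": "diabetes"},
]

# ...and a public dataset (e.g. a voter roll) that carries names.
voter_roll = [
    {"name": "A. Example", "postcode": "K7L 3N6", "dob": "1975-03-14", "sex": "F"},
    {"name": "B. Example", "postcode": "K7L 3N6", "dob": "1990-07-21", "sex": "M"},
]

def link(records, roll):
    """Join the two datasets on their shared quasi-identifiers."""
    key = lambda r: (r["postcode"], r["dob"], r["sex"])
    names = {key(v): v["name"] for v in roll}
    return [{"name": names[key(r)], "diagnosis": r["diagnosis"]}
            for r in records if key(r) in names]

print(link(health_records, voter_roll))
# One "anonymous" record now has a name attached.
```

Neither dataset is sensitive on its own; the harm emerges only from the linkage, which is exactly why aggregation and confidentiality rules around census data matter.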

However, just because censuses have the potential for abuse does not make them wrong. Surveillance forms the basis of modern societies, good and bad, and coercion is all around us from the time we are children told by our parents not to play on the stairs. Coercion can be caring; it can protect us and improve our lives. The long-form census would have to be shown to be unfairly coercive, or not to have enough beneficial policy outcomes to justify the coercion involved. This the government has failed to do, whereas the campaign for the restoration of the survey has highlighted numerous examples of improvements in communities across Canada resulting from long-form census data.

Now to privacy. The campaign to restore the long-form census has seen frequent instances of the argument, ‘nothing to hide, nothing to fear’. This is one of the most glib arguments about privacy and surveillance, not only because of the potential abuse of state data collection but also because it assumes so much about what people should want to keep private. Another common argument is that privacy is irrelevant because ‘everyone gives away their personal information on Facebook anyway’. But the fact that some people choose to share parts of their lives with selected others does not imply that any infringement of privacy is acceptable. Privacy depends on context. Social networking or marketing trends do not mean that ‘anything goes’ with personal data.

In making these arguments, campaigners end up unwittingly bolstering a government strategy that relies not only on the evocation of ‘coercion’ but on pitting individual privacy against collective social goals. Yet the government’s position is misleading. Privacy is not simply an individual right but also a collective social value. And further, just because the data is collected from individuals by the state does not mean that the state infringes on privacy. It depends on whether the data is stored without consent in a way that identifies individuals, or is used in a way that negatively impacts upon them.

However, Statistics Canada have demonstrated a commitment to privacy within the census process. The long-form census data is not used to identify or target individuals; it is aggregated and used for wider community purposes. As Statistics Canada say quite clearly on their website: “No data that could identify an individual, business or organization, are published without the knowledge or consent of the individual, business or organization.” The census returns are confidential, and Statistics Canada employees, who are bound by the Statistics Act, are the only people who will ever have access to the raw returns. All this was confirmed by the Office of the Privacy Commissioner of Canada, who found the 2006 census fully compliant with privacy law.

So both privacy and coercion are red herrings. The conduct of the long-form census has demonstrated a commitment to privacy alongside other collective social values in support of individuals and the wider community. This moderate, sensible and profoundly Canadian position is now under threat. That is why this debate matters.