Spain vs. Google or Freedom of Expression vs. the Right to Be Forgotten

Several outlets are reporting today on an interesting clash between the Spanish courts and Google. The argument is over whether Google should carry articles that have been challenged by Spanish citizens as breaching their privacy. The Spanish data protection commissioner has won a court injunction over the publication of material that is being challenged under privacy legislation.

Clearly there are two main issues here. One is the specific question of whether Google, as a search engine, can be considered a publisher or, as it claims, simply an intermediary that publishes nothing and only links to items published by others. This is important for Google as a business and for those who use it.

But the other, deeper question is more interesting: what is going on here is a struggle between two kinds of rights. The right to freedom of expression, to be able to say what one likes, is a longstanding one in democracies; however, it is almost nowhere absolute. The problem in a search-engine-enabled information age is that these exceptions, which relate to the (un)truth of published allegations (questions of libel and false accusation), to privacy and to several other values, are increasingly challenged by the ability of people in one jurisdiction to access the same (libellous, untrue or privacy-destructive) information from outside that jurisdiction via the Internet.

In Spain, the question has apparently increasingly been framed in terms of a new ‘right to be forgotten’ or ‘right to delete’. This is not entirely new – certainly police records in many countries have elements that are time-limited, but these kinds of official individually beneficial forgettings are increasingly hard to maintain when information is ‘out there’ proliferating, being copied, reposted and so on.

This makes an interesting contrast with the Wikileaks affair. When it comes to the state and corporations, questions of privacy and individual rights should not be used even analogically. The state may assert ‘secrecy’, but the state has no ‘right of privacy’. Secrecy is an instrumental concept relating to questions of risk. Corporations may assert ‘confidentiality’, but this is a question of law and custom relating to the regulation of the economy, not of ‘rights’.

Privacy is a right that can only be attached to (usually) human beings in their unofficial thoughts, activities and existence. And the question of forgetting is really a spatio-temporal extension of the concept of privacy necessary in an information society. Because the nature of information and communication has changed, privacy has to be considered over space and through time in a way that was not really necessary (or at least not for so many people so much of the time) previously.

This is where Google’s position comes back into play. Its insistence on neutrality is premised on a libertarian notion of information (described by Erik Davis some time ago as a kind of gnostic American macho libertarianism that pervades US thinking on the Internet). But if this is ‘freedom of information’ as usually understood in democratic societies, it does have limits and an extreme political interpretation of such freedom cannot apply. Should Google therefore abandon the pretence of neutrality and play a role in helping ‘us’ forget things that are untrue, hurtful and private to individuals?

The alternative is challenging: the idea that not acting is a morally ‘neutral’ position is clearly incorrect, because it presages a new global norm of information flow premised on not forgetting and on the collapse of different jurisdictional norms of privacy. In this world, whilst privacy may not be dead, the law can no longer be relied on to enforce it, and other methods, from simple personal data management to more ‘outlaw’ technological means of enforcement, will increasingly be the standard for those who wish to maintain privacy. This suggests that money and/or technical expertise will be what allow one to be forgotten, and those without either will be unable to have meaningful privacy except insofar as they are uninteresting or unnoticed.

Internet doit être défendu! (4)

I write this addition to my ongoing series of thoughts on the implications of the Wikileaks scandal en français because, according to Le Point, the Assemblée Nationale has passed a bill, Loppsi 2, which, amongst other things, in its Article 4 allows the French government to ban particular websites and essentially to ‘filter’ the Internet. The bill of course has ‘good intentions’: in this case, it is aimed at paedophiles, but the wording is such that it allows a far wider use against “la cybercriminalité en général” (cybercrime in general). Regardless, as the article points out, “Les expériences de listes noires à l’étranger ont toutes été des fiascos” – blacklist experiments abroad have all been fiascos – in other words, such bills have generally been a complete failure, as in most cases the state’s technology and expertise cannot deliver what the law allows.

However, I am left wondering what makes this any different from what China does, and what moral right the French state now has to criticise Chinese censorship, or indeed any other regime that is repressive of information rights. And of course, what other very reasonable ‘good intentions’ could be drawn upon for closing the Net – opposing ‘information terrorism’, par exemple?

Facebook face-recognition

Reports are that US users can now use an automated face-recognition function to tag people in photos posted to the site. To be clear, this is not the already dubious practice of someone else tagging you in a photo, but an automated service – upload a picture and the system will search through it, identifying faces and suggesting tags.

As a Facebook engineer is quoted as saying:

“Now if you upload pictures from your cousin’s wedding, we’ll group together pictures of the bride and suggest her name… Instead of typing her name 64 times, all you’ll need to do is click ‘Save’ to tag all of your cousin’s pictures at once.”

Once again, just as with Facebook Places, the privacy implications of this do not appear to have been thought through (or more likely just disregarded) and it’s notable that this has not yet been extended to Canada, where the federal Privacy Commissioner has made it very clear that Facebook cannot unilaterally override privacy laws.

Let’s see how this one plays out, and how much, once again, Facebook has to retrofit privacy settings…

The Internet Must Be Defended!

As I am just putting the finishing touches on a new issue of Surveillance & Society, on surveillance and empowerment, the furore over the Wikileaks website and its publication of secret cables from US diplomatic sources has been growing. Over the last few days, Julian Assange, the public face of the website and one of its founders, has been arrested in London on supposedly unrelated charges as US right-wing critics call for his head; the site’s domain name has been withdrawn; Amazon has kicked the organization off its US cloud computing service; one of Assange’s bank accounts has been seized; and major companies involved in money transfer – Paypal, Visa and Mastercard – have all stopped serving Wikileaks, claiming that it had breached their terms of service.

At the same time, hundreds of mirror sites for Wikileaks have been set up around the world, and the leaks show no sign of slowing down. The revelations themselves are frequently mundane or confirm what informed analysts knew already, but it is not the content of these particular leaks that is important, it is the point at which they come in the struggle over information rights and the long-term future of the Internet.

The journal which I manage is premised on open access to knowledge. I support institutional transparency and accountability at the same time as I defend personal privacy. It is vital not to get the two mixed up. In the case of Wikileaks, the revelation of secret information is not a breach of anyone’s personal privacy; rather, it is a massively important development in our ability to hold states to account in the information age. It is about equalization, democratization and the potential creation of a global polity to hold the already globalized economy and political elites accountable.

John Naughton, writing on The Guardian website, argues that western states who claim openness is part of freedom and democracy cannot have it both ways. We should, he says, ‘live with the Wikileakable world’. It is this view we should accept, not the ambivalence of people like digital critic Clay Shirky who, despite being a long-term advocate of openness – seemingly only so long as the openness of the Internet remained safely confined to areas like economic innovation – cannot bring himself to defend this openness now that its genuinely political potential is beginning to be realised.

The alternative to openness is closure, as Naughton argues. The Internet, created by the US military but long freed from their control, is now under threat of being recaptured, renationalized, sterilized and controlled. With multiple attacks on the net – from capitalist states’ redefinition of intellectual property and copyright, through increasingly comprehensive surveillance of Internet traffic by almost all states, to totalitarian states’ censorship of sites – and with the two becoming increasingly indistinguishable over the case of Wikileaks, now is the time for all who support an open and liberatory Internet to stand up.

Over 30 years ago, between 1975 and 1976 at the Collège de France, Michel Foucault gave a powerful series of lectures entitled Society Must Be Defended. With so much that is social vested in these electronic chains of connection and communication, we must now argue clearly and forcefully that, nation-states and what they want be damned, “The Internet Must Be Defended!”

UK U-turn on Interception Consultation

The BBC reports that the UK Home Office has been forced by the European Union to accept input from civil and digital rights groups over the revision of its Regulation of Investigatory Powers Act (RIPA) – I’ve posted lots on RIPA here in the past, so it’s worth doing a search of this site for some of the backstory.

The u-turn was apparently sparked by the EU’s report on the Phorm debacle (see also here) which, amongst other things, concluded that the UK was in breach of the Privacy Directive for having no adequate complaints procedure or system of legal redress for those who believe they have been subject to illicit surveillance. Amongst the little nuggets in this story is the fact that since its creation in 1986, the Interception Commissioner has upheld just four complaints. Yes, four. 4.

The consultation has also been extended to the 17th of December, so get writing if you haven’t already made your views known. You can find the consultation document here (pdf).

New UK government to go ahead with old government plan on data retention

One of the many promises made by the new Conservative-Liberal Democrat coalition government was that it would “end the storage of internet and e-mail records without good reason.” The obvious flaw in this promise is that the protection it provided was only good so long as the government was unable to invent a ‘good reason.’

Now it appears, according to The Guardian, that such a ‘good reason’ has been found in the Strategic Defence and Security Review: to keep records of all website visits, e-mails and phone calls made in the UK. And it is an old reason: basically, everything should be kept in case the police or intelligence services might find it useful in the prevention of a ‘terror-related crime’. Note: not actually terrorism, but terror-related, which is rather more vague and not so clearly defined in law, even given that ‘terrorism’ is already very broadly defined in the relevant statutes.

This is pretty much exactly what the last Labour government was planning to do anyway with the proposed Communications Bill. Oh, and don’t forget that the cost of this has been estimated at around 2Bn GBP ($3.5Bn) in a country that has just announced ‘unavoidable’ welfare cuts of 7Bn GBP… that’s the reality of the ‘age of austerity’ for you. It bears out what Peter Gill argued in his book Policing Politics (1994): that the intelligence services constitute a ‘secret state’ which persists beyond the superficial front of the government of the day.

Night of the Surveillance Dead

In one of those curious synchronicities that occasionally emerge out of the chaotic foam of the internet, I came across two stories (of an entirely different nature) featuring surveillance and ‘zombies’ this week.

The first is one that Ars Technica first publicized recently – the creation of new undeletable cookies. Cookies, for the still unaware, are little pieces of data that sit on your computer and store information, usually relating to websites you have visited – passwords and the like. Originally they were simply a tool to make it easier to handle the proliferation of sites that needed login details from users, and in most cases they used to be both moderately consensual (i.e. you would be, or could be, asked whether you wanted your computer to download one) and relatively easy to remove. However, in recent years this has changed. For a start, there are now so many sites and applications using cookies that it has become inconvenient to ‘consent’ to them or to manage them in any unautomated way.

The new development, however, is a system that uses the database capabilities in HTML5 rather than being a traditional cookie. The major problem with this – and you can read more about the technical details in the story – is that these can never be permanently deleted by the user: when they are deleted, they respawn themselves, recreating the data profile of the user by reaching into other storage areas of your computer (including copies you thought had also been deleted). The company concerned, Ringleader Digital, which specializes in ‘targeted, trackable advertising’ for ‘real-time visibility’, says users can ‘opt out’ via a form on its website, but this so-called ‘opt-out’ is hedged about with terms and conditions.
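The respawning trick is simple in principle: the tracking ID is copied into several storage areas, and deleting it from one is undone on the next visit from any surviving copy. Here is a minimal toy simulation of that logic (all names are illustrative, not Ringleader’s actual code; real systems use the cookie jar, HTML5 storage, Flash objects, cache headers and more):

```python
# Toy simulation of a 'zombie cookie'. The tracking ID is held
# redundantly; a user's 'clear cookies' action wipes only one area,
# and the next page load restores it from any surviving copy.

class ZombieCookie:
    def __init__(self, user_id):
        # Stand-ins for the cookie jar, HTML5 storage and browser cache.
        self.stores = {"cookie": user_id, "html5_db": user_id, "cache": user_id}

    def delete_from(self, store):
        # What clearing one storage area does: wipes that copy only.
        self.stores[store] = None

    def respawn(self):
        # On the next visit, any surviving copy repopulates the rest.
        survivors = [v for v in self.stores.values() if v is not None]
        if survivors:
            for key in self.stores:
                self.stores[key] = survivors[0]
        return self.stores["cookie"]

tracker = ZombieCookie("user-1234")
tracker.delete_from("cookie")    # user clears their cookies...
print(tracker.respawn())         # ...but the ID is back: user-1234
```

The point of the sketch is that only deleting every copy in the same session kills the identifier, which is exactly what ordinary browser controls do not do.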

Now, Ars Technica reports that an open-source developer, Samy Kamkar, has created ‘evercookie’, a virtually indestructible cookie designed as an educational tool to make users aware of the presence of these new internet zombies that do their master’s bidding. It’s a neat idea, but I wonder – and I hope you will excuse my taking the zombie metaphor just a little further here – whether, in raising the dead to show that necromancy is bad, good wizards like Samy Kamkar might in the end just be contributing to the problem. It isn’t as if most ordinary users understand these strange powers. Perhaps the people who need to witness the power of these occult rites are the regulators. It’s not clear to me whether these kinds of programs would be considered in any way legal in most places with strong data-protection and privacy laws, like Canada and the EU – as the controversy over the similar British Telecom system, Phorm, showed. So I would be very interested in what the Canadian Privacy Commissioner has to say about it, for example. I will be asking them.

(The second zombie story I will add later…)

Cyber-Surveillance in Everyday Life: Call for Participation

Call For Participation: Cyber-Surveillance in Everyday Life

Digitally mediated surveillance (DMS) is an increasingly prevalent, but still largely invisible, aspect of daily life. As we work, play and negotiate public and private spaces, on-line and off, we produce a growing stream of personal digital data of interest to unseen others. CCTV cameras hosted by private and public actors survey and record our movements in public space, as well as in the workplace. Corporate interests track our behaviour as we navigate both social and transactional cyberspaces, data mining our digital doubles and packaging users as commodities for sale to the highest bidder. Governments continue to collect personal information on-line with unclear guidelines for retention and use, while law enforcement increasingly use internet technology to monitor not only criminals but activists and political dissidents as well, with worrisome implications for democracy.

This international workshop brings together researchers, advocates, activists and artists working on the many aspects of cyber-surveillance, particularly as it pervades and mediates social life. This workshop will appeal to those interested in the surveillance aspects of topics such as the following, especially as they raise broader themes and issues that characterize the cyber-surveillance terrain more widely:

  • social networking (practices & platforms)
  • search engines
  • behavioural advertising/targeted marketing
  • monitoring and analysis techniques (facial recognition, RFID, video analytics, data mining)
  • Internet surveillance (deep packet inspection, backbone intercepts)
  • resistance (actors, practices, technologies)

A central concern is to better understand DMS practices, making them more publicly visible and democratically accountable. To do so, we must comprehend what constitutes DMS, delineating parameters for research and analysis. We must further explore the way citizens and consumers experience, engage with and respond to digitally mediated surveillance. Finally, we must develop alliances, responses and counterstrategies to deal with the ongoing creep of digitally mediated surveillance in everyday life.

The workshop adopts a novel structure, mainly comprising a series of themed panels organized to address compelling questions arising around digitally mediated surveillance that cut across the topics listed above. Some illustrative examples:

  1. We regularly hear about ‘cyber-surveillance’, ‘cyber-security’, and ‘cyber-threats’. What constitutes cyber-surveillance, and what are the empirical and theoretical difficulties in establishing a practical understanding of cyber-surveillance? Is the enterprise of developing a definition useful, or condemned to analytic confusion?
  2. What are the motives and strategies of key DMS actors (e.g. surveillance equipment/systems/strategy/“solutions” providers; police/law enforcement/security agencies; data aggregation brokers; digital infrastructure providers; oversight/regulatory/data protection agencies; civil society organizations; and users/citizens)?
  3. What are the relationships among key DMS actors, e.g. between social networking site providers and marketers (e.g. Facebook and DoubleClick)? Between digital infrastructure providers and law enforcement (e.g. lawful access)?
  4. What business models are enterprises pursuing that promote DMS in a variety of areas, including social networking, location tracking, ID’d transactions, etc.? What can we expect of DMS in the coming years? What new risks and opportunities are likely?
  5. What do people know about the DMS practices and risks they are exposed to in everyday life? What are people’s attitudes to these practices and risks?
  6. What are the politics of DMS; who is active? What are their primary interests, what are the possible lines of contention and prospective alliances? What are the promising intervention points and alliances that can promote a more democratically accountable surveillance?
  7. What is the relationship between DMS and privacy? Are privacy policies legitimating DMS? Is a re-evaluation of traditional information privacy principles required in light of new and emergent online practices, such as social networking and others?
  8. Do deep packet inspection and other surveillance techniques and practices of internet service providers (ISP) threaten personal privacy?
  9. How do new technical configurations promote surveillance and challenge privacy? For example, do cloud computing applications pose a greater threat to personal privacy than the client/server model? How do mobile devices and geo-location promote surveillance of individuals?
  10. How do the multiple jurisdictions of internet data storage and exchange affect the application of national/international data protection laws?
  11. What is the role of advocacy/activist movements in challenging cyber-surveillance?

In conjunction with the workshop there will be a combination of public events on the theme of cyber-surveillance in everyday life:

  • poster session, for presenting and discussing provocative ideas and works in progress
  • public lecture or debate
  • art exhibition/installation(s)

We invite 500 word abstracts of research papers, position statements, short presentations, works in progress, posters, demonstrations, installations. Each abstract should:

  • address explicitly one or more “burning questions” related to digitally-mediated surveillance in everyday life, such as those mentioned above.
  • indicate the form of intended contribution (i.e. research paper, position statement, short presentation, work in progress, poster, demonstration, installation)

The workshop will consist of about 40 participants, at least half of whom will be presenters listed on the published program. Funds will be available to support the participation of representatives of civil society organizations.

Accepted research paper authors will be invited to submit a full paper (~6000 words) for presentation and discussion in a multi-party panel session. All accepted submissions will be posted publicly. A selection of papers will be invited for revision and academic publication in a special issue of an open-access, refereed journal such as Surveillance and Society.

In order to facilitate a more holistic conversation, one that reaches beyond academia, we also invite critical position statements, short presentations, works-in-progress, interactive demonstrations, and artistic interpretations of the meaning and import of cyber-surveillance in everyday life. These will be included in the panel sessions or grouped by theme in concurrent ‘birds-of-a-feather’ sessions designed to tease out, more interactively and informally, emergent questions, problems, ideas and future directions. This BoF track is meant to be flexible and contemporary, welcoming a variety of genres.

Instructions for making submissions will be available on the workshop website by Sept 1.

See also an accompanying Call for Annotated Bibliographies, aimed at providing background materials useful to workshop participants as well as more widely.

Timeline:

2010:

Oct. 1: Abstracts (500 words) for research papers, position statements, and other ‘birds-of-a-feather’ submissions

Nov. 15: Notification to authors of accepted research papers, position statements, etc. Abstracts posted to web.

2011:

Feb. 1: Abstracts (500 words) for posters

Mar. 1: Notification to authors of accepted posters.

Apr. 1: Full research papers (5-6000 words) due, and posted to web.

May 12-15 Workshop

Sponsored by: The New Transparency – Surveillance and Social Sorting.

International Program Committee: Jeffrey Chester (Center for Digital Democracy), Roger Clarke (Australian Privacy Foundation), Gus Hosein (Privacy International, London School of Economics), Helen Nissenbaum (New York University),
Charles Raab (University of Edinburgh) and Priscilla Regan (George Mason University)

Organizing Committee: Colin Bennett, Andrew Clement, Kate Milberry & Chris Parsons.

University of Toronto & University of Victoria.

Backdoors for Spies in Mobile Devices

There’s been a lot of controversy this summer about the threats made to several large western mobile technology providers, mainly by Asian and Middle Eastern governments, to ban their products and services unless they made it easier for internal intelligence services and political police to access the accounts of users. The arguments actually started back in 2008 in India, when the country’s Home Ministry demanded access to all communications made through Research in Motion’s (RIM) famous BlackBerry smartphone, which was starting to spread rapidly in the country’s business community. Not much came of this beyond RIM agreeing in principle to the demand. Then over this summer, the issue flared up again, both in India and most strongly in the United Arab Emirates (UAE) and Saudi Arabia. RIM’s data servers were located outside those countries, and the UAE’s Telecommunications Regulatory Authority (TRA) said that RIM was providing an illegal service which was “causing serious social, judicial and national security repercussions”. Both countries have notorious internal police and employ torture against political opponents.

RIM initially defended its encrypted services and its commitment to the privacy of its users in a full statement issued at the beginning of August. However, it soon caved in when it realised that this could cause a cascade of bans across the Middle East, India and beyond, and promised to place a data server in both nations; now India is once again increasing the pressure on RIM to do the same for its internal security services. So instead of a cascade of bans, we now have a massive increase in corporate-facilitated state surveillance. It’s Google and China all over again, but RIM put up even less of a fight.

However, a lot of people in these increasingly intrusive and often authoritarian regimes are not happy with the new accord between states and technology providers, and this may yet prove more powerful than what states want. In Iran, Isa Saharkhiz, a leading dissident journalist and member of the anti-government Green Movement, is suing another manufacturer, Nokia Siemens Networks, in a US court for providing the Iranian regime with the means to monitor its mobile networks. NSN have washed their hands of this, saying it isn’t their fault what the Iranian government does with the technology, and insist that they have to provide “a lawful interception capability”, comparing this to the United States and Europe and claiming that the standardisation of their devices means that “it is unrealistic to demand… that wireless communications systems based on global technology standards be sold without that capability.”

There is an interesting point buried in all of this, which is that the same backdoors built into western communications systems (and long before 9/11 came along, too) are now being exploited by countries with even fewer scruples about using this information to unjustly imprison and torture political opponents. But the companies concerned still have moral choices to make: they have a Corporate Social Responsibility (CSR) which is not simply a superficial agreement with anyone who shouts ‘security’ but a duty to their customers and to the human community. Whatever they say, they are making a conscious choice to make it easier for violent and oppressive regimes to operate. This cannot be shrugged off by blaming it on ‘standards’ (especially in an era of the supposed personal service and ‘mass customization’ of which the very same companies boast). And if they are going to claim adherence to ‘standards’, what about the most important standards of all, as stated clearly in the Universal Declaration of Human Rights? Article 12 states: “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence,” and Article 19: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

Facebook Places: opt-out now or everyone knows where you are?

Facebook Places… what to say? Most of the criticism writes itself because we have been here before with just about every new ‘feature’ that Facebook introduces, and they seem to have learned absolutely nothing from any of the previous criticisms of the way in which they introduce their new apps and the control users have over them. Basically, Facebook Places is just like Google Latitude, but:

1. instead of having to opt-in to it, you are automatically included unless you opt out; and (here’s the really creepy part),
2. instead of just you being able to tell your ‘friends’ where you are, unless you do turn it off, anyone who is your friend can tell anyone else (regardless of their relationship to you) where you are, automatically.
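The difference between the two models comes down to a single default value: an opt-in feature starts disabled for every account, while an opt-out one starts enabled, so data flows from everyone who has not yet noticed the setting. A minimal sketch (hypothetical names, not Facebook’s actual settings schema):

```python
# Contrast opt-in vs opt-out defaults for a location-sharing feature.
# 'None' means the user never touched the setting - the common case.

def visible_users(accounts, default_enabled):
    """Return users whose location is shared, given the feature default."""
    return [user for user, choice in accounts.items()
            if (default_enabled if choice is None else choice)]

accounts = {"alice": None, "bob": None, "carol": False, "dave": True}

# Opt-in (the Google Latitude model): only explicit consenters share.
print(visible_users(accounts, default_enabled=False))  # ['dave']

# Opt-out (the Facebook Places model): everyone who hasn't acted shares.
print(visible_users(accounts, default_enabled=True))   # ['alice', 'bob', 'dave']
```

The objection in the post is precisely this: under opt-out, the untouched-setting majority is exposed by default.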

Luckily we know how to turn it off, thanks to Bill Cammack (via Boingboing).

When, if ever, will Facebook realise that ‘opt-out’ is an entirely unethical way of dealing with users? It lacks the key element of active consent. You cannot be assumed to want to give up your privacy because you fail to turn off whatever new app Facebook has suddenly decided to introduce without your prior knowledge. Facebook is basically a giant scam for collecting as much networked personal data as it can, which it will eventually, whatever it says now, work out how to ‘add value’ to (i.e. exploit or sell), whether its users like it or not. And surely this is now the ideal time for an open-source, genuinely consensual social networking system that isn’t beholden to some group of immature, ethically challenged rich kids like Zuckerberg et al.?