On the ‘Right to Be Forgotten’

Viktor Mayer-Schönberger argues today both that there is not much that is new in the European Court of Justice decision ordering Google to adjust its search results to accommodate one individual’s right to privacy, and that complying will not be a problem because Google already handles large volumes of copyright removal requests very quickly. Meanwhile, the decision has sparked some rather silly commentary across the media, usually from the neoliberal and libertarian right, claiming that this is a kind of censorship or that it will open the door to states controlling search results.

I think it’s vital to remember that there is an obvious difference between personal privacy, corporate copyright and state secrecy. It isn’t helpful to conflate these in discussion, as if each somehow set a precedent for the others (and I should be clear that Mayer-Schönberger is not doing this; he is merely pointing to the ease with which Google already accommodates copyright takedown notices to show that complying with this ruling is neither hard nor expensive for the company). A state’s attempts to remove things it finds inconvenient are not the same as the protection of personal privacy, and neither is the same as copyright. This decision is not a precedent for censorship by governments or control by corporations, and we should guard very strongly against any attempts to use it in this way.

Google’s algorithms already do a whole range of work that we don’t see, and to suggest that they are (or were) open, free and neutral, and will now be ‘biased’ or ‘censored’ after this decision, is only testament to how much we rely on Google, largely unthinkingly. Where I start to part company with Mayer-Schönberger is in his dismissal of the case as no more significant than a records deletion request in any other medium. It isn’t; it’s much more significant.

You are still perfectly free to make the effort to consult public records about the successful complainant in the case (or anyone else) in the ways you always have. The case was not brought against those holding the information or even those making it public. What the case sought to argue, and what the court’s verdict implies, is that there are good social reasons to limit the kind of comprehensive and effortless search that Google and other search engines provide when it comes to the personal history of private individuals – not to allow one thing that is over and done with to continue to define the public perception of a person anywhere in the world, potentially for the rest of their life (and beyond). Something being public is not the same as something being easily and instantaneously available to everyone forever. In essence the ruling provides a kind of analogue, for personal data, of the right to privacy in public places. It also recognizes that the existence and potential of any information technology should not be what defines society; rather, social priorities should set limits on how information technologies are used.

Personally, I believe that this is a good thing. However, as the politics of information play out over the next few years, I also have no doubt that it’s something that will come up again and again in courts across the world…

PS: I first wrote about this back in 2011 here – I think I can still stand behind what I thought then!

Is Google taking a stand?

According to Wired’s Threat Level blog, Google is taking a rather tougher stance towards the US federal government when it comes to requests for cloud-stored data for investigations. The company says it is now requiring judicial warrants from state organisations. As Wired points out, even though this might seem ethically sound, it is on dubious legal ground, as the US Electronic Communications Privacy Act allows the federal government access to such documents without a warrant. And yet no court challenge has yet been made by the government to Google’s stance.

So what is going on here? Is Google serious about taking on ‘the feds’ in favour of users? Is this new pro-user line merely contingent, so that once something ‘really important’ is demanded the company will cave in? Is there some kind of backroom deal? Or is Google actually being rather cynical, because the company knows that the NSA can access everything it has anyway (and probably by arrangement – after all, the NSA helped Google out a lot in its battle with China’s authorities)? I suspect there is much more to this apparently casual revelation…

Spain vs. Google or Freedom of Expression vs. the Right to Be Forgotten

Several outlets are reporting today on an interesting clash between Spanish courts and Google. The argument is over whether Google should carry articles that have been challenged by Spanish citizens as breaching their privacy. The Spanish data protection commissioner won an injunction in the courts over the publication of material that is being challenged under privacy legislation.

Clearly there are two main issues here. One is the specific issue of whether Google, as a search engine, can be considered as a publisher, or as it claims, simply an intermediary which publishes nothing, only linking to items published by others. This is important for Google as a business and for those who use it.

But the other, more interesting issue is the deeper question of what is really going on here: a struggle between two kinds of rights. The right to freedom of expression, to be able to say what one likes, is a longstanding one in democracies, although it is almost nowhere absolute. The problem in a search-engine-enabled information age is that the exceptions, which relate to the (un)truth of published allegations (questions of libel and false accusation), to privacy, and to several other values, are increasingly challenged by the ability of people in one jurisdiction to access the same (libellous, untrue or privacy-destructive) information from outside that jurisdiction via the Internet.

In Spain, the question has apparently increasingly been framed in terms of a new ‘right to be forgotten’ or ‘right to delete’. This is not entirely new – certainly police records in many countries have elements that are time-limited, but these kinds of official individually beneficial forgettings are increasingly hard to maintain when information is ‘out there’ proliferating, being copied, reposted and so on.

This makes an interesting contrast with the Wikileaks affair. Here, where it comes to the State and corporations, questions of privacy and individual rights should not be used even analogically. The state may assert ‘secrecy’ but the state has no ‘right of privacy’. Secrecy is an instrumental concept relating to questions of risk. Corporations may assert ‘confidentiality’ but this is a question of law and custom relating to the regulation of the economy, not to ‘rights’.

Privacy is a right that can only be attached to (usually) human beings in their unofficial thoughts, activities and existence. And the question of forgetting is really a spatio-temporal extension of the concept of privacy necessary in an information society. Because the nature of information and communication has changed, privacy has to be considered over space and through time in a way that was not really necessary (or at least not for so many people so much of the time) previously.

This is where Google’s position comes back into play. Its insistence on neutrality is premised on a libertarian notion of information (described by Erik Davis some time ago as a kind of gnostic American macho libertarianism that pervades US thinking on the Internet). But if this is ‘freedom of information’ as usually understood in democratic societies, it does have limits and an extreme political interpretation of such freedom cannot apply. Should Google therefore abandon the pretence of neutrality and play a role in helping ‘us’ forget things that are untrue, hurtful and private to individuals?

The alternative is challenging: the idea that not acting is a morally ‘neutral’ position is clearly incorrect, because it presages a new global norm of information flow premised on not forgetting and on the collapse of differing jurisdictional norms of privacy. In this world, whilst privacy may not be dead, the law can no longer be relied on to enforce it, and other methods, from simple personal data management to more ‘outlaw’ technological means of enforcement, will increasingly be the standard for those who wish to maintain privacy. This suggests that money and/or technical expertise will be what allows one to be forgotten, and those without either will have no meaningful privacy except insofar as they are uninteresting or unnoticed.

Google vs. Privacy Commissioners Round 1

Google and a group of Information and Privacy Commissioners have been having an interesting set-to over the last couple of days. First, a group including Canada’s Privacy Commissioner and the UK’s Information Commissioner sent a letter to Google expressing concern about their inadequate privacy policies, especially with regard to new developments like Buzz, Google’s new answer to Facebook.

Then Google put up a post on its blog unveiling a new tool which maps various governments’ requests for censorship of Google’s internet services. Interestingly, it framed this by reference to Article 19 of the Universal Declaration of Human Rights.

So now we have two sets of bodies referring to different ‘human rights’ as the basis for their politics. Of course they are not incompatible. Google is right to highlight state intervention in consensual information-sharing as a threat, but equally the Privacy Commissioners are right to pull Google up for lax privacy-protection practices. The problem with Google is that it thinks it is at the leading edge of a revolution in openness and transparency (which, not coincidentally, will lead to most people storing their information in Google’s ‘cloud’); the problem with the Privacy Commissioners is that they are not yet adapting fast enough to the multiple and changing configurations of personal privacy and openness now emerging, as they have to work with quite outdated data-protection laws.

This won’t be the end, but let’s hope it doesn’t get messy…

Google does the right thing, but…

Google is, as I type this, closing down its Chinese site as the first stage of its withdrawal of service from mainland China, in response to numerous attacks on the company’s computers from hackers allegedly connected to the Chinese state, and to ongoing demands for a censored service with which it felt it could not comply. The company claims that Chinese users will still be able to use Google through the special Hong Kong website, http://www.google.com.hk, which for historical reasons falls outside the Chinese state’s Internet control regime. Whether the site will actually be accessible to Chinese Net users is debatable; some say they already cannot access it. There are also numerous ‘fake Google’ sites that have sprung up to try to make some fast cash out of the situation.

But there’s more to this, of course. Google has been widely reported to have opened its doors to the US National Security Agency (NSA) in order, it says, to deal with the hacking issue, but the NSA only gets involved in matters of US national security – if Google is essentially saying it is beholden to US intelligence policy and interests, I am not sure that this is a whole lot better than bowing to China. You can be sure, as well, that once invited in, the NSA will insinuate itself into the company. Having a proper official backdoor into Google would make things a lot easier for the NSA, especially in populating its shiny new data warehouse in Utah.

Google: ‘give us data or you could die!’

I’ve been keeping a bit of an eye on the way that online systems are being used to map disease spread, including by Google. What I didn’t anticipate is that Google would use this as a kind of emotional blackmail to persuade governments to let it keep as much data as it likes for as long as possible.

Arguing against the European Commission’s proposal that Google should have to delete personal data after 6 months, Larry Page claims that to do so would be “in direct conflict with being able to map pandemics” and that without this the “more likely we all are to die.”

Google talk a lot of sense sometimes – I was very impressed with their Privacy Counsel, Peter Fleischer, at a meeting I was at the other week – and in many ways they are now an intimate part of the daily lives of millions of people, but this kind of overwrought emotionalism does them no favours and belies their motto, ‘don’t be evil’.

(again, thanks to Seda Gurses for finding this)

Tracking disease spread on the Internet

Internet disease tracking using interactive maps or mash-ups seems to be one of the more constructive uses of the surveillance potential that comes with the combination of easy-to-use digital mapping and online communications. Both Computer World and The Guardian tech blog reported a few days back how Google, following on from its tracking of previous flu epidemics, is experimenting with tracking swine flu cases in Mexico.

Google Flu Trends mapping system for Mexico

However other web-crawler-based systems also exist for tracking the spread of disease (or indeed potentially almost anything) as The Guardian reported on Wednesday. Leading the way is HealthMap, which comes complete with Twitter feeds and suchlike.

Swine flu mapping from HealthMap.com

As the latter report makes clear, however, this is not all good news; there are many problems with using web-crawlers to provide ‘reliable’ data, not least because there is so much noise relative to signal on the Internet. The other problem is that although these systems might appear current or even ‘predictive’ by virtue of their speed and interactivity, they are of course always already in the past, since they are compilations of reports, many of which may already be dated before they are uploaded to the ‘net. Better real-time reporting from individuals may be possible with mobile reports, but these could lack the filter of expert medical knowledge and may lead to further degradation in the reliability of the data. Can you have both more reliability and more speed / predictability with systems like this? That’s the big question…
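
To make the lag point concrete, here is a minimal, purely illustrative sketch (the report dates and data structure are invented for the example, not drawn from HealthMap or Google Flu Trends): given crawled reports that each carry an event date and a publication date, the gap between the two shows how far ‘in the past’ even a live-looking map really is.

```python
from datetime import date
from statistics import median

# Invented example reports: (event_date, published_date)
reports = [
    (date(2009, 4, 20), date(2009, 4, 24)),
    (date(2009, 4, 22), date(2009, 4, 25)),
    (date(2009, 4, 23), date(2009, 4, 28)),
]

# How dated is the 'live' picture for a typical report?
lag_days = [(published - event).days for event, published in reports]
print(f"median reporting lag: {median(lag_days)} days")
```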

(Thanks to Seda Gurses for pointing out the CW article to me!)

Google Latitude: no place to hide?

I’ve just seen that Google has launched its Latitude service, which allows you (once you register and add your phone number) to be tracked by all your ‘friends’ and, correspondingly, for you to see your ‘friends’ – if they have signed up. I put the word ‘friends’ in inverted commas with some sadness, because it seems to have become increasingly meaningless in the age of Facebook, when accumulating ‘friends’ has become a competitive sport. This is not entirely irrelevant to Latitude, for reasons we will come to in a minute.

There are various questions about this.

A colleague comments that, as with many other tracking services, the way it is set up suggests you could enable tracking of someone else if you simply had access to their phone and a computer (or WAP/3G phone) at the same time. Perfect for an over-protective or suspicious parent, or a suspicious husband, wife, boyfriend or girlfriend – or anyone else for that matter.

The privacy policies are a mixture of Google’s standard (and already questionable) privacy statement and a new set of policies on ‘location privacy’, which state that:

“Google does not share an individual person’s location with third parties without explicit permission. Before someone can view your location, you must either send a location request by adding them as a friend or accept their location request and choose to share back your location.”

You can also change settings so that your location is automatically tracked, manually selected, or hidden, and if you are signed out of the service you will not appear on any map at all. You can also change settings for specific friends: hiding your location from them, sharing only the city you are in, or removing them from your Latitude list.

Now this all sounds very good, even fun – although it could be a recipe for all kinds of suspicions and jealousies – but it all depends on what ‘friendship’ means to the person using the service. Friendship no longer seems to require personal knowledge, but simply matching categories. I was writing earlier about the loss of trust in South Korea, but the reformation of trust that occurs through social networking seems not to require the dense networks of real-life interdependence that traditional forms of social trust were built on. It doesn’t seem like a substitute; the mixture of assumptions seems dangerous: a lack of genuine understanding combined with categorical friendship (analogous to categorical suspicion, the basis of profiling in policing) and technologies that, unless actively adjusted all the time for that massive number of connections, allow you to be utterly exposed, laid bare in time and space.

The most extreme examples of this personal surveillance are not in the relatively comfortable worlds that tech enthusiasts inhabit but firstly, in conflict zones – after all ‘I know where you live’ has always been one of the most terrifying and chilling expressions you can hear in such circumstances (see Nils Zurawski’s article on Northern Ireland in Surveillance & Society) and now it could be in real time; and secondly, in authoritarian, or even just paranoid countries. Here, real-time location data could be a goldmine for intelligence services, and it is not as if Google and Yahoo and others have bravely resisted the attempt of, for example, the Chinese government to suborn them to its illiberal requirements.

Now, perhaps this makes me sound very conservative. I’ve never joined a single social networking service – like, how Twentieth Century is that?! – but I am also sure that this service will be both used and abused in all kinds of ways, some that we expect and some that we don’t. It might be a tool for overprotective parents, for jealous lovers, for stalkers and even for killers; but it will also be a tool for new forms of creativity, deception, performance and play.

Or it could be just utterly pointless and no-one will bother using it at all.

(thanks to Simon for the heads-up. As it happens, Surveillance & Society currently has a call for papers out on ‘Performance, New Media and Surveillance’, to be edited by John McGrath and Bill Sweeney)

Who killed Bambi?

Google Street View seems to be the surveillance system we currently love to hate, and now those horrible, nasty people have only gone and killed a baby deer. How can Google possibly top this? Perhaps only if Larry Page were captured on camera punching some cute fluffy kittens…

The Daily What preserves the evidence – the images have now been removed from Street View

The ironic thing is that the incident was uncovered by viewing Google Street View. Don’t people have better things to do? The more you discover about participatory surveillance, or synopticism, the more you realise that the answer is probably not… which is exactly the urge (or lack of it) that Google Street View taps into: the terminal ennui of spectacular consumer capitalism. Sometimes it’s not Foucault or Kafka or Orwell we should be reading, but J.G. Ballard.