Nations stop tracking H1N1 cases

The Associated Press is reporting that many nations, in particular the USA, have changed their surveillance methods for tracking Swine Flu (H1N1) and are no longer counting confirmed cases. The justification is that the confirmed-case count was already massively underestimating the numbers affected, and that in any case it stops being useful once the disease reaches a certain proportion of the population. This may be true at the whole-population level, but the move away from counting cases means that changes in particular populations, and in areas below the subnational level, become less observable – and this is a problem if the disease is affecting some groups and places more than others. It might, for example, be crucial in deciding who receives vaccinations, and where. There is also the added complication of budget cuts to local government surveillance resulting from the recession. As with many kinds of caring surveillance, one key question is not whether the surveillance is perfectly accurate, but whether it is ‘good enough’ for the purpose for which it is intended – and in the case of diseases, this is sometimes a tricky thing to determine.
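To make the point concrete, here is a toy sketch (every figure is invented, including the hypothetical under-ascertainment multiplier) of how a national aggregate can look unremarkable while one locality is in serious trouble – exactly the sort of variation that becomes invisible once case-level counting stops:

```python
# Toy illustration (all figures invented) of how aggregate surveillance
# can hide a local outbreak once per-case counting stops.

confirmed = {"region_a": 120, "region_b": 15, "region_c": 10}   # lab-confirmed cases
population = {"region_a": 50_000, "region_b": 400_000, "region_c": 450_000}

# Hypothetical under-ascertainment: assume confirmed counts capture
# only ~1 in 20 true cases.
ASCERTAINMENT = 1 / 20

national = sum(confirmed.values()) / ASCERTAINMENT / sum(population.values())
print(f"estimated national attack rate: {national:.2%}")        # ~0.32%

for region, cases in confirmed.items():
    rate = cases / ASCERTAINMENT / population[region]
    print(f"{region}: estimated attack rate {rate:.2%}")
# region_a comes out around 4.8% – roughly fifteen times the national
# figure, and invisible in an aggregate-only count.
```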

So are there better ways of doing it? Some private companies certainly think so. As I have reported before, Google and others reckon that online disease tracking systems will be vital in the future, so much so that Google in particular has gone rather over the top in its claims about what would happen if access to the data it used for these systems were restricted…

Google: ‘give us data or you could die!’

I’ve been keeping a bit of an eye on the way that online systems are being used to map disease spread, including by Google. What I didn’t anticipate is that Google would use this as a kind of emotional blackmail to persuade governments to allow them to retain as much data as they like for as long as possible.

Arguing against the European Commission’s proposal that Google should have to delete personal data after 6 months, Larry Page claims that to do so would be “in direct conflict with being able to map pandemics” and that the shorter such data is kept, the “more likely we all are to die.”

Google talk a lot of sense sometimes – I was very impressed with their privacy counsel, Peter Fleischer, at a meeting the other week – and in many ways they are now an intimate part of the daily lives of millions of people, but this kind of overwrought emotionalism does them no favours and belies their motto, ‘don’t be evil’.

(again, thanks to Seda Gurses for finding this)

Tracking disease spread on the Internet

Internet disease tracking using interactive maps or mash-ups seems to be one of the more constructive uses of the surveillance potential that comes with the combination of easy-to-use digital mapping and online communications. Both Computer World and The Guardian tech blog reported a few days back how Google, following on from its use of search data to track previous flu epidemics, is experimenting with tracking swine flu cases in Mexico.

[Image: Google Flu Trends mapping system, showing flu trends for Mexico]
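For what it’s worth, the approach Google has described for Flu Trends boils down to a simple linear model relating the log-odds of the flu-related share of search queries to the log-odds of clinically reported influenza-like illness (ILI). A minimal sketch with invented numbers – the real system selects its queries automatically and fits against official surveillance data:

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

# Toy historical data (invented): weekly fraction of searches that are
# flu-related, and the matching clinical figure (proportion of doctor
# visits for influenza-like illness) for the same weeks.
query_frac = np.array([0.002, 0.004, 0.009, 0.015, 0.011, 0.005])
ili_prop   = np.array([0.010, 0.018, 0.035, 0.055, 0.042, 0.020])

# Fit logit(ILI) = b0 + b1 * logit(query fraction) by least squares.
x = logit(query_frac)
b1, b0 = np.polyfit(x, logit(ili_prop), 1)

# ‘Nowcast’ from this week’s query volume, days before clinical
# surveillance reports would arrive.
this_week = 0.012
estimate = 1 / (1 + np.exp(-(b0 + b1 * logit(this_week))))
print(f"estimated ILI proportion: {estimate:.2%}")
```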

However, other web-crawler-based systems also exist for tracking the spread of disease (or indeed potentially almost anything), as The Guardian reported on Wednesday. Leading the way is HealthMap, which comes complete with Twitter feeds and suchlike.

[Image: Swine Flu mapping from HealthMap.com]
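The crawler-side idea is simple enough to sketch: scan incoming report text for a disease keyword and a recognisable place name, and keep only items that yield both. A toy version follows – the keyword lists, headlines and matching logic are all invented, and HealthMap’s actual pipeline is of course far more sophisticated:

```python
# Toy sketch of the crawl-and-filter idea behind systems like HealthMap.
DISEASES = {"swine flu", "h1n1", "influenza", "cholera"}
KNOWN_PLACES = {"mexico city", "oaxaca", "london", "texas"}

def extract_event(headline: str):
    """Return (disease, place) if the headline mentions both, else None."""
    text = headline.lower()
    disease = next((d for d in DISEASES if d in text), None)
    place = next((p for p in KNOWN_PLACES if p in text), None)
    return (disease, place) if disease and place else None

headlines = [
    "Suspected swine flu cluster reported in Oaxaca",
    "Celebrity cancels tour over flu fears",      # noise: no keyword or place match
    "H1N1 closes two schools in Mexico City",
]
events = [e for h in headlines if (e := extract_event(h))]
print(events)  # [('swine flu', 'oaxaca'), ('h1n1', 'mexico city')]
```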

As the latter report makes clear, however, this is not all just good news; there are many problems with using web-crawlers to provide ‘reliable’ data, not least because the signal-to-noise ratio on the Internet is so low. The other problem is that although these systems might appear current or even ‘predictive’ by virtue of their speed and interactivity, they are of course actually always already in the past, as they are compilations of reports, many of which may already be dated before they are uploaded to the ‘net. Better real-time reporting from individuals may be possible with mobile reports, but these could lack the filter of expert medical knowledge and may lead to further degradation in the reliability of the data. Can you have both more reliability and speed / predictability with systems like this? That’s the big question…
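One crude way of handling the ‘always already in the past’ problem is to weight each crawled report by how stale its underlying information is, so that old reports count for less than fresh ones. A hypothetical sketch – the half-life constant and the dates are invented:

```python
from datetime import date

# Hypothetical recency weighting for crawled reports: a report's
# information loses half its weight every HALF_LIFE_DAYS days.
HALF_LIFE_DAYS = 3.0  # invented constant

def freshness(event_date: date, today: date) -> float:
    """Exponential decay based on the age of the underlying events."""
    age_days = (today - event_date).days
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

today = date(2009, 5, 1)
event, published = date(2009, 4, 24), date(2009, 4, 29)
lag = (published - event).days  # the report was already 5 days old when posted
print(f"reporting lag: {lag} days, weight today: {freshness(event, today):.2f}")
# -> reporting lag: 5 days, weight today: 0.20
```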

(Thanks to Seda Gurses for pointing out the CW article to me!)