Internet disease tracking using interactive maps or mash-ups seems to be one of the more constructive uses of the surveillance potential that comes with the combination of easy-to-use digital mapping and online communications. Both Computerworld and The Guardian tech blog reported a few days back how Google, building on its earlier efforts to track flu epidemics, is experimenting with tracking swine flu cases in Mexico.
However, other web-crawler-based systems also exist for tracking the spread of disease (or indeed potentially almost anything), as The Guardian reported on Wednesday. Leading the way is HealthMap, which comes complete with Twitter feeds and suchlike.
As the latter report makes clear, however, this is not all just good news; there are many problems with using web-crawlers to provide ‘reliable’ data, not least because the signal-to-noise ratio on the Internet is so low. The other problem is that although these systems might appear current or even ‘predictive’ by virtue of their speed and interactivity, they are of course actually always already in the past: they are compilations of reports, many of which may already be dated before they are uploaded to the ‘net. Better real-time reporting from individuals may be possible with mobile reports, but these could lack the filter of expert medical knowledge and may lead to further degradation of the reliability of the data. Can you have both more reliability and more speed / predictivity with systems like this? That’s the big question…
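To make the lag problem concrete, here is a minimal sketch (entirely hypothetical — the data, names, and cut-off are illustrative, not taken from HealthMap or Google) of how an aggregator might discard crawled reports whose underlying events are already too old to be actionable:

```python
from datetime import datetime, timedelta

# Hypothetical crawled reports: (source, timestamp of the event described).
# By the time a report is crawled, the event may already be days old.
reports = [
    ("news_site", datetime(2009, 4, 24, 9, 0)),
    ("health_agency", datetime(2009, 4, 27, 15, 30)),
    ("blog_post", datetime(2009, 4, 20, 12, 0)),
]

def filter_recent(reports, now, max_age_days=3):
    """Keep only reports recent enough to still describe the present;
    everything older is 'already in the past'."""
    cutoff = now - timedelta(days=max_age_days)
    return [(src, ts) for src, ts in reports if ts >= cutoff]

now = datetime(2009, 4, 28, 0, 0)
recent = filter_recent(reports, now)  # only the health_agency report survives
```

Of course, a stricter cut-off improves currency at the cost of discarding more data — which is exactly the reliability-versus-speed trade-off at issue.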
(Thanks to Seda Gurses for pointing out the CW article to me!)