Google Flu Trends promises are overstated, researchers say

New study finds way to improve Google Flu Trends accuracy threefold - but says systems must be more open. By Charles Arthur
  
  

Injections for seasonal influenza inoculations. Knowing the levels of flu ahead of time helps in planning. Photograph: David Levene

Google Flu Trends and other hopes of providing public health breakthroughs by analysing huge amounts of medical data have been overstated, according to a new study published in the American Journal of Preventive Medicine.

"If we actually began relying on the claims made by big data surveillance in public health, we would come to some peculiar conclusions, said John Ayers, a research professor at San Diego State University who was an author on the study. "Some of these conclusions may even pose serious public health harm."

The finding casts doubt on claims made by Google's chief executive Larry Page that more analysis of people's health data could save up to 100,000 lives per year. Speaking this year, Page said that excessive worries about privacy were holding back developments in the field. He has not specified how the figure for lives saved is calculated.

Ayers told the Guardian: "Big data has big value, and that includes saving lives. But to realise these gains we need better science."

However, the authors of the new study point out that even one of Google's simplest health data mining systems, Google Flu Trends, has consistently failed to provide useful forecasts of flu cases in the US. Google Flu Trends uses searches made through the site to predict forthcoming numbers of influenza cases.

In March 2014 David Lazer, a professor at Northeastern University, published a paper showing that Google Flu Trends had overestimated flu levels for 100 out of 108 weeks when compared with authoritative figures from the US Centers for Disease Control and Prevention (CDC).

However, Ayers's team showed that they could use open-source, publicly available data from Google's archive to significantly improve the accuracy of the flu prediction.

Rather than monitoring a particular group of influenza-related queries - as Google does - they monitored how the queries changed, and gave some queries more weight than others. They also built in an automatic updating system, using artificial intelligence techniques, which adjusted the weight given to each query every week, rather than the occasional manual updates that Google uses.
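The study itself is the authority on the exact method; the sketch below only illustrates the general idea of weekly re-weighting, under the assumption that it amounts to re-fitting per-query weights against recent CDC figures each week. The function names, the least-squares fit, the 52-week window and the synthetic data are all illustrative assumptions, not the researchers' actual code.

```python
# Illustrative sketch of weekly re-weighted flu prediction from search-query
# data. Names and modelling choices are assumptions, not the study's code.
import numpy as np

def fit_weekly_weights(query_changes, cdc_ili):
    """Fit per-query weights by least squares against known CDC flu rates.

    query_changes: (weeks, queries) array of week-over-week changes in
                   search volume for each tracked query.
    cdc_ili:       (weeks,) array of CDC influenza-like-illness rates.
    """
    weights, *_ = np.linalg.lstsq(query_changes, cdc_ili, rcond=None)
    return weights

def rolling_forecast(query_changes, cdc_ili, window=52):
    """Each week, re-fit the weights on the most recent `window` weeks
    (mimicking automatic weekly updating) and predict the next week."""
    predictions = []
    for t in range(window, len(cdc_ili)):
        w = fit_weekly_weights(query_changes[t - window:t], cdc_ili[t - window:t])
        predictions.append(float(query_changes[t] @ w))
    return np.array(predictions)

# Example with synthetic data: 120 weeks, 50 tracked queries.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))
y = X @ rng.normal(size=50) + rng.normal(scale=0.1, size=120)
print(rolling_forecast(X, y)[:5])
```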

They showed that this approach was more accurate than Google's model for each week of the flu seasons in both 2009 and 2012/13.

“With these tweaks, Google Flu Trends could live up to the high expectations it originally aspired to,” Ayers said.

The researchers pointed out that Google's system typically predicted far more cases than occurred. In the 2012/13 season, it predicted that 10.6% of the population had flu - compared to just 6.1% according to patient records, an overstatement of 73%. The revised model suggested 7.7% infection - a 26% overstatement.
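Those percentages are relative overstatements against the 6.1% figure from patient records, which can be checked directly from the quoted rates; the small differences from the article's rounded 73% and 26% come from working with the rounded rates themselves.

```python
# Rough check of the overstatement figures quoted above.
cdc_rate = 6.1       # % of population with flu, per patient records
predictions = [("Google Flu Trends", 10.6), ("Revised model", 7.7)]

for label, rate in predictions:
    overstatement = (rate - cdc_rate) / cdc_rate * 100
    print(f"{label}: {overstatement:.1f}% overstatement")
# Google Flu Trends: 73.8% overstatement
# Revised model: 26.2% overstatement
```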

“Big data is no substitute for good methods, and consumers need to better discern good from bad methods,” Ayers said.

Like Lazer before him, one of the new paper's co-authors, Benjamin Althouse, suggested that more clarity was needed about the models used by "big data" research to produce public health information and advice. “When dealing with big data methods, it is extremely important to make sure they are transparent and free,” Althouse said. “Reproducibility and validation are keystones of the scientific method, and they should be at the centre of the big data revolution.”

Google declined to comment specifically on the new paper, but reiterated an earlier comment on Lazer's research: "We review the Flu Trends model each year to determine how we can improve. We welcome feedback on how we can refine Flu Trends to help estimate flu levels and complement existing surveillance systems."

Ayers said that the paper's critiques were not an indictment of the promise of big data. “We certainly don’t want any single entity or investigator, let alone Google — which has been at the forefront of developing and maintaining these systems — to feel like they are unfairly the targets of our criticism,” Ayers said.

“It’s going to take the entire community recognizing and rectifying existing shortcomings. When we do, big data will certainly yield big impacts.”

• This story was updated: John Ayers was an author, not lead author, on the study.

 
