Victor Henning and William Gunn 

Impact factor: researchers should define the metrics that matter to them

The impact factor assumes that the most cited articles are the most influential, but influence is only one aspect of importance, say Victor Henning and William Gunn
Whether it's citations, bookmarks, or size of a network of collaborators, researchers should be able to sort through different indicators and decide which ones are important to them. Photograph: Getty Images/MedioImages

One of the challenges faced by research funders – both public and private – is how to maximise the amount of work being done on important problems without institutionalising any particular dogma that might suppress novel ideas. The most common arrangement is to fund good researchers but refrain from being overly prescriptive about outcomes; in turn, the way to identify good researchers has been to look at the publications that result from the research they fund.

In 1955, Eugene Garfield, the founder of the Institute for Scientific Information (now part of Thomson Reuters), introduced a means of identifying influential journals according to the number of times work appearing in them was cited. This process results in a number called the impact factor (IF), and it is built on the assumption that those whose work has been the most influential will be the most cited.

However, as anyone who has compared the Twitter following of, say, pop singer Rihanna with that of astrophysicist Neil deGrasse Tyson knows, influence is only one dimension of importance. While useful for many (pre-digital) years, the IF system, not unlike some celebrities, is not ageing gracefully. Not only has it been widely misapplied, it has also had some unintended side effects.

One of these is the widespread misuse of the IF to compare people. While some institutions now say they disallow the use of the IF in decision-making, the steep competition for a slot in a high-IF journal – among researchers who know the impact it could have on their careers – indicates otherwise. In fact, it was identified as the chief culprit behind the failings of peer review in professor David Colquhoun's Guardian article about peer review and the corruption of science. Another worrying indictment of citation-counting practices comes from a study which suggests that authors had not read 80% of the papers they cited. With research workflows moving online, shouldn't we be able to follow what's going on in academia better than we did 50 years ago?

There's a growing number of services making it possible to look beyond slow and inaccurate citation counts. Mendeley's open research database, which provides metadata and real-time readership statistics, is queried more than 100 million times per month by "altmetrics" (alternative metrics) apps such as Total-Impact.org, Readermeter.org and Altmetric.com. Last month, research institutions in North America, Europe and Asia signed up to a new Mendeley data dashboard which analyses a university's research activity and impact on the global research community in real time.

Jason Priem, a library and information science researcher at the University of North Carolina, and his colleague Heather Piwowar, from the University of British Columbia, are the creators of Total-Impact, which aggregates information from all over the web about how research is being used – beyond simple citations. Using tools like Total-Impact, responsible parties can define for themselves the appropriate metrics to support their decision-making processes, whether finding collaborators, discovering research, or making funding or hiring decisions. In an email to me about the need for altmetrics, Jason writes: "It's troublingly naive to imagine one type of metric could adequately inform evaluations across multiple disciplines, departments, career stages, and job types. We don't sign a first baseman on the speed of his fastball, nor should we evaluate every scholar, journal, or department based solely on their citation scores. Whether it's teaching, outreach, readership, provoking discussion, sharing software and data, or providing great feedback, altmetrics help us pay attention to, and reward, scholars and institutions that excel in these important areas".

This multi-dimensional approach allows any community to look at the indicators that matter to them, whether it's citations, bookmarks, or size of a researcher's network of collaborators. It enables them to use data to filter out what's valuable to them, whether it's identifying influential researchers or finding the most relevant research regardless of the prestige of the research group. It also helps them to consider dimensions of quality (as defined by each community) alongside quantitative indicators of impact or influence, such as numbers of citations.
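As a purely illustrative sketch of what "defining your own metrics" might look like in practice – the indicator names and weights below are hypothetical, not taken from any of the services mentioned – a community could combine whichever signals it cares about into its own composite score:

  # A hypothetical composite "importance" score built from whichever
  # indicators a community decides matter to it. Names and weights are
  # illustrative only, not drawn from any real evaluation scheme.
  def composite_score(indicators, weights):
      # Indicators the community has chosen not to weight count for nothing.
      return sum(weights.get(name, 0) * value for name, value in indicators.items())

  # Example: a hiring committee that values readership and reuse of software
  # more heavily than raw citation counts.
  researcher = {"citations": 120, "mendeley_readers": 850, "software_forks": 40}
  hiring_weights = {"citations": 0.2, "mendeley_readers": 0.5, "software_forks": 0.3}

  print(composite_score(researcher, hiring_weights))

A different community – a funder looking for public outreach, say – would simply plug in different indicators and weights, which is the point: the data is the same, the judgment about what matters is local.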

It's important to realise that these tools can do what they do because of a fundamental openness and interoperability: the places where articles are being written, read and published, or where presentations and software are being shared – such as PLOS, Twitter, GitHub or Mendeley – make their data available through standardised APIs, which allows librarians, researchers and funders to pick and choose the data that's most meaningful for any particular use at any particular time.
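For instance, here is a minimal Python sketch of pulling article-level indicators for a single paper from one such API. It assumes Altmetric.com's public per-DOI endpoint and the JSON field names shown below – both should be checked against the current API documentation – and is a sketch rather than a definitive integration; the other services mentioned expose their own endpoints.

  # A minimal sketch of fetching article-level metrics for one DOI.
  # Assumes Altmetric.com's public endpoint (https://api.altmetric.com/v1/doi/<doi>)
  # and the field names used below; verify both against the current API docs.
  import requests

  def article_metrics(doi):
      response = requests.get("https://api.altmetric.com/v1/doi/" + doi)
      if response.status_code == 404:
          return None  # no attention data recorded for this DOI
      response.raise_for_status()
      data = response.json()
      return {
          "tweets": data.get("cited_by_tweeters_count", 0),
          "mendeley_readers": data.get("readers", {}).get("mendeley", 0),
      }

  # Replace the placeholder with a real DOI before running.
  print(article_metrics("10.1371/journal.pone.0000000"))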

This openness and multiplicity of views goes hand in hand with open access publishing, which has now become the recommended practice of most major funding bodies, including the EC, RCUK, the US NIH, the Wellcome Trust and many others. Enabling open access journals to demonstrate their impact on academia via real-time data, years before they receive citations and thus an impact factor, helps them to recruit authors and increases the exposure of their content, further accelerating the adoption of open access business models.

Nobel laureate Paul Lauterbur famously said: "You could write the entire history of science in the last 50 years in terms of papers rejected by Science or Nature". Tempering the need to publish in high-IF journals and moving towards open access publishing models like PLOS ONE – which reviews papers on rigour and technical merit, rather than perceived significance – will give us the chance to make the next 50 years of science look very different. We can improve the quality, transparency and availability of research for all.

Victor Henning is CEO and co-founder of Mendeley. William Gunn is head of academic outreach at Mendeley

