Focus On: Citation Metrics

The number of metrics used to evaluate the impact of published research is growing, but what do they all mean?

The Biologist 64(5) p30

Citation metrics can be used to help understand the performance of journals, institutions, university departments and even individual academics. The core principle is that when a piece of research is cited by an academic, it has influenced their work and, therefore, has had an impact on the scientific community.

The validity of this assumption has been debated ever since the idea of an 'impact factor' for journals was first mooted in 1955 (see 'The impact idea', below). For one thing, citations can take many years to appear, and there are all sorts of reasons why an academic might choose to cite another person's work – including to critique it. There is also no shortage of ways to boost a low citation rate.

The search for a more sophisticated way to measure research output and journal quality has resulted in an array of new metrics that can be bewildering and difficult to compare. Here's our guide to the most commonly encountered.

Impact Factor

This measures the number of citations a journal's papers received in the past year, divided by the number of papers the journal published over the preceding two years (a five-year version is also produced). The metric is produced by Journal Citation Reports (JCR), which draws data from all papers indexed by the Web of Science literature database. Both JCR and Web of Science are owned by Clarivate Analytics, which was an arm of Thomson Reuters until it was spun off into a separate company last year.
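As a rough illustration of the arithmetic behind a two-year impact factor, here is a minimal sketch in Python – the figures are invented for demonstration, not real JCR data:

```python
# Illustrative only: invented figures, not real Journal Citation Reports data.
citations_in_2016_to_2014_15_papers = 480  # citations received in 2016
papers_published_2014_15 = 200             # citable papers published in 2014-15

impact_factor_2016 = citations_in_2016_to_2014_15_papers / papers_published_2014_15
print(impact_factor_2016)  # 2.4
```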

The company produces a variety of other metrics, such as the Immediacy Index, which counts how often papers are cited in the year they are published and so indicates how quickly research is disseminated and cited.

Eigenfactor, SJR and SNIP journal metrics

These metrics are similar to the original impact factor, but with the Eigenfactor and SCImago Journal Rank (SJR), citations from large, prestigious journals are weighted more heavily than citations from poorly ranked journals.

SNIP (Source Normalized Impact per Paper) normalises a journal's average citation count per paper against the 'citation potential' of its subject area, thereby enabling journals in different subject areas to be compared.

Both SJR and SNIP are produced from papers indexed by Scopus, the database of scientific papers owned by publishing giant Elsevier, while the Eigenfactor draws its data from the Web of Science. The Eigenfactor has recently been expanded so it can be applied to authors as well as journals.

CiteScore

CiteScore is Elsevier's newest rival to the impact factor. Launched last year, it is essentially the average citations per document that a title receives over a three-year period and is based not just on articles and reviews but also on letters, notes, editorials, conference papers and other documents indexed by Scopus.
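To illustrate how the broader denominator works – again with invented numbers, and simplifying the real Scopus methodology:

```python
# Illustrative only: invented figures, simplified from the Scopus methodology.
# CiteScore-style arithmetic counts and divides by ALL document types
# indexed over a three-year window, not just articles and reviews.
citations_in_2016_to_2013_15_documents = 600
articles_and_reviews_2013_15 = 200
other_documents_2013_15 = 100  # letters, notes, editorials, conference papers etc.

citescore_2016 = citations_in_2016_to_2013_15_documents / (
    articles_and_reviews_2013_15 + other_documents_2013_15
)
print(citescore_2016)  # 2.0
```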

For anyone who hasn't had enough metrics yet, CiteScore is in fact just the headline member of a family of eight new Scopus indicators.

H-index

This metric is meant to indicate the productivity and citation impact of a scientist or scholar.

The H-index of a researcher indicates the number of papers, H, that have been cited at least H times. For example, an H-index of 9 means that a researcher has produced nine papers that have been cited at least nine times each. (A researcher who has published six papers that were cited one, three, four, nine, 11 and 40 times respectively will have an H-index of 4.)
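For readers who like to see the calculation spelled out, here is a minimal sketch in Python following the definition above (the function name and example data are ours, not part of any official tool):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# The worked example above: six papers cited 1, 3, 4, 9, 11 and 40 times.
print(h_index([1, 3, 4, 9, 11, 40]))  # prints 4
```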

The H-index does not take into account the age of documents or citations, and calculating the figure using different databases can give very varied results.

Things to consider

Citation metrics should not be compared without careful attention to the factors that influence citation rates.
• The discipline of the paper: researchers in the social sciences and humanities take longer to cite papers and cite books more often, for example.
• The age of the paper: older papers have had more time to accrue citations than newer ones.
• Paper type: review articles are highly cited, whereas case studies are rarely cited.
• Data source: citation scores for an article are likely to be higher in an index that draws on a larger database of journals.

Google Scholar Metrics

Google's academic search service mainly uses the 'h5-index', which works like the H-index described above but is applied to journals and limited to papers published in the last five years. It also shows the h5-median, which is the median number of citations of the papers counted in a journal's (or author's) h5-index. Google's metrics are 'rolling' – that is, based on the continuously changing dataset that Google Scholar accesses.
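Continuing the sketch approach from the H-index section, an h5-style figure and h5-median can be approximated by restricting the calculation to papers from the last five years. The data below is invented; Google Scholar's real figures come from its own index.

```python
import statistics

# Invented example data: (publication_year, citations) for one journal's papers.
papers = [(2011, 60), (2013, 25), (2014, 12), (2015, 9), (2016, 7), (2017, 2)]

# Keep only papers from the last five calendar years (2013-2017 here),
# sorted by citation count, most cited first.
recent = sorted((c for year, c in papers if year >= 2013), reverse=True)

# h5: the largest h such that h of those papers have at least h citations each.
h5 = sum(1 for rank, cites in enumerate(recent, start=1) if cites >= rank)
print(h5)  # 4 for this made-up data

# h5-median: the median citation count of the papers counted in the h5 core.
print(statistics.median(recent[:h5]))  # 10.5 here
```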

DCI

The Data Citation Index allows researchers to receive credit for their contribution to data repositories and attribution when their data is used or informs other work.

Altmetrics

Altmetrics are relatively new measures of how often work has been read, cited and discussed – sometimes described as an article's 'attention score'. Altmetrics may include mentions of the work in the media, on social media or in blog posts; the reuse of datasets; or how often a paper is accessed or downloaded online. These metrics are still being refined and the methods for collecting data vary between publishers. Institutions including Harvard are experimenting with metrics provided by firms such as Mendeley, which looks at how often papers are downloaded, shared and commented on by other researchers (rather than the general public).

Live Metrics

Most major journals offer so-called 'live' (constantly updated) citation counts online from Web of Science and Scopus, as well as Altmetric scores, at the article level.

The impact idea

The development of an 'impact factor' was first mooted by the American bibliographer Eugene Garfield in an editorial for Science magazine in 1955. He went on to found the Institute for Scientific Information (ISI) and develop the dominant indexing and citation services of the 20th century, including JCR. Garfield has since said he had no idea of the influence his idea would have on academic life. "It did not occur to me that 'impact' would one day become so controversial," he wrote in an essay in 2005. "I expected it to be used constructively while recognising that in the wrong hands it might be abused."