
By Jenny Neophytou, Wiley


In a world of ever-increasing publication output, bibliometrics can help us identify and assess academic content. Usage statistics, social-media attention and other alternative metrics ('altmetrics') have received a lot of attention recently, but citation metrics remain the most prevalent.


Citation metrics

Citation metrics rest on the assumption that when an article is cited by another academic, it has had an impact on their research. From this, metrics are produced at both article and journal level.

The 2015 Journal Impact Factor, for example, measures the average number of citations received in 2015 by articles published in the journal in the previous two years (2013 and 2014): the total citations are divided by the number of citable items published in that window. Because the data are aggregated, the figure is not necessarily representative of individual articles within the journal: one article may be very highly cited while others have not been cited at all.
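
Written out, the calculation is as follows (this is the standard formulation; the worked numbers below are purely illustrative):

    \[
    \mathrm{JIF}_{2015} = \frac{\text{citations received in 2015 to items published in 2013 and 2014}}{\text{number of citable items published in 2013 and 2014}}
    \]

A hypothetical journal that published 100 citable items across 2013 and 2014, and whose items attracted 250 citations during 2015, would therefore have a 2015 Impact Factor of 2.5.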

At an article level, is it fair to assume that citations indicate either impact or quality? Perhaps not – people cite for many reasons. Citations can also be manipulated, for example through controversial words in an article title, self-citation, or deliberately aligning an article with a more highly cited discipline. Such tricks distort academic research for the sake of the metric. The fact remains, however, that citations are easily defined and measured, and tell us something about an article's uptake within the published academic community.

The Declaration on Research Assessment (DORA) recognises the need to improve the ways in which papers are evaluated, encouraging journals and institutions to consider a wider range of metrics beyond the Impact Factor. Some journals therefore publish additional figures, such as average time to first decision and readership/download numbers for both the journal and individual articles. It is worth taking a variety of these metrics into account when deciding which journal to submit to.


Factors to consider when comparing citation metrics

Subject area

Different disciplines (and sub-disciplines) have different citation behaviours and different coverage within the main citation databases. Metrics should not be compared across subjects unless these factors are accounted for.


Type of research

Review articles typically attract more citations than original research papers. Case studies are often invaluable for teaching or practical work, but tend to be less well cited in academic research. This does not mean they are of poor quality or less valuable.


Time frame

Older articles tend to have higher citation counts – not because they are 'better', but because they have had longer to accumulate citations. Metrics that fail to set a time frame are therefore unfairly weighted towards articles (and academics) that have simply been around for longer.


Citation databases

The main citation databases – Web of Science (formerly Web of Knowledge), Scopus and Google Scholar – differ vastly in size and scope. Because citations are only counted from content indexed in the database, citation counts and metrics drawn from different databases should not be compared. Sources of bibliometric data include:

  • Web of Science and Journal Citation Reports. Citation database and annual journal metrics owned by Thomson Reuters (now Clarivate). Publications are included according to a review process.

  • Scopus. Citation database owned by Elsevier. Publications are included according to a review process.

  • SCImago Journal & Country Rank. A portal publishing journal metrics (notably the SCImago Journal Rank, SJR) and aggregated country data, all derived from Scopus. The related Source Normalised Impact per Paper (SNIP), also calculated from Scopus data, is produced separately by CWTS.

  • Google Scholar. Citation database owned by Google. Coverage is automatic for all content that follows an academic format (including abstracts, theses and books). The broader scope of the database means that citation counts can appear higher in Google Scholar than in other databases. Where content comes from a recognised academic source, Google Scholar also publishes journal-level metrics based on the h5-index: a journal has an h5-index of 10 if, among the papers it published in the past five years, 10 have at least 10 citations each. The h5-index derives from the h-index, originally designed as an author-level metric (an author with an h-index of 10 has published 10 papers with at least 10 citations each; the traditional h-index sets no time frame). A short sketch of the calculation follows this list.
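
To make the h-index definition concrete, here is a minimal sketch in Python (the function name and the sample citation counts are illustrative, not taken from any bibliometric tool):

    def h_index(citation_counts):
        """Return the largest h such that h papers have at least h citations each."""
        counts = sorted(citation_counts, reverse=True)  # most-cited papers first
        h = 0
        for rank, citations in enumerate(counts, start=1):
            if citations >= rank:  # the paper at this rank still has >= rank citations
                h = rank
            else:
                break
        return h

    # Illustrative citation counts for ten papers:
    papers = [30, 14, 12, 11, 10, 10, 9, 5, 2, 0]
    print(h_index(papers))  # 7 – seven papers have at least 7 citations each

For the h5-index, the same calculation is simply restricted to items the journal published in the previous five years.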


Selecting your publishing outlet

Metrics are an essential aspect of modern academic publishing, but they need to be used appropriately and with their limitations acknowledged. Remember, too, that metrics are not the foundation of academia – academia is the foundation of metrics. Should you use metrics to decide which journal to publish in, or tailor your research in an attempt to gain high citations? My advice would be to take metrics for what they are – a valuable but imprecise tool – and to focus instead on serving the needs of the academic community.


About this guide

Publishing is a crucial, but sometimes daunting and unexplained, part of academic life. All academic geographers are expected to do it, but there are few formal guidelines on how best to do it. Many of us learn how to publish by trial and error, or through the mentoring and support of colleagues. Publishing and academic landscapes also change, presenting new challenges to established academics. The Publishing and Getting Read guides have four main aims: to provide clear, practical and constructive advice about how to publish research in a wide range of forms; to encourage you to think strategically about your publication profile and plans; to set out some of the opportunities and responsibilities you have as an author; and to support you in getting your published research read.
