

Q&A on Elsevier's CiteScore metric



Yesterday Elsevier launched the CiteScore journal metric. Ludo Waltman, responsible for the Advanced Bibliometric Methods working group at CWTS, answers some questions on the bibliometric aspects of this new metric.

CiteScore methodology

Q: Is there a need for the CiteScore metric, in addition to the SNIP and SJR metrics that are also made available by Elsevier?

A: SNIP and SJR are two relatively complex journal metrics, calculated by my own center, CWTS, and by the SCImago group in Spain, respectively. Elsevier has now introduced a simple and easy-to-understand metric, CiteScore, which it calculates itself. Having a simple journal metric in addition to the more complex SNIP and SJR metrics makes sense to me: there is room for both kinds of metrics.
 

Q: What is the main novelty of CiteScore?

A: The CiteScore metric is not as novel as Elsevier seems to suggest. CiteScore replaces the IPP (Impact per Publication) metric that used to be available in Elsevier's Scopus database. (Like the SNIP metric, IPP was calculated by CWTS.) IPP and CiteScore are quite similar. Both are calculated as the average number of citations given in a certain year to the publications that appeared in a journal in the three preceding years. This resembles the journal impact factor calculated by Clarivate Analytics (formerly Thomson Reuters), but there are some technical differences, such as the use of a three-year publication window instead of the two-year window used by the impact factor. The novelty of CiteScore relative to IPP lies in the sources (i.e., the journals and conference proceedings) and the document types (i.e., research articles, review articles, letters, editorials, corrections, news items, etc.) that are included in the calculation of the metric. CiteScore includes all sources and document types, while IPP excludes certain sources and document types. Because CiteScore includes all sources, it has the advantage of being more transparent than IPP.
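
To make the difference concrete, here is a minimal sketch of a CiteScore-style and an IPP-style computation for the 2016 metrics. The Publication record and its field names are hypothetical, not an actual Scopus data model.

```python
from dataclasses import dataclass

@dataclass
class Publication:
    year: int            # publication year
    doc_type: str        # e.g. "article", "review", "editorial", "letter"
    citations_2016: int  # citations received in the census year 2016

def citescore_2016(pubs):
    # CiteScore-style: all document types, three-year publication window.
    window = [p for p in pubs if 2013 <= p.year <= 2015]
    return sum(p.citations_2016 for p in window) / len(window) if window else 0.0

def ipp_2016(pubs, counted=("article", "review", "conference paper")):
    # IPP-style: same window, but restricted to selected document types
    # (articles, reviews, and conference papers).
    window = [p for p in pubs
              if 2013 <= p.year <= 2015 and p.doc_type in counted]
    return sum(p.citations_2016 for p in window) / len(window) if window else 0.0
```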
 

Q: Does CiteScore have advantages over the journal impact factor?

A: In the journal impact factor, the numerator includes citations to any type of publication in a journal, while the denominator includes only publications of selected document types. The impact factor is often criticized because of this 'inconsistency' between the numerator and the denominator. CiteScore includes all document types both in the citation count in the numerator and in the publication count in the denominator. Compared with the impact factor, CiteScore therefore has the advantage that the numerator and the denominator are fully consistent.
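
A small example with invented numbers illustrates the inconsistency; none of these figures refer to a real journal.

```python
# Hypothetical journal over one publication window (all numbers invented).
citations_to_articles_and_reviews = 900
citations_to_editorials_and_letters = 100
n_articles_and_reviews = 100
n_editorials_and_letters = 50

# Impact-factor-style: citations to ALL items in the numerator, but only
# 'citable items' (articles and reviews) in the denominator.
if_style = (citations_to_articles_and_reviews
            + citations_to_editorials_and_letters) / n_articles_and_reviews
# -> 10.0: the 100 citations to front matter inflate the score.

# CiteScore-style: all document types on both sides.
citescore_style = (citations_to_articles_and_reviews
                   + citations_to_editorials_and_letters) / (
                       n_articles_and_reviews + n_editorials_and_letters)
# -> 6.67: numerator and denominator are fully consistent.
```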
 

Q: What is your opinion on Elsevier’s choice to include all publications in the calculation of CiteScore, regardless of their document type?

A: Including publications of all document types in the calculation of CiteScore is problematic. It unduly disadvantages journals such as Nature, Science, the New England Journal of Medicine, and The Lancet. These journals publish many special types of publications, for instance letters, editorials, and news items. Typically these special types of publications receive relatively few citations, so in essence CiteScore penalizes journals for publishing them (see also Richard van Noorden's news article in Nature and a preliminary analysis of CiteScore available at Eigenfactor.org). There is a lot of heterogeneity between the different types of publications that may appear in a journal, and CiteScore does not account for this heterogeneity. Including only articles and reviews in the calculation of CiteScore, both in the numerator and in the denominator, would have been preferable. It would be even better if users could interactively choose which document types to include and which ones to exclude when working with the CiteScore metric.
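
Continuing the invented numbers from the sketch above, the flip side of CiteScore's consistency is exactly this penalty: a journal could raise its score simply by publishing less front matter, whereas an articles-and-reviews-only variant would be unaffected.

```python
# Same hypothetical journal as above (invented numbers).
with_front_matter = (900 + 100) / (100 + 50)   # CiteScore ~ 6.67
without_front_matter = 900 / 100               # CiteScore = 9.0 if the
                                               # editorials/letters are dropped

# An articles-and-reviews-only metric, as suggested above, gives
# 900 / 100 = 9.0 in both cases, removing the incentive.
articles_only = 900 / 100
```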
 

Q: Do you have any other concerns regarding the calculation of CiteScore?

A: Another concern I have relates to the CiteScore Percentile metric, which is derived from the CiteScore metric. It indicates how a journal ranks relative to other journals in the same field, where fields are defined according to the Scopus field definitions. In a recent study that I co-authored, the Scopus field definitions were shown to suffer from serious inaccuracies. As a consequence, the CiteScore Percentile metric inherits these inaccuracies. For instance, in the field of Law, the journal Scientometrics turns out to have a CiteScore Percentile of 94%. However, anyone familiar with this journal will agree that it has nothing to do with law. Another example is the journal Mobilization, which belongs to the field of Transportation in Scopus. Interestingly, the journal has no citation relations at all with other Transportation journals. In fact, it should have been classified in the field of Sociology and Political Science.
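
The percentile itself is a straightforward rank statistic; the problem is the field assignment it is conditioned on. A rough sketch using one common percentile-rank convention (Scopus's exact tie-handling and rounding may differ):

```python
def citescore_percentile(journal_score, field_scores):
    # Share of journals in the (Scopus-assigned) field with a lower
    # CiteScore, expressed as a percentage. If the field assignment is
    # wrong, as for Scientometrics in Law, the comparison set is wrong too.
    below = sum(1 for s in field_scores if s < journal_score)
    return 100.0 * below / len(field_scores)
```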
 

Q: Are there further opportunities for developing improved journal metrics?

A: I do not think there is a strong need for additional journal metrics. However, Elsevier, and also Clarivate Analytics, the producer of the journal impact factor, could be more innovative in the way they make journal statistics available to the scientific community. For instance, in addition to a series of journal metrics, Elsevier could make available the underlying citation distributions (as suggested in a recent paper). This would provide a more in-depth perspective on the citation impact of journals. Perhaps most importantly, Elsevier could offer more flexibility to users of journal metrics: let users of CiteScore choose which document types to include or exclude and whether or not to count journal self-citations, let them set their own preferred publication window instead of the fixed three-year window, and let them specify how citations are aggregated from the level of individual publications to the journal level (e.g., the median or some other percentile-based statistic instead of the mean). A sketch of such a configurable metric is given below.
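
The following is a rough sketch of what such a configurable metric could look like; the record fields and defaults are hypothetical, not an existing Elsevier API.

```python
from statistics import mean, median

def journal_metric(pubs, census_year, window_years=3, doc_types=None,
                   include_self_citations=True, aggregate=mean):
    # pubs: records with .year, .doc_type, .citations and .self_citations
    # counts for the census year (hypothetical schema).
    start = census_year - window_years
    selected = [p for p in pubs
                if start <= p.year < census_year
                and (doc_types is None or p.doc_type in doc_types)]
    if not selected:
        return 0.0
    counts = [p.citations - (0 if include_self_citations else p.self_citations)
              for p in selected]
    return aggregate(counts)

# A CiteScore-like call versus a stricter, median-based variant:
# journal_metric(pubs, 2016)
# journal_metric(pubs, 2016, doc_types={"article", "review"},
#                include_self_citations=False, aggregate=median)
```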
 

Q: There is increasingly strong criticism of the use of journal metrics. What is your perspective on this?

A: The journal impact factor nowadays plays too dominant a role in the evaluation of scientific research, with important undesirable consequences. For instance, it seems to encourage questionable editorial practices and rising journal self-citation rates. However, I do not agree with those who argue that the use of the impact factor and other journal metrics in research evaluation is fundamentally wrong. In certain situations, journal metrics can provide helpful information in assessing scientific research. A deeper discussion of these issues is provided in earlier posts on this blog by myself and by Sarah de Rijcke.
 


About Ludo Waltman

Ludo Waltman is professor of Quantitative Science Studies and scientific director at the Centre for Science and Technology Studies (CWTS) at Leiden University. He is a coordinator of the Information & Openness focal area and a member of the Evaluation & Culture focal area. Ludo is co-chair of the Research on Research Institute (RoRI).


Comments

  • hussein ali February 3rd, 2017 3:23 pm
    Dear Ludo,
    How can I find out whether a journal or conference is Q1 or Q2 or ...?
    Best wishes
  • Phil Davis December 14th, 2016 4:04 pm
    Dear Ludo,
    According to Wim Meester, Head of Product Management for Content Strategy at Elsevier, Scopus will no longer be calculating and reporting Impact per Publication (IPP), although IPP will still be reported in your journal performance report.
    Can you outline how your group defines the denominator of the IPP? Does it rely on document classification by the publisher, by Scopus, or do you determine your own classification? If the latter, is there a description of the algorithm or rubric you use?
    Thank you again.
    • Ludo Waltman December 14th, 2016 9:45 pm
      Phil, a brief explanation of the calculation of IPP is provided at http://journalindicators.com/methodology. We rely on the Scopus document type classification. Only articles, reviews, and conference papers are taken into account. Please let me know if you need more detailed information.
  • Phil Davis December 9th, 2016 4:10 pm
    Classification of documents by type (article, review, editorial, among others) can become problematic. For example, is a 5-page commentary with 20 references an article ("citable item") or an editorial ("non-citable item")? As I've written [1], this distinction can be arbitrary and lead to bias when similar documents are classified differently across journals.
    Scopus appears to have given up on the IPP, making me wonder whether they realized that defining what goes into the denominator is an insurmountable problem for them. As a solution, they have decided to abandon document classification altogether, or at least when it comes to metrics.
    [1] Citable Items: The Contested Impact Factor Denominator. https://scholarlykitchen.sspnet.org/2016/02/10/citable-items-the-contested-impact-factor-denominator/
    • Bernhard Mittermaier December 12th, 2016 10:52 am
      Phil, do we agree that the way the impact factor handles this question is the least favourable one?
      And do we agree that CiteScore is (generally) an improvement in this regard?
      Having said that, I would like to see the CiteScore metric constructed in the way Ludo Waltman has suggested, provided that the attribution of document types to citable and non-citable items is subject to a joint decision between the publisher and Scopus/Elsevier, valid for all documents of a kind in a given journal.