
What do we know about journal citation cartels? A call for information



Systematic knowledge on citation cartels of scientific journals is surprisingly scarce. We have little knowledge about the phenomenon itself, about how often cartels occur, and about where to draw the line between acceptable and unacceptable between-journal citation behavior. With this blogpost, we wish to open up the floor for further scrutiny of journal citation cartels. Our aim is to arrive at a better conceptual and empirical understanding of the phenomenon – and we would like your help with this.

In its latest Journal Citation Reports, published last week, Thomson Reuters suppressed the citation statistics of 18 journals that display ‘anomalous citation patterns’. As explained here, 16 journals were suppressed because of abnormal numbers of self-citations, and two were removed because of citation stacking.

Unlike journal self-citations, citation stacking implies a connection between at least two journals. It refers to the phenomenon of one journal giving a very large number of citations to recent articles in another journal. Citation stacking does not always need to indicate the existence of a journal citation cartel. In principle, a journal can have legitimate reasons for giving a large number of citations to another journal. We therefore reserve the term ‘citation cartel’ for situations in which an individual or group of individuals affiliated with different journals (as authors, editors, et cetera) act with the intent to influence citation statistics of one or more of these journals.

Last year Thomson Reuters suppressed the citation statistics of ten journals because of citation stacking, and as already mentioned, this year the statistics of two journals (Applied Clinical Informatics and Methods of Information in Medicine) were suppressed. In 2013, Thomson Reuters’ algorithm for identifying citation stacking led to the detection of a Brazilian citation cartel. Earlier, a citation cartel involving a number of medical journals was revealed by Phil Davis, and recently a case of citation stacking by three Romanian physics journals was identified by Petr Heneberg.

Apart from the above-mentioned cases, little seems to be known about citation stacking and the existence of journal citation cartels. For this reason, we recently decided to perform a systematic scan of papers in the Web of Science database in order to identify articles that may play a role in journal citation cartels. Our analysis is still ongoing, but we have already found a few cases that we would like to share. These cases are summarized in the table below.

         Journals                                      Papers
Case 1   Asia Pacific Journal of Tourism Research      Leung, Au, & Law (2015)
         Journal of Travel & Tourism Marketing         Hoc Nang Fong, Au, & Law (2015)
Case 2   Methods of Information in Medicine            Lehmann & Haux (2014)
         Applied Clinical Informatics                  Lehmann & Gundlapalli (2015)
                                                       Haux & Lehmann (2014)
Case 3   Acta Physico-Chimica                          Li (2011)
         Chinese Journal of Catalysis                  Ma (2011)
Case 4   Archives of Toxicology                        Stewart (2012)
         EXCLI Journal                                 Marchan (2011)
                                                       Stewart & Marchan (2012)
                                                       Cadenas, Marchan, et al. (2012)

Let us look more closely at the first two cases listed above. In the first case, the article published in the Asia Pacific Journal of Tourism Research contains 161 references (out of 172) to the Journal of Travel & Tourism Marketing, which also published an article with 130 references (out of 161) to the Asia Pacific Journal of Tourism Research. Both papers are co-authored by Rob Law, the managing editor of one of the journals and an editorial board member of the other.

The second case involves a set of three papers, two of them published in Methods of Information in Medicine and the third in Applied Clinical Informatics. This case also involves editorial board members. More precisely, Applied Clinical Informatics has Christoph Ulrich Lehmann as editor-in-chief and Reinhold Haux and Adi V. Gundlapalli as editorial board members. Lehmann and Haux also serve on the editorial board of Methods of Information in Medicine, as editorial board member and senior consulting editor, respectively. We note that the journals involved in this second case have been suppressed by Thomson Reuters in this year’s Journal Citation Reports because of citation stacking. This indicates a certain convergence between Thomson Reuters’ method for detecting abnormal citation patterns and our own.

In each of the above cases, there is a reciprocal relationship between a pair of journals, with one or more articles in one journal citing the other journal heavily and the other way around. Many more examples can be given of articles in one journal that cite a specific other journal heavily, but our preliminary analysis suggests that reciprocal relationships similar to the ones listed above are relatively rare.
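To make this screening idea concrete, here is a minimal sketch in Python of how articles that concentrate their references on a single other journal could be flagged, and how reciprocal journal pairs could then be extracted. The records and thresholds are hypothetical toy assumptions for illustration; this is not the actual pipeline we ran on Web of Science.

```python
from collections import Counter

# Hypothetical toy records; in practice these would come from a
# bibliographic database. The counts mirror case 1 above:
# 161 of 172 references in one direction, 130 of 161 in the other.
articles = [
    {"id": "a1", "journal": "J_A", "refs": ["J_B"] * 161 + ["J_X"] * 11},
    {"id": "a2", "journal": "J_B", "refs": ["J_A"] * 130 + ["J_Y"] * 31},
    {"id": "a3", "journal": "J_C", "refs": ["J_A", "J_B", "J_C"] * 10},
]

def flag_stacking_candidates(articles, min_refs=50, min_share=0.5):
    """Flag articles that direct a large share of their references
    to a single other journal."""
    flagged = []
    for art in articles:
        counts = Counter(art["refs"])
        counts.pop(art["journal"], None)  # ignore journal self-citations
        if not counts:
            continue
        target, n = counts.most_common(1)[0]
        total = len(art["refs"])
        if total >= min_refs and n / total >= min_share:
            flagged.append((art["journal"], target, art["id"]))
    return flagged

def reciprocal_pairs(flagged):
    """Keep only journal pairs that are flagged in both directions."""
    directed = {(src, tgt) for src, tgt, _ in flagged}
    return {frozenset(p) for p in directed if (p[1], p[0]) in directed}

flagged = flag_stacking_candidates(articles)
print(reciprocal_pairs(flagged))  # the single reciprocal pair J_A <-> J_B
```

The thresholds `min_refs` and `min_share` are illustrative assumptions; any real screening would need calibration and manual inspection, since a high concentration of references on one journal can be perfectly legitimate.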

Trying to conclude whether articles have been published with the specific intent to increase the citation statistics of a cited journal, and in particular the journal’s impact factor, is perhaps a slippery slope. And even if the idea of increasing a journal’s impact factor has played a role in the publication of an article, one could still debate how this should be judged from an ethical perspective. For the moment, we would like to get a better empirical understanding of citation stacking and the related phenomenon of citation cartels. A large-scale analysis of citation data is one approach to obtaining such an understanding, but more subtle ways in which citation cartels could potentially operate may go unnoticed in such an analysis. We would therefore also like to benefit from the knowledge and experiences of the readers of this blog post. Sharing your knowledge of citation cartels may help us obtain a more refined picture of the phenomenon, and it may support us in improving our algorithms for detecting suspicious citation behavior.

If you have any information on citation cartels, we would be very grateful if you could share this information with us. This can be done by responding to this blog post or, if you prefer to share your information privately, by contacting us by e-mail. We will collect and analyze all information we receive, and we plan to publish the results either in a follow-up blog post or as part of a scientific paper.

 

Postscript (June 24th, 2016): It has been suggested to us that it may have been more appropriate to anonymize the cases discussed in this blog post. We have chosen not to anonymize the cases because we feel that discussions on citation stacking and citation cartels benefit from everyone having full access to all relevant information. However, we very much welcome a discussion on the pros and cons of anonymization. We would like to emphasize that we have tried not to make any assumptions about authors’ intent in this blog post. We have contacted the authors of the papers mentioned above to inform them about the publication of the blog post, and we have invited them to share their views by responding to the blog post.

 

Postscript (July 19th, 2016): The title of this blog post has been changed from ‘What do we know about journal citation cartels? Some evidence and a call for information’ into ‘What do we know about journal citation cartels? A call for information’. The original title gave the incorrect impression that our blog post provides evidence of journal citation cartels. The evidence that we provide is about citation stacking, not necessarily about citation cartels.


About Philippe Mongeon

PhD Candidate at the École de bibliothéconomie et des sciences de l'information, Université de Montréal, and member of the Canada Research Chair on the Transformations of Scholarly Communication. Philippe's research interests include authorship and scholarly communication practices, the reward system of science, and the ecosystem of scholarly publishing. More information is available here.

About Ludo Waltman

Senior researcher and deputy director of CWTS. Ludo leads the Quantitative Science Studies (QSS) research group. His core research interests focus on the analysis and visualization of bibliometric networks and the development of scientometric indicators.

About Sarah de Rijcke

Associate professor and deputy director of CWTS, and coordinator of the Science and Evaluation Studies research group. Her research focuses on the growing use of assessment procedures and bibliometric indicators in scientific and scholarly research, and the effects on knowledge production.


9 comments

  • Biologist November 23rd, 2016 5:21 pm
    An interesting case of a citation cartel between two journal editors: Domenico Otranto (Parasites & Vectors, Veterinary Parasitology) and Felipe Dantas-Torres (Parasites & Vectors, International Journal for Parasitology: Parasites and Wildlife). Usually a pair of journals is considered in cartel analysis. In this case, a pair of researchers who are each editors of at least two journals creates an even more efficient and, paradoxically, more difficult-to-detect cartel. In 2016 Domenico Otranto has been publishing a paper every 7 days on average (as of 23.11.2016). Most of the articles of both editors (they have published hundreds of articles) appear in journals in which they are editors. By boosting their own h-index they also boost the impact factor of their journals: they are prone to self-citations and to citing each other in articles in all four journals of which they are editors. I believe that a measure of articles published per week should be included in the detection of scientists creating citation cartels. Publishing a paper every week while serving as an editor and a lecturer at the same time should be an alarming phenomenon. Based on the current activities of both editors, a sudden increase in the citation numbers of papers from International Journal for Parasitology: Parasites and Wildlife should be expected.
    Reply
    • Philippe Mongeon November 29th, 2016 5:40 am
      Thank you for your comments and suggestions. It might indeed be worth considering the number of publications by a single individual in order to identify citation cartels. Of course, this would need to be considered in combination with other elements, otherwise we would systematically and unfairly suspect highly productive researchers of attempting to manipulate the IF of some journal, their H-index, or some other indicator.
      Reply
  • medical informatician July 6th, 2016 2:45 am
    While some commentators here have focused on smearing open access journals, they should recognize that they have been barking up the wrong tree.
    Three (toll-access = subscription) journals from the Thomson Reuters Medical Informatics category (Applied Clinical Informatics [ACI], Methods of Information in Medicine [MIM], Journal of Medical Systems) were de-listed by Thomson Reuters this year, and a 4th one (JAMIA) also has a proven history of self-citing. ACI and MIM are bottom-feeder toll-access journals with consistently low impact factors, backed and promoted by scientific societies, namely AMIA and IMIA (American Medical Informatics Association / International Medical Informatics Association). Methods of Information in Medicine (MIM) was edited until last year by Reinhold Haux, who was also president of IMIA (he resigned as editor, and MIM is now edited by Koch, who also participated in the latest impact factor-manipulating publication). Christoph Lehmann (editor of ACI) also held various positions within IMIA and AMIA.
    On the Schattauer website (http://aci.schattauer.de/about/impact-factor-2015.html) the editors of two of these journals (Lehmann for ACI, Haux/Koch for MIM) claim that the impact factor manipulation was “unintentional”. I do believe that being delisted from the JCR was unintentional, but they knew very well what they were doing and why they were doing it, and they even celebrated their higher impact factor, recognizing their self-citation orgies as a source of the higher IF (see below). The scandal is also much broader than just the 2 papers they cite; it goes back to 2014 and includes at least 4 questionable papers.
    Applied Clinical Informatics was created in 2009 as a sister journal of Methods of Information in Medicine (MIM). Both are published by a toll-access publisher (Schattauer). The publisher of Schattauer (D. Bergemann) is a close friend of MIM editor Haux (as Haux himself wrote in his farewell editorial); as an aside, and as further evidence of cronyism, IMIA has never issued any RFP seeking proposals from other publishers for publication of its journals. In 2013, ACI received its first impact (or should we say “non-impact”) factor of 0.386, calculated as a mere 32 citations for the 83 citable articles published in 2011-2012. In other words, nobody cared about this journal, let alone cited it.
    In November 2014, ACI editor Lehmann and MIM editor Haux colluded to publish an awkwardly entitled paper “From Bench to Bed: Bridging from Informatics Theory to Practice” in Methods of Inf Med (http://www.ncbi.nlm.nih.gov/pubmed/25377761), whose primary intention seems to have been to cite ACI papers, and a second, similarly titled paper in ACI, “From bed to bench: bridging from informatics practice to theory: an exploratory analysis” (http://www.ncbi.nlm.nih.gov/pubmed/25589906), intended to return the favor by citing mainly MIM papers. The stated objective of the papers was “To explore, after five years, which congruencies and interdependencies exist in publications of these journals [MIM, ACI] and to determine if gaps exist. To achieve this goal, major topics discussed in ACI and in MIM had to be analysed.” The 3-page MIM paper contained 85 references, 24 of them to ACI articles published in 2012-2013, which are the years relevant for the 2014 impact factor, and almost all of the remaining references to Methods Inf Med 2012-2013.
    The equally questionable ACI paper (this time, Haux was the first author and Lehmann the second), entitled “From Bed to Bench: Bridging from Informatics Practice to Theory” (Haux R, Lehmann CU. From bed to bench: Bridging from informatics practice to theory – an exploratory analysis. Appl Clin Inf 2014; 5: 907–915, http://dx.doi.org/10.4338/ACI-2014-10-RA-0095), was another haphazardly put together paper with questionable or non-existent peer review (“received Oct 17, accepted Oct 22, published Oct 29”), and with the familiar-sounding objective “to explore which congruencies and interdependencies exist in publications from theory to practice and from practice to theory and to determine existing gaps. Major topics discussed in ACI and MIM were analyzed.” Once again, 50 Methods Inf Med papers from 2012-2013 were cited, plus 6 ACI papers.
    So in total, these two papers doubled the sparse citation count of ACI papers, and the two “papers” had the desired effect: the impact factor (2014) of ACI climbed to 1.61, and the impact factor of Meth Inf Med rose from 1.083 to 2.248. MIM editor Haux celebrated this “achievement” with a self-congratulatory editorial entitled “Is Methods of Information in Medicine Now a Better Journal Than in the Years Before?” (http://www.ncbi.nlm.nih.gov/pubmed/26180904) [to his credit, he answered this truthfully with “no”], even recognizing the relative contribution of his self-citation orgies by writing that “articles analysing publications like the ones mentioned on translational activities will usually more and considerably contribute to impact factors than other articles”.
    Because of the “trick” of hiding journal self-citations by citing most Methods papers in ACI, and most ACI papers in MIM (aka citation stacking), the citation cartel at first remained undetected, and Thomson Reuters did not flag the journals for exclusion in 2014.
    However, because the trick worked so well, the editors repeated their manipulation attempt in 2015 (and this time Thomson Reuters caught them and delisted both journals).
    The first paper was published on Nov 18, 2015 by ACI editor Lehmann in MIM entitled “Improving Bridging from Informatics Practice to Theory” (http://www.ncbi.nlm.nih.gov/pubmed/26577504), where again he “set out to explore congruencies and interdependencies in publications of ACI and MIM”, and where he “conducted a retrospective observational study and reviewed all articles published in ACI during the calendar year 2014 (Volume 5) for their main theme, conclusions, and key words.”, citing all 73 ACI articles in this MIM paper (plus 4 MIM papers). This measure alone would have inflated the impact factor by 0.635 points (73/114 citable items).
    The favor was returned by an article published on Dec 23, 2015 (note that this is only 1 week before the 2015 IF “deadline”) in ACI, with outgoing MIM editor Haux and incoming MIM editor Koch as authors (“Improving Bridging from Informatics Theory to Practice”, http://www.ncbi.nlm.nih.gov/pubmed/26767067). Once again, authors “conducted a retrospective, observational study reviewing MIM articles published during 2014 (N=61) and analyzing reference lists of ACI articles from 2014”, citing 35 MIM papers from the IF-relevant years 2013-2014, plus 13 ACI articles from 2014, inflating the ACI IF further by 0.754 points (86/114).
    This time, Thomson Reuters acted, seeing that ACI was a journal that hardly received any “organic” citations from other journals: 37% of its citations actually came from MIM.
    As an aside, the mutual citation orgies were so poorly framed as “research” that they triggered a letter to the editor (https://www.researchgate.net/publication/273617743_Insufficient_Evidence_for_Changing_Editorial_Policy_Letter_to_the_Editor), which pointed out that “the adopted methodology does not seem appropriate to answer the research question [to evaluate if theoretical papers published in MIM influence research in ACI]” and that “without a doubt, Lehmann & Haux are well aware that there is not a parallel but a sequential relationship”; thus the methodology of analyzing citations from one journal to the other was clearly scientific garbage.
    As ACI editor Lehmann is also in charge of a third IMIA-sponsored informatics-related publication at Schattauer, the “Yearbook of Medical Informatics”, which intends to identify the best papers in the field and to publish extensive reviews of current trends, it is unclear how his conflict of interest (being also editor or EB member of two other IMIA journals) is managed, if at all. Somebody should investigate whether the “Yearbook of Medical Informatics” has in the past couple of years cited medical informatics journals equally (“equally” adjusted by impact factor, because the more highly cited papers/journals should be represented more often in a yearbook that claims to summarize the best work) or (as I suspect) avoided citing certain medical informatics journals and preferentially cited others (e.g. society-backed journals).
    Somebody should also look closer at their second official journal, JAMIA, edited by Lucilla Ohno-Machado. She has engaged in similar self-citation orgies in which she claims to investigate “trends in medical informatics” while citing long lists of references from her own journal. In Dec 2011 she published one paper on trends in informatics (http://jamia.oxfordjournals.org/content/18/Supplement_1/i166) citing a total of 63 articles, 48 of them JAMIA articles from 2009-2010 (the years relevant for the 2011 JIF calculation). Another “recent trends” paper from Nov 2013 cites 92 references, almost all from JAMIA, among them 51 JAMIA articles from 2011-2012.
    It will be interesting to see whether and how the AMIA or IMIA executive board will react to this obvious scientific misconduct by their editors to rig the impact factor.
    Reply
  • Ivan Sterligov June 27th, 2016 6:18 pm
    There are also lots of cartels, stacking, boosting, and other grey-zone behavior to be found in the Russian Science Citation Index (elibrary.ru).
    Really, tons of cases are waiting for empirical research, sadly only in Russian.
    For example, one of the leading publishers has developed a sophisticated tool which automatically suggests articles from its journals to be added to the reference lists of submitted articles (which is de facto mandatory for a paper to be accepted).
    This in turn has led to the development of indices aimed at detecting such behaviour (the Herfindahl index and its variants).
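    A minimal sketch of such a concentration index (my own illustration, not the publisher's actual tool): the Herfindahl index of a reference list is the sum of squared journal shares, so it is 1.0 when every reference points to the same journal and approaches 0 when references are spread over many journals.

```python
from collections import Counter

def herfindahl(ref_journals):
    """Herfindahl index of a reference list's journal distribution:
    the sum of squared journal shares."""
    counts = Counter(ref_journals)
    total = sum(counts.values())
    return sum((n / total) ** 2 for n in counts.values())

print(herfindahl(["J_A"] * 10))                  # 1.0 (maximal concentration)
print(herfindahl(["J_A", "J_B", "J_C", "J_D"]))  # 0.25 (evenly spread)
```

    An unusually high index for references added at a journal's request would be one signal of coerced or suggested citation.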
    Reply
  • Ivan Sterligov June 27th, 2016 6:02 pm
    Some examples I'm aware of:
    Laser Physics + Laser Physics Letters. A well-known case in the laser community: cross-citations led to a huge IF boost, mainly for LPL. After they abandoned this practice, the IF dropped: https://s31.postimg.org/uwox7im1n/lp_lpl.jpg
    The Baltic case (http://lms.lt/archyvas/files/active/0/47-196-1-PB.pdf), where abnormal citation patterns are suspected to continue in these journals:
    TECHNOLOGICAL AND ECONOMIC DEVELOPMENT OF ECONOMY
    INZINERINE EKONOMIKA ENGINEERING ECONOMICS
    JOURNAL OF BUSINESS ECONOMICS AND MANAGEMENT
    JOURNAL OF CIVIL ENGINEERING AND MANAGEMENT
    Reply
  • Jeffrey Beall June 24th, 2016 5:35 pm
    Thank you for opening up the floor to discuss this topic. I rise to discuss it in terms of predatory publishers and research misconduct.
    I think there is a second type of 'citation cartel,' and it involves either individual researchers or groups (two or more) of researchers working together to increase their citation counts and other metrics. I've seen this in action in the library science field.
    First, individuals can increase their citation counts and h-indexes using the easy acceptance that predatory journals offer. They simply find easy-acceptance journals with low editorial standards (most of which are predatory, open-access journals) and they quickly write up and publish articles in these journals. The articles include citations to the researcher's earlier published works, as many as the author can get away with.
    In doing this, researchers select journals that are indexed in Google Scholar, WoS, Scopus, JCR, or any other prescribed whitelist. Typically, whitelists include thousands of journals, and among these thousands there are usually a half-dozen or so that have "gone bad" and seek only to increase their revenue using the gold open-access model, accepting as many papers as possible. Article brokers specialize in finding such journals, and knowledge of them quickly spreads.
    Google Scholar calculates, among other measures, a researcher's raw number of citations and his or her h-index. Using the above described method, researchers can easily increase these measures by finding and exploiting predatory journals indexed in Google Scholar, a common occurrence.
    Because its metrics can be so easily manipulated, Google Scholar should not be used in academic evaluation.
    Second, there are groups of researchers who collude to cite each other to increase their citation counts and their metrics, both traditional and altmetrics. I have seen this in practice in my field. Individuals belonging to a group of researchers collude to cite the other members of the group in the articles they publish, collectively increasing the citation counts and metrics of the group's members. This is also a type of citation cartel. The group may also collude to refrain from citing certain researchers outside their clique.
    In an extreme case of researcher-level citation cartels, I've seen a researcher actually ghostwrite papers for others, citing herself liberally in the papers she writes. The persons benefiting from the ghost authorship were poor writers, and they were happy to have their colleague write the articles for them. The person writing the papers was able to churn out papers quickly and easily and was happy to include citations to her own work in papers published under others' names.
    The people taking credit for the papers benefitted by getting papers written for them for which they took academic credit. The ghostwriter benefitted by increasing her citation count and metrics.
    Reply
    • Ludo Waltman June 24th, 2016 7:17 pm
      Thank you Jeffrey for this contribution to the discussion. Although our focus is on journal citation cartels, I fully agree with you that the study of citation cartels at the level of individual researchers is another very important topic. At the individual researcher level, it is even more difficult to distinguish between proper citation practices and citations given in an attempt to influence citation statistics. I would like to mention that three years ago a blog post by Paul Wouters was published in which he discusses a case of an apparent citation cartel at the level of individual researchers: https://citationculture.wordpress.com/2013/07/04/may-university-rankings-help-uncover-problematic-or-fraudulent-research/.
      Reply
  • Phil Davis June 24th, 2016 1:35 pm
    There is no way to prove intention; however, two or more editors systematically citing each other's journal articles within a limited time frame (i.e. the prior two years) does strongly imply intent. Likewise, editors who recommend that submitting authors reference more papers from their own journal before acceptance suggest a similar intent. Thomson Reuters was deliberate in using the phrase "citation stacking", as it is limited to just the pattern of reciprocal citation, although "citation cartel" is more appropriate when referring to the intentions and behavior that lead to the pattern.
    Reply
    • Sarah de Rijcke June 27th, 2016 2:09 pm
      Dear Phil, thank you for your comment, which triggers a couple of conceptually interesting points. Both your definitions of citation stacking and of citation cartels seem to presuppose reciprocity, while we chose a slightly broader definition of citation stacking (though we talked about a connection between two or more journals, we did not consider reciprocity to be a requirement). I also infer from your comment that, for you, a citation cartel presupposes both intent and a pattern. This makes a lot of sense. The examples we found do not reveal a recurring behavior, which is something that I think we should take into account as we continue to work on this analysis.
      Reply