Industrial disease

It’s almost four years since Cabells launched its Predatory Reports database, but the battle to overcome predatory journals shows no signs of abating. As a result, Cabells is constantly developing new ways to support authors and their institutions in dealing with the problem, and this week Simon Linacre reports from the virtual SSP Annual Meeting on a new collaboration with Edifix from Inera, which helps identify articles and authors published in predatory journals.

A common retort heard or read on social media whenever predatory journals are discussed goes something like this: “Is there really any harm done?”, “Some research is only good enough for those kinds of journals,” or “Everyone knows those journals are fake.” For the latter two rejoinders there is some justification, and if recent global events have taught us anything it is that we need a sense of proportion when dealing with scientific breakthroughs and analysis. But the first point really doesn’t hold water because, when you think it through, a good deal of harm is done to a number of different stakeholders as a result of a single article appearing in a predatory journal.

Predatory journals do researchers and their institutions a huge disservice by claiming to be a reputable outlet for publication. Legitimate journals provide valuable services to both promote and protect authors’ work, which simply doesn’t happen with predatory journals. Essentially, there are three key reasons why authors and their employers can suffer harm from publishing in the wrong journals:

  • Their work may be subject to sub-par peer review, or more likely no peer review at all. The peer review system isn’t perfect, but papers that undergo peer review are better for it. Researchers want to make sure they are publishing in a place that values their work and is willing to devote time and resources to improving it.
  • Versions of record could disappear. One of the advantages of publishing with a reputable journal is that they make commitments to preserve authors’ work. Opportunists looking to make a quick buck are not going to care if your paper is still available in five years – or even five weeks.
  • Published articles will be hard to find. Some predatory journals falsely advertise that they are included in well-known databases like Web of Science, Scopus, or Cabells. Predatory journals invest nothing in search engine optimization or in getting their titles indexed by research databases, so the research they publish won’t be easily discoverable.

So, it is in the interests of authors, universities, societies, funders and society itself that research is not lost to predatory publishing activities. Checking against a database such as Predatory Reports will help those stakeholders, but to augment their capabilities Cabells is collaborating with Atypon’s Inera division, and specifically its Edifix product, to help prevent ‘citation contamination’. This is where illegitimate articles published in predatory journals find their way into the research bloodstream by being referenced in legitimate journals. With Edifix, users can now vet bibliographic reference lists for citations to predatory journals, as identified by Predatory Reports.
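In spirit, this kind of reference vetting amounts to matching each citation’s journal title against a list of flagged titles. The sketch below is purely illustrative – the function, data shapes, and journal names are invented here and are not the actual Edifix or Predatory Reports implementation:

```python
# Hypothetical sketch of 'citation contamination' screening: flag references
# whose journal title appears in a list of known predatory journal titles.
# Real-world matching would need fuzzy title comparison, ISSN lookup, etc.

def flag_contaminated(references, predatory_titles):
    """Return the references whose 'journal' field matches a flagged title."""
    flagged = {title.strip().lower() for title in predatory_titles}
    return [
        ref for ref in references
        if ref.get("journal", "").strip().lower() in flagged
    ]

# Example reference list (fictitious journals)
refs = [
    {"authors": "Smith, J.", "journal": "Journal of Advanced Results"},
    {"authors": "Lee, K.", "journal": "International Review of Studies"},
]
predatory = ["Journal of Advanced Results"]

print(flag_contaminated(refs, predatory))
# Only the Smith reference is flagged
```

A production tool would, of course, work from a curated and continuously updated database rather than a hard-coded list, which is precisely the gap the Predatory Reports integration is meant to fill.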

This new Edifix web service with the automated Cabells Reference Checking Tool was showcased at SSP’s Annual Meeting (meeting registration required) this week (and previewed in an SSP sponsored session in October 2020) with a host of other new innovations, collaborations and product developments from the scholarly communications industry. While it would have been great to see old friends and colleagues in person at the event, the virtual format enabled much wider, international engagement which contributed to an undoubtedly successful event.

Book review – Gaming the Metrics: Misconduct and Manipulation in Academic Research

The issues of gaming metrics and predatory publishing undoubtedly go hand-in-hand, outputs from the same system that requires academic researchers the world over to sing for their supper in some form or other. However, the two practices are often treated separately, almost as if there were no link at all, so editors Biagioli and Lippman are to be congratulated on bringing them together under the same roof in the shape of their book Gaming the Metrics: Misconduct and Manipulation in Academic Research (MIT Press, 2020).

The book is a collection of chapters that cover the whole gamut of wrongheaded – or just plain wrong – publication decisions made by authors the world over on where to publish the fruits of their research. This ‘submission decision’ is unenviable, as it inevitably shapes academic careers to a greater or lesser degree. The main reason why authors make poor decisions is laid firmly at the door of the variety of ‘publish or perish’ systems which seek to quantify the outputs from authors with a view to… well, the reason why outputs are quantified is never really explained. However, the reason why such quantification should be a non-starter is well argued by Michael Power in Chapter 3, as well as by Barbara M. Kehm (Ch. 6) in terms of the ever-popular university rankings. Even peer review comes under attack from Paul Wouters (Ch. 4), but as with the other areas any solutions are either absent or, in the case of Wouters, proffered with minimal detail or real-world context.

Once into the book, any author would quickly realize that their decision to publish is fraught with difficulty, with predatory publishers lurking on the internet to entice articles and APCs from them. As such, any would-be author would be well advised to heed the call ‘caveat scriptor’ and read this book before sending off their manuscript to any journal.

That said, there is also a case for advising ‘caveat lector’ before would-be authors read the book, as there are other areas where additional context would greatly help in addressing the problems of gaming metrics and academic misconduct. When it comes to predatory journals, there is a good deal of useful information included in several of the later chapters, especially the case studies in Chapters 7 and 15, which detail a suspiciously prolific Czech author and a sting operation, respectively.

Indeed, these cases provide the kind of context whose absence is perhaps the single biggest failing of the book: its narrow academic lens doesn’t quite capture the wider picture of why gaming metrics – and the scholarly communications system as a whole – is ethically wrong, both for those who perpetrate it and, arguably, for the architects of the systems. As with many academic texts that seek to tackle societal problems, the unwillingness to get dirt under the fingernails in the pursuit of understanding what’s really going on simply distances the reader from the problem at hand.

As a result, after reading Gaming the Metrics, one is likely to simply shrug one’s shoulders in apathy about the plight of authors and their institutions, whereas a great deal more impact might have been achieved if the approach had been less academic and included more case studies and insights into the negative impact resulting from predatory publishing practices. After all, the problem with gaming the system is that, for those who suffer, it is anything but a game.

Gaming the Metrics: Misconduct and Manipulation in Academic Research, edited by Mario Biagioli and Alexandra Lippman (published Feb. 21 2020, MIT Press USA) ISBN: 978-0262537933.

The power of four

After hearing so many different ways that its Journal Whitelist and Journal Blacklist have been used by customers, Cabells has started to map out how any researcher can use journal data to optimize their decision-making. Fresh from its debut at the World Congress on Research Integrity in Hong Kong last month, Simon Linacre shares the thinking behind the new ‘Four Factor Framework’ and how it could be used in the future.


The 6th World Congress on Research Integrity (WCRI) was held in Hong Kong last month, bringing together the great and the good of those seeking to improve the research process globally. Topics were surprisingly wide, taking a look at such diverse areas as human rights, predatory publishing, data sharing, and policy. It was significant that while much of the focus of the conference was on the need to improve education and learning on how to conduct research ethically, many of the cases presented showed that there is still much to do in this respect.

Cabells was also there and used its presence to share some ideas on how to overcome some of these challenges, particularly with regard to engagement with improved research and publishing practices. Taking the established issues within predatory publishing encountered the world over as a starting point (i.e. choosing the wrong journal), as well as the need to look at as much data as possible (i.e. choosing the right journal), Cabells has very much put the author at the center of its thinking to develop what it has called the ‘Four Factor Framework’:

[Figure: the Four Factor Framework]

The framework, or FFF, firstly puts the onus on the researcher to rule out any poor, deceptive or predatory journals, using resources such as the Blacklist. This ‘negative’ first step then opens up the next stage, which is to take the following four factors into account before submitting a research paper to a journal:

  • Strategic: understanding how a journal will impact career ambitions or community perspectives
  • Environmental: bringing in wider factors such as research impact or ethical issues
  • Political: weighing key considerations such as publishing in titles on approved journal lists, or avoiding certain lists or journals based in certain regions
  • Cultural: taking into account types of research, peer review or article form
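The two-stage shape of the framework – rule out flagged journals first, then weigh the four factors – can be sketched in code. Note that this sketch, including the scoring scheme and journal names, is invented for illustration; only the factor names come from the framework itself:

```python
# Illustrative sketch of the 'Four Factor Framework' decision process:
# stage 1 rules out flagged (e.g. predatory) journals; stage 2 ranks the
# remainder by four qualitative factors. The 0-5 scoring is hypothetical.

FACTORS = ("strategic", "environmental", "political", "cultural")

def shortlist(candidates, flagged, assessments):
    """Drop flagged journals, then rank the rest by summed factor scores."""
    survivors = [j for j in candidates if j not in flagged]
    return sorted(
        survivors,
        key=lambda j: -sum(assessments[j].get(f, 0) for f in FACTORS),
    )

journals = ["Journal A", "Journal B", "Journal C"]
flagged = {"Journal C"}  # e.g. listed in Predatory Reports
scores = {
    "Journal A": {"strategic": 4, "environmental": 3, "political": 2, "cultural": 3},
    "Journal B": {"strategic": 5, "environmental": 2, "political": 4, "cultural": 4},
}

print(shortlist(journals, flagged, scores))
# Journal B (total 15) ranks ahead of Journal A (total 12)
```

In practice the factors are qualitative judgments rather than numeric scores, but the ordering of the stages is the point: the ‘negative’ screen comes before any positive weighing of options.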

Having talked to many customers over a long period, Cabells has found that these factors all become relevant to authors at some point during that crucial period when they are choosing which journal to publish in. Customers have fed back that their use of Cabells’ Whitelist and Blacklist – as well as other sources of data and guidance – can be characterized as benchmarking, performance-focused or risk management. While it is good to see that the databases can help so many authors in so many different ways, judging by the evidence at WCRI there is still a huge amount to do in educating researchers to take advantage of these optimized approaches. And this will be the main aim of Cabells’ emerging strategy – to enable real impact by researchers and universities through the provision of validated information and support services around scholarly publishing.