As many readers will have noted last week, Cabells’ Predatory Reports database passed 20,000 listed journals for the first time. This is significant not just because that is a LOT of journals – James Butcher, in this week’s Journalology newsletter, asks whether this might be as much as 20% of all published journals – but also because it represents fraud and deception on such a broad scale.
Understanding the different ways in which predatory journals seek to deceive authors is, of course, Cabells’ forte; the company has spent over a decade perfecting the art. But the tactics used by predatory publishers are always changing, and so both the criteria themselves and the way we implement them have to be reviewed and revised regularly.
Behind the scenes
In truth, the number of journals assessed for inclusion had passed that milestone a while ago – it was an update to our internal system, clearing a small backlog of titles, that pushed the published figure past the mark (currently 20,273 and counting). The changes in the criteria Cabells is now using – Version 1.2 – are extensive, although they consist of many minor tweaks rather than major revisions, aimed at greater rigor and precision and at centering the behavior of the journal itself.
In total, there are 32 changes to the 70+ criteria, plus one removal, where a criterion has been subsumed into another check. Some of the major changes include:
- “Information received from the journal does not match the journal’s website” has been changed to specify the source of said information.
  - Now reads: “The journal shares information via email or social media that is inconsistent with the journal’s website.”
- “Evident data that little to no peer review is being done, and the journal claims to be ‘peer reviewed’” has been expanded to be more explicit regarding the lack of peer review.
  - Now reads: “Evidence of automatic acceptance without peer review, the existence of fake peer review, or fabricated peer reports despite the journal claiming to be ‘peer reviewed.’”
- To address the growing impact of generative AI in the publishing world, “Machine-generated or other ‘sting’ abstracts or papers are accepted” has been expanded.
  - Now reads: “The journal publishes machine-, algorithm-, or AI-generated or ‘sting’ abstracts/papers.”
- “Insufficient resources are spent on preventing and eliminating author misconduct that may result in repeated cases of plagiarism, self-plagiarism, image manipulation, etc. (no policies regarding plagiarism, ethics, misconduct, etc., no use of plagiarism screens)” has been rewritten to emphasize the lack of applicable policies, reflecting how the criterion is applied in practice.
  - Now reads: “The journal or publisher does not have policies to prevent and eliminate misconduct such as repeated cases of plagiarism, self-plagiarism, image manipulation, and other ethical infringements.”
- The indicator “The journal uses misleading metrics (i.e., metrics with the words ‘impact factor’ that are not the Clarivate Analytics Impact Factor)” has been expanded to include false claims of said metrics. Previously, in most cases, these were recorded as false claims of indexing.
  - Now reads: “The journal posts misleading metrics (i.e., metrics with the words ‘impact factor’ that are not the Clarivate Impact Factor), or falsely claims established industry metrics such as Clarivate’s Impact Factor or Elsevier’s CiteScore.”
Why change now?
The final example is perhaps illustrative of the changes that have been made, as Cabells has sought not only to tighten up the language and criteria for journal inclusion, but also to focus on the typical behaviors exhibited by predatory journals. Making false claims about journal metrics is an all-too-common trait of predatory journals, and one that is easily verified by checking the relevant indices typically used in the deception.
As such, while working on updates to the Predatory Reports criteria, the internal team of experts at Cabells that manages the database wanted to ensure the criteria were written more precisely, better reflecting the nuances of how they have been trained to apply them. They also wanted to incorporate some of the researcher feedback the criteria have received, such as this article co-authored by one of the world’s leading experts on predatory journals, Jaime Teixeira da Silva.
The criteria may well need to be updated again in the near future, particularly as AI is changing so many things so quickly in scholarly communications. Sadly, what isn’t changing is the continuing proliferation of predatory publishing practices.
