Today’s post was written by Simon Linacre, Director of International Marketing and Development at Cabells, and Irfan Syed, Senior Writer and Editor at Editage Insights.
How do you identify a predatory journal? Easy, say seasoned researchers: check your spam folder.
Actually, this ‘initial indicator’ is often the key to identifying a predatory journal. Predatory publishers send researchers frequent emails soliciting manuscripts and promising acceptance – messages that, thanks to email providers’ spam filters, usually go straight to junk mail. Some cleverly disguised ones do make it to the inbox, though, and sometimes an unwary researcher clicks on one of them, unleashing the predator and an all-too-familiar sequence of events: Researcher sends manuscript. Receives quick acceptance, often without peer review. Signs away copyright. Receives a staggeringly large invoice. Is unable to pay. Asks to withdraw. Receives an equally heavy withdrawal invoice – and threats. The cycle continues, the publisher growing incrementally coercive, the researcher increasingly frustrated.
What makes a predator
The term predatory journal was coined in 2010 by Jeffrey Beall, former Scholarly Initiatives Librarian at the University of Colorado Denver, when he launched his eponymous list (now archived) of fake scientific journals with the aim of educating the scientific community. The term was meant to mirror the guile of carnivores in the wild: targeting the weak, launching a surprise ambush, and effecting a merciless finish.
A more academic definition might be: “Entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices.” In other words, journals that put commerce before science.
Dubious scientific journals have existed since the 1980s, born to offer an easy detour around the arduous road to acceptance laid by top-tier journals. More recently, they have received a boost from the rise of the open access (OA) movement, which seeks to shift the balance of power towards the researcher. With revenues now accruing from the author side, however, new researchers pressured by a ‘publish or perish’ culture have proved easy targets for predatory publishers exploiting the OA publishing model.
The new face of predation
Today, academia faces another threat, a new predator in scientific communications: predatory author services. The dangers they pose can be just as acute as those of predatory journals. Authors who pay for such services risk misusing any funding they have received by channelling it into potentially criminal activity. These operations may also be ill-equipped to edit an author’s paper competently – incorrect edits, changes to the author’s intended meaning, and unspotted errors can all damage a manuscript. Many authors choose such services to improve their articles and boost their chances of acceptance in high-quality journals, but given the quality of service they actually receive, they are very likely to be disappointed.
So, predatory author services are just as problematic as predatory journals. Despite the efforts of industry bodies such as COPE, new players keep entering the market, even advertising on social media platforms such as Facebook. More worryingly, some of these predatory services display a veneer of sophistication hitherto unseen, including well-designed websites, live online chat features, and direct calling.
Spotting a predatory author service
The good news is that these services bear many of the traits of predatory journals, and can be identified with a little background research. Here are some tips on how to separate predatory author services from professional operations such as Cactus’ Editage:
Check the English: For a legitimate journal, spelling or grammar errors on its site or in its published articles would be a heinous crime – and this goes double for an author services provider. So, beyond the slick graphics and smiling model faces, check whether everything is as it should be by scrutinizing the English
Click the links: Dead links, links that loop back to the homepage, or links that don’t match the text should further raise your suspicion
Research the partnerships: If a provider genuinely works with Web of Science, Scopus, and The Lancet, there should be evidence of that rather than mere logos copied and pasted onto the homepage. Search online for these publicized partnerships to confirm they are genuine
Look up the provenance: Many predatory operators leave no address at all. Some, though, will include a fake one (which turns out to be a long-abandoned dry-cleaning store on a deserted high street, or a legitimate address that is also home to 1,847 other registered companies). A quick search on Google Maps will show whether the address actually checks out
Run if you spot a ghost: The surest giveaway of a predatory author service is the offering of ghostwriting as a service. Ghost authorship, the act of someone else authoring your entire manuscript, is a violation of research integrity. And when even ghostwriting doesn’t suffice, these services are happy to plagiarize another author’s work and pass it off as the client’s own
Ask your peers: Before deciding to use a service, double-check any testimonials on the provider’s homepage or ask around in your peer network.
Taking on the predator – collectively and individually
The scientific predator will no doubt continue to evolve, getting more sophisticated with time. Ultimately, all anyone can do to eradicate predatory author services or journals is to increase awareness among authors and provide resources to help them identify such predators. Cabells, Cactus, and many other industry players continually work to provide this guidance, but a good deal of the burden of responsibility has to be shared by academic researchers themselves. As the Romans might have said, caveat scriptor – author beware!
For any author-services ad or email you come across, look it up. Search online, ask your fellow researchers, pose a query in a researcher forum, and consult recommended indices of quality and predatory publications, such as those of Cabells. If the service is genuine, it will show up across several searches – and you will live to publish another day.
While Cabells spends much of its time assessing journals for inclusion in its Verified or Predatory lists, probably the greater number of titles reside outside the parameters of those two containers. In his latest blog, Simon Linacre opens up a discussion on what might be termed ‘gray journals’ and what their profiles could look like.
The concept of ‘gray literature’, describing the variety of information produced outside traditional publishing channels, has been around since at least the 1970s, and has been defined as “information produced on all levels of government, academia, business and industry in electronic and print formats not controlled by commercial publishing (i.e., where publishing is not the primary activity of the producing body)”* (1997; 2004). The definition plays an important role in both characterizing and categorizing information outside the usual forms of academic content, and is in a way the opposite of the chaos and murkiness the term ‘gray’ perhaps suggests.
The same could not be said, however, were we to apply that term to the journals that inhabit the world outside the two main databases Cabells curates. Its Journal Whitelist indexes over 11,000 journals that satisfy its criteria for assessing whether a journal is a reputable outlet for publication; as such, it is a list of journals any academic can entrust their research to. The Journal Blacklist, by contrast, is a list of over 13,000 journals that NO ONE should recommend publishing in, given that they have ‘met’ several of Cabells’ criteria.
So, beyond these two cohorts of journals, what’s left? This has always been an intriguing question, and one alluded to most intelligently in a recent piece by Kyle Siler for the LSE Impact Blog. There is no accurate count of how many journals exist; like grains of sand, they are created and disappear before they can all be counted. Scopus currently indexes well over 30,000 journals, so a conservative estimate might put the number of currently active journals above 50,000, with 10,000 or more titles not indexed in any recognized database. Drawing on Cabells’ experience of assessing journals for both Whitelist and Blacklist inclusion, here are some profiles that might help researchers spot which option might be best for them:
The Not-for-Academics Academic Journal: Practitioner journals often fall foul of indexers because they are not designed to be used and cited in the same way as academic journals, despite looking like them. As a result, journals with genuinely useful, good-quality content are often overlooked for their lack of citations or non-academic style
The So-Bad-it’s-Bad Journal: Just awful in every way – poor editing, poor language, uninteresting research and research replicated from elsewhere. However, it is honest and peer reviewed, so provides a legitimate outlet of sorts
The Niche-of-a-Niche Journal: Probably focusing on a scientific area you have never heard of, this journal drills down into a subject area and keeps on drilling so that only a handful of people in the world have the foggiest what it’s about. But if you are one of the lucky ones, it’s awesome. Just don’t expect citation awards any time soon
The Up-and-Coming Journal: Many indexers prefer to wait a year or two before including a journal in their databases, so that citations and other metrics can be used to assess quality and consistency of publication. In the early years quality can vary widely, but reading the output published so far can at least inform the publishing decision
The Worthy Amateur Journal: Often based in a non-research institution or little-known association, these journals have the right idea but publish haphazardly, have small editorial boards and little financial support, producing unattractive-looking journals that may nevertheless hide some worthy articles.
Of course, when you arrive at the publication decision and happen upon a candidate journal that is not indexed, as we said last week simply ‘research your research’: check against the Blacklist and its criteria to detect any predatory characteristics, research the Editor and the journal’s advisory board for their publishing records and seek out the opinion of others before sending your precious article off into the gray ether.
*Third International Conference on Grey Literature, 1997 (ICGL Luxembourg definition, 1997; expanded in New York, 2004)
If you haven’t already completed our survey, there is still time to provide your feedback. Cabells is undertaking a review of the current branding for ‘The Journal Whitelist’ and ‘The Journal Blacklist’. As part of this process, we’d like to gather feedback from the research community to understand how you view these products, and which of the proposed brand names you prefer.
Our short survey should take no more than ten minutes to complete, and can be taken here.
As thanks for your time, you’ll have the option to enter into a draw to win one of four Amazon gift vouchers worth $25 (or your local equivalent). More information is available in the survey.
Many thanks in advance for your valuable feedback!
Predatory publishing generally refers to the systematic, for-profit publication of purportedly scholarly content (in journal articles, monographs, books, or conference proceedings) in a deceptive or fraudulent way and without any regard for quality assurance. Here, ‘for-profit’ refers to profit generation per se… Predatory publishers may cheat authors (and their funders and institutions) by charging publishing-related fees without providing the expected or industry-standard services.
The most professional and exacting list of such journals, the Journal Blacklist, was launched by Cabells in 2017; it uses a large number of criteria, rather than a single definition, to identify predatory or illegitimate journals. Recently, a coalition of publishers, scholars and funders provided the following definition, published in the journal Nature: “Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices.”
In other words, illegitimately obtained profit is one of the signs of predation. There have even been prosecutions of predators, such as the US Federal Trade Commission’s case against the OMICS Group, which found: “These publishing companies lied about their academic journals and took millions of dollars from aspiring researchers and writers.”
The illustration in the Nature article depicts a wolf (i.e. a predator) in sheep’s clothing, rendered as an academic journal. But is a researcher always the obedient prey of a predator? Is he or she always a sheep? The vast majority of authors likely fall victim to predatory outfits because of their own incompetence or lack of discrimination. But not all authors are sheep.
There is a group of authors who, from time to time, consciously manipulate data or submit dubious results. Grimes, Bauch, and Ioannidis call them unethical authors.
Among these unethical authors are ‘parasite authors’ who deliberately seek symbiosis with predatory journals.
A parasite author, then, is one who chooses a journal in the clear understanding that it deliberately forgoes best editorial and publication practices and does not perform its declared review procedure, yet, for a fee, is guaranteed to quickly legitimize a text of dubious scientific content by publishing it. Predatory journals and parasite authors co-exist and co-operate by tacit agreement: the journal enjoys its desired profit, and the author gets the article he or she needs for career progression (according to Tove Faber Frandsen, the main motive of unethical authors) or other rewards.
Predatory journals indexed by Web of Science Core Collection or Scopus are especially attractive to parasite authors.
A number of questions arise when the articles published in these three journals are analyzed. Firstly, it is interesting to note how the number of articles in the three journals has changed since their indexing in Scopus (Figure 1). The publication of articles in Scopus-indexed journals is often a prerequisite for obtaining an academic degree, academic rank, or research contracts in many countries.
Secondly, perhaps not all authors of these 21,926 articles were victims (Figure 2). Can we call Author A, who published 80 articles in two BEIESP journals within a year, a victim? What could have caused such hypertrophied publishing activity? Perhaps there are worthwhile incentives for it?
Indeed, such incentives exist: in 2018, Vietnam’s Ministry of Education and Training paid USD 259,000 to 1,718 authors of scientific articles published in international journals, and the University of Economics in Ho Chi Minh City rewards authors up to USD 8,650 for any article published.
It would be interesting to know whether the 81 Vietnamese authors who published their articles in IJEAT in 2019 were among those rewarded.
Thirdly, in these journals, most of the articles were published by authors from India, Malaysia, Indonesia, etc. (Figure 3). Authors from several universities have shown an abnormally high commitment to these journals (Figure 4). Researchers from K L Deemed to be University (India) have published nearly 1,000 articles in three OMICS journals in 2019 alone, and those from Bharath Institute of Higher Education and Research (India) published more than 800 articles. It is difficult to assume that this remained unnoticed by the universities themselves. And was the lack of response from the university management acceptable?
Finally, the success of authors and journals can depend largely on the article citation. When it comes to parasite authors and predatory journals, they can “collaborate fruitfully” even with one publisher.
Such actions lead to abnormal results. For example, Article A, published in IJEAT in 2019, managed to garner 201 citations from “partner journals” (Figure 5a); Article B received 193 citations (Figure 5b); and Article C obtained 85 citations (Figure 5c).
Now, imagine an army of researchers from different countries who have submitted their papers to such journals. They were not confused by either the review process or the payment system or anything else. And, as Grimes, Bauch, and Ioannidis rightly point out, “The authors may use lack of awareness to excuse their actions, but indeed, they search for a low‐barrier way to getting published…”
Therefore, it is critical to find effective mechanisms that will force scientists to accept and apply best publishing practices and ethical principles of scientific publications, and create an environment in which the symbiosis of predatory journals and unethical authors will be impossible.
Editor’s Note: This post is by Rick Anderson, Associate Dean for Collections & Scholarly Communication in the J. Willard Marriott Library at the University of Utah. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro and as Director of Resource Acquisition at the University of Nevada, Reno. Rick serves on numerous editorial and advisory boards and is a regular contributor to the Scholarly Kitchen. He has served as president of the North American Serials Interest Group (NASIG), and is a recipient of the HARRASSOWITZ Leadership in Library Acquisitions Award. In 2015 he was elected President of the Society for Scholarly Publishing. He serves as an unpaid advisor on the library boards of numerous publishers and organizations including bioRxiv, Elsevier, JSTOR, and Oxford University Press.
This morning I had an experience that is now familiar, and in fact a several-times-daily occurrence—not only for me, but for virtually every one of my professional colleagues: I was invited to submit an article to a predatory journal.
How do I know it was a predatory journal? Well, there were a few indicators, some strong and some merely suggestive. For one thing, the solicitation addressed me as “Dr. Rick Anderson,” a relatively weak indicator given that I’m referred to that way on a regular basis by people who assume that anyone with the title “Associate Dean” must have a doctoral degree.
However, there were other elements of this solicitation that indicated much more strongly that this journal cares not at all about the qualifications of its authors or the quality of its content. The strongest of these was the opening sentence of the message:
This gave me some pause, since I have no expertise whatsoever “on Heart,” and have never published anything on any topic even tangentially related to medicine. Obviously, no legitimate journal would consider me a viable target for a solicitation like this.
Another giveaway: the address given for this journal is 1805 N Carson St., Suite S, Carson City, NV. As luck would have it, I lived in northern Nevada for seven years and am quite familiar with Carson City. The northern end of Carson Street—a rather gritty stretch of discount stores, coffee shops, and motels with names designed to signal affordability—didn’t strike me as an obvious location for any kind of multi-suite office building, let alone a scientific publishing office, but I checked on Google Maps just to see. I found that 1805 North Carson Street is a non-existent address; 1803 North Carson Street is occupied by the A to Zen Thrift Shop, and Carson Coffee is at 1825. There is no building between them.
Having thus had my suspicion stoked, I decided to give this journal a real test. I created a nonsense paper consisting of paragraphs taken at random from articles originally published in a legitimate journal of cardiothoracic medicine, and gave it a title consisting of syntactically coherent but otherwise randomly-chosen terms taken from the discipline. I invented several fictional coauthors, created an email account under the assumed name of the lead author, submitted the manuscript via the journal’s online system and settled down to wait for a decision (which was promised within “14 days,” following the journal’s usual “double blind peer review process”).
While we wait for word from this journal’s presumably distinguished team of expert peer reviewers, let’s talk a little bit about the elephant in the room: the fact that the journal we’re testing purports to publish peer-reviewed research on the topic of heart surgery.
The problem of deceptive or “predatory” publishing is not new; it has been discussed and debated at length, and it might seem as if there’s not much new to be said about it: as just about everyone in the world of scholarly publishing now knows, a large and apparently growing number of scam artists have created thousands upon thousands of journals that purport to publish rigorously peer-reviewed science, but will, in fact, publish whatever is submitted (good or bad) as long as it’s accompanied by an article processing charge. Some of these outfits go to great expense to appear legitimate and realize significant revenues from their efforts; OMICS (which was subject to a $50 million judgment after being sued by the Federal Trade Commission for deceptive practices) is probably the biggest and most famous of predatory publishing outfits. But most of these outfits are relatively small; many seem to be minimally staffed fly-by-night operations that have invested in little more than the creation of a website and an online payment system. The fact that so many of these “journals” exist and publish so many articles is a testament to either the startling credulity or the distressing dishonesty of scholars and scientists the world over—or, perhaps, both.
But while the issue of predatory publishing, and its troubling implications for the integrity of science and scholarship, is discussed regularly in broad terms within the scholarly-communication community, I want to focus here on one especially concerning aspect of the phenomenon: predatory journals that falsely claim to publish rigorously peer-reviewed science in fields that have a direct bearing on human health and safety.
In order to try to get a general idea of the scope of this issue, I did some searching within Cabell’s Journal Blacklist to see how many journals from such disciplines are listed in that database. My findings were troubling. For example, consider the number of predatory journals found in Cabell’s Blacklist that publish in the following disciplines (based on searches conducted on 25 and 26 November 2019):
[Table: number of Journal Blacklist titles found per discipline]
Obviously, it’s concerning when scholarship or science of any kind is falsely represented as having been rigorously reviewed, vetted, and edited. But it’s equally obvious that not all scholarship or science has the same impact on human health and safety. A fraudulent study in the field of sociology certainly has the capacity to do significant damage—but perhaps not the same kind or amount of damage as a fraudulent study in the field of pediatric anesthesiology, or diagnostic oncology. The fact that Cabell’s Blacklist has identified nearly 4,000 predatory journals in the general field of medicine is certainly cause for very serious concern.
At the risk of offending my hosts, I’ll just add here that this fact leads me to really, really wish that Cabell’s Blacklist were available to the general public at no charge. Recognizing, of course, that a product like this can’t realistically be maintained at zero cost – or anything close to it – this raises an important question: what would it take to make this resource available to all?
I can think of one possible solution. Two very large private funding agencies, the Bill & Melinda Gates Foundation and the Wellcome Trust, have demonstrated their willingness to put their money where their mouths are when it comes to supporting open access to science; both organizations require funded authors to make the published results of their research freely available to all, and allow them to use grant funds to pay the attendant article-processing charges. For a tiny, tiny fraction of their annual spend on research and on open-access article processing charges, either one of these grantmakers could underwrite the cost of making Cabell’s Blacklist freely available. How tiny? I don’t know what Cabell’s costs are, but let’s say, for the sake of argument, that it costs $10 million per year to maintain the Blacklist product, with a modest amount of profit built in. That would represent two tenths of a percent of the Gates Foundation’s annual grantmaking, or 2.3 tenths of a percent of Wellcome’s.
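Those back-of-envelope shares are easy to sanity-check. The sketch below is purely illustrative: the $10 million annual cost is the post's own hypothetical, and the grantmaking totals (roughly $5.0B for Gates, $4.35B for Wellcome) are assumptions implied by the quoted percentages, not audited figures.

```python
# Sanity check of the back-of-envelope shares quoted above.
# All figures are assumptions: a hypothetical $10M/year cost for the
# Blacklist, and annual grantmaking totals implied by the post's own
# percentages rather than audited numbers.
blacklist_cost = 10e6          # hypothetical annual cost, USD

gates_grantmaking = 5.0e9      # assumed Gates Foundation annual grantmaking
wellcome_grantmaking = 4.35e9  # assumed Wellcome annual grantmaking

gates_share = blacklist_cost / gates_grantmaking * 100
wellcome_share = blacklist_cost / wellcome_grantmaking * 100

print(f"Gates:    {gates_share:.2f}% of annual grantmaking")
print(f"Wellcome: {wellcome_share:.2f}% of annual grantmaking")
```

Either way, the cost works out to roughly two tenths of one percent of annual grantmaking, matching the figures above.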
This, of course, is money they would then not be able to spend directly subsidizing research. But since both funders already commit a much, much larger share of their annual grantmaking to APCs, this seems like a redirection of funds that would yield tremendous value per dollar.
Of course, underwriting a service like Cabell’s Blacklist would entail acknowledging that predatory publishing is real, and a problem. Oddly enough, this is not universally acknowledged, even among those who (one might think) ought to be most concerned about the integrity of the scholcomm ecosystem and about the reputation of open access publishing. Unfortunately, among many members of that ecosystem, APC-funded OA publishing is largely—and unfairly—conflated with predatory publishing.
Well, it took much longer than promised (or expected), but after two months of occasional messages telling me that my paper was in the “final peer review process,” I finally received the long-awaited response in late January: “our” paper had been accepted for publication!
Journal Blacklist entry for Journal of Cardiothoracic Surgery and Therapeutics
Over the course of several subsequent weeks I received a galley proof for my review—along with an invoice for an article-processing charge in the amount of $1,100. In my guise as lead author, I expressed shock and surprise at this charge; no one had said anything to me about an APC when my work was solicited for publication. I received a conciliatory note from the editor, explaining that the lack of notice was due to a staff error, and further explaining that the Journal of Cardiothoracic Surgery and Therapeutics is an open-access journal and uses APCs to offset its considerable costs. He said that by paying this fee and allowing publication to go forward I would be ensuring that the article “will be available freely which allows the scientific community to view, download, distribution of an article in any medium (provided that the original work is properly cited) thereby increasing the views of article.” He also promised that our article will be indexed “in Crossref and many other scientific databases.” I responded that I understood the model but had no funds available to pay the fee, and would therefore have to withdraw the paper. “You may consider our submission withdrawn,” I concluded.
Then something interesting happened. My final communication bounced back. I was informed by a system-generated message that my email had been “waitlisted” by a service called Boxbe, and that I would have to add myself to the addressee’s “guest list” in order for it to be delivered. Apparently, the editor no longer wanted to hear from me.
Also interesting: despite my nonpayment of the APC, the article has now been published and can be seen here. It will be interesting to see how long it remains in the journal.
We need to be very clear about one thing here: the problem with my article is not that it represents low-quality science. The problem with my article is that it is utterly incoherent nonsense. Not only is its content entirely plagiarized, it is so randomly assembled from such disparate sources that it could not possibly be mistaken for an actual study by any informed reader who took the time to read any two of its paragraphs. Furthermore, it was “written” by authors who do not exist, whose names were taken from famous figures in history and literature, and whose institutional affiliations are entirely fictional. (There is no “Brockton State University,” nor is there a “Massapequa University,” nor is there an organization called the “National Clinics of Health.”)
What all of this means is that the fundamental failing of this journal—as it is of all predatory journals—is not its low standards, or the laxness of its peer review and editing. Its fundamental failing is that despite its claims, and despite charging authors for these services, it has no standards at all, performs no peer review, and does no editing. If it did have any standards whatsoever, and if it performed even the most perfunctory peer review and editorial oversight, it would have detected the radical incoherence of my paper immediately.
One might reasonably ask, though: if my paper is such transparently incoherent nonsense, why does its publication pose any danger? No surgeon in the real world will be led by this paper to do anything in an actual surgical situation, so surely there’s no risk of it affecting a patient’s actual treatment in the real world.
This is true of my paper, no doubt. But what the acceptance and publication of my paper demonstrates is not only that the Journal of Cardiothoracic Surgery and Therapeutics will publish transparent nonsense, but also—more importantly and disturbingly—that it will publish anything. Dangerously, this includes papers that may not consist of actual nonsense, but that were flawed enough to be rejected by legitimate journals, or that were written by the employees of device makers or drug companies that have manipulated their data so as to promote their own products, or that were written by dishonest surgeons who have generally legitimate credentials but are pushing crackpot techniques or therapies. The danger illustrated by my paper is not so much that predatory journals will publish literal nonsense; the more serious danger is that they will uncritically publish seriously flawed science while presenting it as carefully-vetted science.
In other words, the defining characteristic of a predatory journal is not that it is a “low-quality” journal. Its defining characteristic is that, while claiming to provide quality control, it provides none of any kind – precisely because doing so would restrict its revenue flow. This isn’t to say that no legitimate science ever gets published in predatory journals; I’m sure quite a bit does, since a predatory journal has no more reason to reject it than to reject the kind of utter garbage this particular journal has now published under the purported authorship of Jackson X. Pollock. But the appearance of some legitimate science does nothing to resolve the fundamental issue here, which is one of scholarly and scientific fraud.
Such fraud is distressing wherever it occurs. In the context of cardiothoracic surgery—along with all of the other health-related disciplines in which predatory journals currently publish—it’s terrifying.
This week the Cabells Journal Blacklist hit 13,000 titles. While the number itself is not that significant, its continued rate of growth shows that the problem of predatory publishing is not abating. In his latest post, Simon Linacre shares a case study of what a predatory journal looks like and why their continued growth should concern us all.
Firstly, a warning: this post will share a link to a journal that Cabells has identified as predatory in nature, and as such, you should take precautions before giving it a click. This is because there is evidence that some predatory journal websites, whether by accident or design, contain malware that can infect your computer and its networked systems. So, if you do click on it, please don’t enter any information, and be aware that the site itself may put your hardware at risk.
Welcome to the dark world of predatory publishing.
Despite the risks, it is useful to look at a specific predatory journal to gain some insight into how these journals operate and what they contain. The example we are using is the International Journal of Science Technology & Management, which appears to be based in India, has been publishing several issues annually since 2012, and includes hundreds of articles freely accessible as PDFs. This particular journal has one of the highest numbers of breaches of our Blacklist criteria, some of which are included below to help explain why the journal is predatory:
Editors do not actually exist or are deceased. The journal does not name an Editor or Editors but has a huge list of names and affiliations, many of whom do not exist or are listed without their knowledge.
The journal’s website does not have a clearly stated peer review policy. The journal states it is “refereed”, but there is no evidence this occurs.
Falsely claims indexing in well-known databases (especially SCOPUS, DOAJ, JCR, and Cabells). This is a key indicator of predatory journals, and can be easily checked – this particular journal claims it is indexed by Cabells (this is categorically untrue), listed by DOAJ (also false) and has an Impact Factor of 2.012 (most definitely incorrect).
The website does not identify a physical address for the publisher, or gives a fake address. Sometimes the address given is shared with 8,459 other businesses – remarkable, given that it turns out to be a small terraced house in suburban England. In this example, an address can be found after some searching, but it is spelled incorrectly, and the location in India is also home to dozens of other journals and conferences the publisher operates, but no offices.
The publisher or journal’s website seems too focused on the payment of fees. Many predatory publishers charge the going rate of $1,000 or more to publish, but this journal ‘only’ charges $60 (plus $20 if you require a certificate). This may seem a bargain to some, but even at this low price authors are being ripped off.
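The red flags listed above can be imagined as a simple automated screen. The sketch below is purely illustrative – the field names, checks, and thresholds are assumptions for demonstration, not Cabells’ actual 70+ criteria or its evaluation method:

```python
# Illustrative red-flag screen for a journal record. The fields and
# thresholds here are hypothetical simplifications, not Cabells'
# real Blacklist criteria or methodology.

RED_FLAG_CHECKS = {
    "no named editor": lambda j: not j.get("editors"),
    "no stated peer-review policy": lambda j: not j.get("peer_review_policy"),
    "unverified indexing claims": lambda j: bool(
        set(j.get("claimed_indexes", [])) - set(j.get("verified_indexes", []))
    ),
    "no verifiable publisher address": lambda j: not j.get("verified_address"),
    "fee-focused website": lambda j: j.get("fee_mentions", 0) > 5,
}

def screen_journal(journal: dict) -> list[str]:
    """Return the list of red flags a journal record trips."""
    return [name for name, check in RED_FLAG_CHECKS.items() if check(journal)]

# A record resembling the journal described in the post:
suspect = {
    "editors": [],
    "peer_review_policy": "",
    "claimed_indexes": ["SCOPUS", "DOAJ", "Cabells"],
    "verified_indexes": [],
    "verified_address": None,
    "fee_mentions": 12,
}
print(screen_journal(suspect))  # trips all five checks
```

The design point is that each criterion is an independent yes/no check, so breaches accumulate – which is why a single journal can rack up one of the highest breach counts in the database.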
There are many other problems with the journal, not least that the quality of the articles published in it would embarrass any high school student, let alone an academic. However, while the desire to publish in such journals – and the ease of doing so – persists, Cabells will have to increase its Journal Blacklist by many more thousands to keep pace with demand.
A recent paper published in Nature has provided a tool for researchers to use to check the publication integrity of a given article. Simon Linacre looks at this welcome support for researchers, and how it raises questions about the research/publication divide.
Earlier this month, Nature published a well-received comment piece by an international group of authors entitled ‘Check for publication integrity before misconduct’ (Grey et al, 2020). The authors wanted to create a tool to enable researchers to spot potential problems with articles before they got too invested in the research, citing a number of recent examples of misconduct. The tool they came up with is a checklist called REAPPRAISED, which uses each letter to identify an area – such as plagiarism or statistics and data – that researchers should check as part of their workflow.
As a general rule for researchers, and as a handy mnemonic, the tool seems to work well, and authors using it as part of their research should avoid the potential pitfalls of building on poorly researched and published work. We at Cabells would perhaps argue that an extra ‘P’ should be added for ‘Predatory’, covering the checks researchers should make to ensure that the journals they are using and intend to publish in are legitimate. To do this comprehensively, we would recommend using our criteria for the Cabells Journal Blacklist as a guide and, of course, using the database itself where possible.
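Worked into a research workflow, a checklist like this is just a running record of areas appraised and flags raised. The sketch below is a minimal illustration of that idea; only ‘plagiarism’ and ‘statistics and data’ are areas named in the piece, and the extra ‘Predatory’ entry is the addition suggested above – it is not the official letter-by-letter REAPPRAISED list:

```python
# A minimal pre-reading appraisal record in the spirit of the
# REAPPRAISED checklist. The area names are illustrative, not the
# tool's official items.
from dataclasses import dataclass, field

@dataclass
class ArticleAppraisal:
    results: dict[str, bool] = field(default_factory=dict)

    def record(self, area: str, passed: bool) -> None:
        """Mark one checklist area as passed or failed."""
        self.results[area] = passed

    def concerns(self) -> list[str]:
        """Areas that failed the check, in the order recorded."""
        return [area for area, ok in self.results.items() if not ok]

appraisal = ArticleAppraisal()
appraisal.record("plagiarism", True)
appraisal.record("statistics and data", False)
appraisal.record("predatory journal check", False)  # the suggested extra 'P'
print(appraisal.concerns())  # ['statistics and data', 'predatory journal check']
```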
The guidelines also raise a fundamental question for researchers and publishers alike as to where research ends and publishing starts. For many involved in academia and scholarly communications, the two worlds are inextricably linked and overlap, but are nevertheless different. Faculty members of universities do their research thing and write articles to submit to journals; publishers manage the submission process and publish the best articles for other academics to read and in turn use in their future research.
Journal editors sit at the nexus of these two areas, as they tend to be academics themselves while working for the publisher, and as such have a foot in both camps. But while they are knowledgeable about the research that has been done, and may be active researchers themselves, the editor’s role is performed on behalf of the publisher, and it is the editor who ultimately decides which articles are good enough to be recorded in the publication – the proverbial gatekeeper.
What the REAPPRAISED tool suggests, however, is that for authors the notional research/publishing divide is not a two-stage process, but rather a continuum. Only if authors embark on research intent on fully apprising themselves of all aspects of publication integrity can they guarantee the integrity of their own research, which in turn includes how and where that research is published. Authors can better ensure the quality of their research AND their publications by treating all publishing processes as part of their own research workflow. By doing this, and by using tools such as REAPPRAISED and the Cabells Journal Blacklist along the way, authors can take greater control of their academic careers.
In the penultimate post of 2019, Simon Linacre looks at the recent publication of a new definition of predatory publishing and asks whether such a definition is fit for purpose for those who really need it – authors.
In this season of glad tidings and good cheer, it is worth reflecting that not everyone who approaches academic researchers bearing gifts is necessarily Father Christmas. Indeed, the seasonal messages popping into their inboxes at this time of year may offer opportunities to publish that seem too good to miss, but in reality, they could easily be a nightmare before Christmas.
Predatory publishers are the very opposite of Santa Claus. They will come into your house, eat your mince pies, but rather than leave you presents they will steal your most precious possession – your intellectual property. Publishing an article in a predatory journal could ruin an academic’s career, and it is very hard to undo once it has been done. Interestingly, one of the most popular case studies this year on COPE’s website is on what to do if you are unable to retract an article from a predatory journal in order to publish it in a legitimate one.
Cabells has added over two thousand journals to its Journal Blacklist in 2019 and will reach 13,000 in total in the New Year. Identifying a predatory journal can be tricky, which is why they are often so successful in duping authors; yet defining exactly what a predatory journal is can be fraught with difficulty. In addition, some commentators do not like the term itself – ‘predatory’ is hard to define from an academic perspective, while others think it is too narrow. ‘Deceptive publishing’ has been put forward, but this, in turn, could be seen as too broad.
Cabells uses over 70 criteria to identify titles for inclusion in its Journal Blacklist and widens the net to encompass deceptive, fraudulent and/or predatory journals. Defining what characterizes these journals in just a sentence or two is hard, but this is what a group of academics has done following a meeting in Ottawa, Canada earlier in 2019 on the topic of predatory publishing. The output of this meeting was the following definition:
“Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices.” (Grudniewicz et al, 2019)
The definition is presented as part of a comment piece published in Nature last week and came from a consensus reached at the Ottawa meeting. It is a pity that Cabells was not invited to the event and given the opportunity to contribute. As it is, the definition and accompanying explanation have been met with puzzlement in the Twittersphere, with a number of eminent Open Access advocates saying it allows almost any publisher to be described as predatory. For it to be relevant, it will need to be adopted and used by researchers globally as a test for any journal they are thinking of submitting to. Only time will tell if this will be the case.
From all of us at Cabells, we wish everyone a joyous holiday season and a healthy New Year. Our next blog will be published on January 15, 2020.
How do you know if a journal is a good or a bad one? It is a simple enough question, but there is a lack of clear information out there for researchers, and there are often scams that lay traps for the unwary. In his latest post, Simon Linacre presents some new videos from Cabells that explain what it does to ensure authors can keep fully informed.
On a chilly spring day in Edinburgh, a colleague and I were asked to do what nobody really wants to do if they can help it: ‘act natural’. It is one of life’s great ironies that it is so difficult to act naturally when told to do so. However, it was for a good cause, as we had been asked to explain through a short film what it is that Cabells does and why we think it is important.
Until quite recently, video as a medium was largely ignored by scholarly publishers. Video has, of course, been around for decades, and it has been possible to embed it on websites next to articles for a number of years. However, embedding video into PDFs has been tricky, and as every publisher will tell you when asked about user needs, academics ‘just want the PDF’. As a result, there has been little innovation in scholarly communication, despite some brave attempts such as video journals, video abstracts and other accompaniments to the humble article.
Video has been growing as a means of search, particularly among younger academics, and it can be much more powerful when it comes to engagement and social media. Setting aside the debate about what constitutes impact and whether altmetrics and hits via social media really mean anything, video can be ‘sticky’ in the sense that people spend longer watching it than they do skimming words on a web page. As such, the feeling is that video is a medium whose time in scholarly communications may be yet to come.
So, in that spirit, Cabells has shot a short video with some key excerpts that take people through the Journal Whitelist and Journal Blacklist. It is hoped that it answers some questions people may have, and spurs others to get in touch with us. The film is the first step in Cabells’ development of a number of resources, across different platforms, that will help researchers drink in knowledge of journals to optimize their decision-making. In a future of Open Access, new publishing platforms, and multiple publishing choices, the power to publish will increasingly be in the hands of the author, with the scholarly publishing industry increasingly seeking ways to satisfy their needs. Knowledge about publishing is the key to unlocking that power.
In his latest post, Simon Linacre argues that in order for authors to make optimal decisions – and not to get drawn into predatory publishing nightmares – research and publishing efforts should overlap substantially.
In a recent online discussion on predatory publishing, there was some debate as to the motivations of authors to choose predatory journals. A recent study in the ALPSP journal Learned Publishing found that academics publishing in such journals usually fell into one of two camps: either they were “uninformed” that the journal they had chosen was predatory in nature, or they were “unethical” in knowingly choosing such a journal in order to satisfy some publication goals.
However, a third category of researcher was suggested: the ‘unfussy’ author, who neither cares nor knows what sort of journal they are publishing in. Certainly, there may be some overlap with the other two categories, but what all three have in common is bad decision-making. Whether one does not know, does not care, or does not mind which journal one publishes in, it seems to me that one should know, care, and mind on all three counts.
It was at this point that one of the group posed one of the best questions I have seen in many years in scholarly communications: when it comes to article publication, where does the science end in scientific research? Due in part to the terminology, as well as the differing processes, the concepts of research and publication are regarded as somehow distinct or separate – part of the same ecosystem, for sure, but requiring different skills, knowledge and approaches. The question is a good one because it challenges this duality. Isn’t it possible for science to encompass some of the publishing process itself? And shouldn’t the publishing process become more involved in the process of research?
The latter is already happening to a degree, with major publishers moving up the supply chain to become more involved in research services provision (e.g. Wiley’s acquisition of the article platform services provider Atypon). On the other side, there is surely an argument that after the experiments and data collection, the logical analysis of data and the writing up of conclusions, there is a place for the scientific process to be followed in choosing a legitimate outlet with appropriate peer review. Surely any university or funder would expect such a scientific approach at every level from their employees or beneficiaries. A failure to take it allows in not only sub-optimal choices of journal but, worse, predatory outlets that will ultimately delegitimize scientific research as a whole.
I get that it may not be a huge scandal if some ho-hum research is published in a ‘crappy’ journal so that an academic can tick some boxes at their university. However, while the outcome may not be particularly harmful, tacitly allowing such lazy academic behavior surely has no place in modern research. Structures that force gaming of the system should, of course, be revised, but one can’t help thinking that if academics carried the same rigor and logic into their publishing decisions as they do into their research, scholarly communications would be in much better shape for all concerned.
COUNTER (Counting Online Usage of NeTworked Electronic Resources) is a non-profit organization that helps libraries from around the world determine the value of electronic resources provided by different vendors by setting standards for the recording and reporting of usage stats in a consistent and compatible way. The COUNTER Code of Practice was developed with the assistance of library, publisher, and vendor members through working groups and outreach.
By implementing the Code of Practice, publishers and vendors support their library customers by providing statistics in a way that allows for meaningful analysis and cost comparison. This allows libraries to closely assess user activity, calculate cost-per-use data, and make informed purchasing and infrastructure planning decisions, ensuring limited funds are spent in the most efficient way possible.
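The cost-per-use calculation mentioned above is simple arithmetic once usage is reported consistently: annual subscription cost divided by annual usage. A minimal sketch, with made-up prices and usage figures for illustration:

```python
# Cost-per-use from COUNTER-style usage counts. The titles, prices,
# and usage figures below are invented for illustration.

annual_cost = {"Journal A": 1200.00, "Journal B": 300.00}   # subscription price
annual_uses = {"Journal A": 4800, "Journal B": 60}          # e.g. yearly item requests

cost_per_use = {
    title: annual_cost[title] / annual_uses[title]
    for title in annual_cost
}

# Rank titles from best to worst value for a renewal decision:
for title, cpu in sorted(cost_per_use.items(), key=lambda kv: kv[1]):
    print(f"{title}: ${cpu:.2f} per use")
```

The point of the Code of Practice is that because both vendors report usage the same way, the two figures above are genuinely comparable, and a library can see at a glance which subscription delivers value.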