What lies beneath

The first set of data from Cabells’ collaboration with Inera’s Edifix shows that nearly 300 article checks included references to predatory journals. Simon Linacre looks behind the data to share more details about ‘citation contamination.’


A few months ago, Cabells announced a trial partnership with the Edifix service, an article checking tool from Wiley’s Inera division (watch the free webinar discussing the collaboration from SSP’s OnDemand Library). Subscribers to Edifix can check their article’s references against Cabells’ Predatory Reports database for free during an open beta phase, and the first results of this offer have been announced by Edifix on their latest blog. The results show that:

  • Since May 2021, a total of 295 jobs have had at least one reference flagged as appearing in a journal currently listed in Cabells’ Predatory Reports
  • Of those 295 jobs, 66 (22%) included multiple references to predatory journals
  • Over the same period, Edifix processed 7,102 jobs in total (containing 104,140 submitted references, of which it was able to fully process 89,180), so overall around 4% of all live jobs included at least one reference flagged against the Predatory Reports database.
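For anyone who wants to see how those headline percentages fall out of the raw counts, the arithmetic is simple. Below is a minimal Python sketch using only the figures Edifix reported above; the script is purely illustrative and not part of either service:

```python
# Back-of-the-envelope check of the Edifix open beta figures quoted above.
total_jobs = 7102          # jobs processed by Edifix since May 2021
flagged_jobs = 295         # jobs with at least one reference to a listed journal
multi_flag_jobs = 66       # flagged jobs containing multiple predatory references
total_refs = 104_140       # references submitted across all jobs
processed_refs = 89_180    # references Edifix was able to fully process

print(f"Jobs with a flagged reference:   {flagged_jobs / total_jobs:.1%}")      # ~4.2%
print(f"Flagged jobs with multiple hits: {multi_flag_jobs / flagged_jobs:.1%}") # ~22.4%
print(f"References fully processed:      {processed_refs / total_refs:.1%}")    # ~85.6%
```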

To recap, it is in the interests of all stakeholders in scholarly communications – authors, universities, societies, funders, and society as a whole – that research is not lost to predatory publishing activities. The Edifix and Cabells collaboration is designed not only to offer access to a database such as Predatory Reports to help all these stakeholders, but to augment their capabilities to produce the best research.

In addition, the collaboration represents a step forward in preventing ‘citation contamination’, where articles published in predatory journals find their way into legitimate journals by being referenced by them directly. The new service allows users to vet references for citations to predatory journals, as identified by Predatory Reports, and reduce the contamination of the scholarly record.

It is important to underline that while checking references won’t remove the predatory journal publications in the first place, it will ensure that those articles are cited less often and that the research they contain is checked. Authors cite articles on the assumption that their contents have been peer reviewed, which is precisely what is least likely to have happened at a predatory journal. If an author understands that the work they are citing may have had no peer review – or only a sub-standard or superficial one – they can find other literature to support their case. The analogy of contamination is a strong one: not only does it conjure up the stench many feel predatory publishing practices represent, it also describes how the problem can ‘cross-contaminate’ other journals and research projects. By empowering authors to clean up their research, and by highlighting the problem of contamination more widely, it is hoped that this early experiment can lead to further steps forward in the fight against predatory publishing.

Peer Review Week 2021: Identity in Peer Review

Peer Review Week 2021 has been announced for September 20–24 with the theme of Identity in Peer Review. Simon Linacre, who volunteers for the event’s Steering Committee, takes a look at the importance of the event and this year’s chosen theme.


For those new to scholarly communication, the annual celebration of peer review probably seems one of the more unlikely events to occur in the crowded calendar. It makes sense for relatively novel ideas such as open access and open science to have their day – or week – in the sun in October, while other events supporting academic research and universities in general pepper the rest of the year. So why is boring old peer review so special?

Well, it may be a surprise to learn it’s not that old, and when you dig deeper you find it is anything but boring. While journals began life in the 17th Century – 1665, to be precise – it seems the first peer reviews only took place in the 18th Century, and external reviews in the Victorian period. According to academic publishing historian Alex Csiszar, peer reviews grew from these beginnings very slowly, and only took hold in mainstream science journals in the post-war period.

Furthermore, this year’s theme shows that the issues and challenges facing the world today are very much relevant to the process of peer review. Identity in Peer Review was the first Peer Review Week theme to be chosen by the public, and will explore the role of both personal and social identity in peer review. It is hoped that the various events and activities during the week will foster a more diverse, equitable and inclusive approach to peer review. Academia has placed increasing emphasis on ensuring that the research literature reflects and amplifies diverse voices, and the manner in which peer review is conducted is of course key to that.

Peer Review Week steering committee co-chair Danielle Padula says: “If the past year has taught us anything, I think it’s that recognizing the composite of identities that make up who we are as individuals, organizations, and populations, and the links between those identities, is essential to the future of scholarship and, ultimately, global progress. The pandemic has illuminated myriad deep-seated inequities that we need to address in all areas of society, with academia being no exception. And I think that starts with unpacking various aspects of personal and social identity and how we need to rethink the systems in which we operate to acknowledge and make space for diverse identities.”

Looking back to learn about the future is an apt approach, given that the past of peer review is not far behind us, and radical change potentially so near in the future. As ever, focusing on peer review makes a lot of sense for everyone with an interest in knowledge sharing and scholarly communications. Roll on September.

If you are interested in learning more or volunteering, please visit the Peer Review Week website, or you can contact Danielle Padula (dpadula@scholasticahq.com) or Jayashree Rajagopalan (jayashreer@cactusglobal.com), who are co-chairing this year’s PRW steering committee.

No signs of slowing

Cabells adds journals to its Predatory Reports database continuously, with over 10,000 added to the original 4,000 it launched with in 2017. But can we learn anything from the journals that have been added recently? To find out, Simon Linacre takes a look at the predatory journals listed throughout June 2021.


Fancy reading up on some research to learn about COVID-19? A lot of people will have been doing the same thing over the last 18 months as they try and figure out for themselves what on earth has been happening. They will do some Google searches and read articles published in journals like the British Medical Journal, New England Journal of Medicine and the Open Journal of Epidemiology. Sure, the third one doesn’t sound quite as prestigious as the other two, but it has a bunch of articles on epidemics, and it’s all free to read – so that’s good, right?

Sadly, it’s not so good. The Open Journal of Epidemiology is one of 99 journals that Cabells listed in its Predatory Reports database last month, and is published by SCIRP (Scientific Research Publishing), a well-known predatory publisher based in China. The journal – not to be confused with the British Open Journal of Epidemiology or the American Open Epidemiology Journal, both also in Predatory Reports – exhibits dubious publication practices such as falsely claiming indexation in well-known databases, promising unusually quick peer review, and publishing authors several times in the same journal and/or issue.

The journal’s search function points to a handful of articles relating to ‘COVID’, including one on ex-patients and their recovery which, according to the website, has been downloaded 200 times and viewed nearly 600 times. But we know that work published in this journal is unlikely to have received a full peer review, if any at all, and the data on the website is difficult to trust – the Open Journal of Epidemiology was just one of 26 journals from the same publisher that Cabells listed last month.

In total, eight publishers had journals listed in June, the biggest being Bilingual Published Co., based in Singapore, with 30 journals. Other publishers had fewer journals listed and were based in several different countries – India, Hong Kong, Kenya and even Belgium – and it is worth pointing out that Cabells reviews each journal independently rather than the publisher as a whole.

What else can we glean from this selection of predatory titles? Just 11 out of 99 had no ISSN, further underlining the folly of using the existence of an ISSN as evidence of legitimacy. On average the journals were four-to-five years old, so reasonably well established, and predominantly based in STEM research areas. Of the 99 journals listed, just 13 were in non-STEM areas such as Education and Management. The most common subject was Medicine with no fewer than 38 journals represented. However, it is worth pointing out that many predatory publishers are either hopelessly generic, or will publish anything even if the article has nothing to do with the core topics of the journal.

Cabells is kept rather busy reviewing all these journals, but if you do spot a suspicious journal or receive one of those annoying spam emails, do let us know at journals@cabells.com and we can perform a review so that others won’t be deceived or fall into the numerous traps being laid for them.

The top nine questions on predatory journals – answered!

In the course of researching a book on predatory publishing, Simon Linacre wanted to find some answers to common questions on the subject. In his latest blog post, he shares why straightforward questions rarely have easy answers when it comes to this controversial topic.


Have you ever wondered where those questions come from near the top of a Google search? Headlined ‘People Also Ask’ (PAA), the feature was introduced by Google in 2015 to aid search activities and, according to SearchEngineWatch.com, now appears in around half of all searches. The algorithms that trigger the feature seem to respond mainly to searches that are phrased as questions and include multiple keywords, and PAA now forms part of the standard toolbox of any digital marketer, as it opens up a wider range of sites than the top three hits on a Google search engine results page (SERP).

For academic researchers, the feature is probably both a benefit and an irrelevance. While it may help some to gain a wider understanding of the kinds of questions being asked about a topic – and it certainly helped me in this regard – it will also annoy others with much more sophisticated search skills and needs, for whom being sent down a potential blind alley is something to be avoided.

But are the questions posed by the algorithm any use? To put it to the test, here are the top nine results for the question ‘What is a predatory journal?’, posed on Wednesday 23rd June 2021. The initial question reveals four results (see Figure A), clicking on the first answer reveals a further two (Figure B), and clicking on the second question reveals a total of nine questions (Figure C). These questions will differ depending on which question is clicked, as the algorithm seeks to provide further questions related to the one that was clicked on.

Figure A

Figure B

Figure C

Each question provides a summary answer and a link through to the original web page, and these sources inevitably vary greatly in how useful they actually are. Some are blogs, some university library guides, and others Wikipedia. What is perhaps concerning is the direction the questions take: it is not the sources per se that provide worrying information, but the questions posed in the first place, presumably generated by an algorithm based on usage data and relevance to the questions being asked. So, to try and set the record straight in our own small way, here are some short and more realistic answers to the nine questions Google puts forward as most relevant to the predatory journal question:

Q. What is meant by predatory Journal?

Wikipedia supplies as good a short description as any, with the addition that there is rarely, if ever, any peer review at all: “Predatory publishing is an exploitative academic publishing business model that involves charging publication fees to authors without checking articles for quality and legitimacy, and without providing editorial and publishing services that legitimate academic journals provide, whether open access or not.”

Q. How do you know if a journal is predatory?

Common indicators include fake claims of an Impact Factor, lack of information/lies about the Editorial Board, and unrealistic promises of a fast turnaround.

Q. What happens if you publish in a predatory journal?

It stays published – retraction is highly unlikely, and trying to republish the article in a legitimate journal will only compound the problem by breaching publication ethics guidelines.

Q. What is a predatory journal a journal published over Internet?

Predatory journals began life by taking advantage of online publication as well as the Open Access model – both things were simply combined to create the right circumstances for predatory journals to evolve.

Q. Why are predatory journals bad?

Predatory journals do not check the validity or accuracy of submitted research but present it as if they have. As a result, junk science, propaganda, and faked research can appear and be accessed by other academics and the general public alike, causing confusion and potential harm to anyone adopting that research for another purpose.

Q. Is PLOS ONE a predatory journal?

No, not at all. PLOS ONE, like many so-called ‘mega-journals’, publishes large numbers of articles based on a light-touch peer review that nevertheless checks the validity and accuracy of the research articles submitted.

Q. How can you detect and avoid predatory journals?

Research the topic and use the many guidelines provided by university libraries around the world. You can also consult the criteria Cabells uses to identify predatory journals for inclusion in its Predatory Reports database.

Q. How many predatory journals are there?

There are currently 14,647 journals listed on Cabells’ Predatory Reports database.

Q. What is the warning sign that a journal or publisher is predatory?

In addition to the common indicators listed above, other more superficial signs can include poor grammar/spelling, very broad coverage of a topic, or solicitation of article submissions with excessive flattery in spam emails.

The rise and rise of predatory journals and conferences

Editor’s Note: Today’s post is by Tracey Elliott, Ph.D. Dr. Elliott is the Project Director at InterAcademy Partnership (IAP), currently overseeing Combatting Predatory Academic Journals and Conferences.


Predatory academic journals and, even more so, predatory conferences have been given surprisingly little attention in academic circles, despite their rapid growth and sophistication in recent years. Set against the pervasive “publish or perish” research assessment culture, where quantity trumps quality, the research community risks sleepwalking into a perfect storm. Predatory academic practices are one manifestation of a surge in online scams and deceit that is deluging many sectors, fuelled further by changes in (post-)pandemic lifestyles, but their impact on the knowledge economy, research enterprise, and public policy is potentially profound.

The InterAcademy Partnership (IAP) – the global network of over 140 academies of science, engineering and medicine – is leading an international project “Combatting predatory journals and conferences” which seeks to better understand the growing menace of these practices, gauge their extent and impact, what drives them and what actions are required to curb them.  With the number of predatory journals now estimated to be at least 14,500 (Cabells) and predatory conferences believed to outnumber legitimate ones (THES), this project is imperative and our recent survey of researchers all over the world is illuminating.

Conducted in November-December 2020, the survey gives concerning insight into the extent and impact of predatory practices across the world.  Based on the 1800+ respondents, two headlines are particularly striking:

1. Over 80% of respondents perceived predatory practices to be a serious problem or on the rise in their country.
2. At least a quarter of respondents had either published in a predatory journal, participated in a predatory conference, or did not know if they had.  Reasons cited for this included a lack of awareness of such scams and encouragement by their peers. Indeed, there is anecdotal evidence to suggest that the use of predatory journals and conferences is embedded, or at least tolerated, in some institutions/networks.

Contrary to some studies suggesting that early career researchers are especially vulnerable, we found no correlation between a researcher’s career stage or discipline and their likelihood of publishing in a predatory journal or participating in a predatory conference. However, there is a small correlation with the economic status of the country in which they work: those in lower- and middle-income countries are more likely to participate or publish than those in high-income countries. If left unchecked, the research gap between higher- and lower-income countries risks widening. Putting definitive guidance on predatory journals behind paywalls, whilst sometimes unavoidable, risks exacerbating this further.

A challenge for such essential services, whether paywalled or not, is how to distinguish fraudulent, deceitful journals from low-quality but well-intentioned and legitimate ones. Whilst bringing the clarity researchers crave, journal safelists and watchlists force an in-or-out binary decision that is increasingly inadequate and unfair. In reality, there is a spectrum of fast-evolving and highly nuanced publishing practices, which makes the work of Cabells and its counterparts very difficult. IAP is currently exploring a subset of Cabells-listed predatory journals using internet scraping and spidering techniques to gather data on predatory publishing.

Our project report, anticipated by early 2022, will include recommendations for all key stakeholder communities – researchers, research funders, publishers, academies and universities, libraries, and indexing services. With IAP as a conduit to academies and research communities throughout the world, we will focus on awareness-raising, training, and mentoring resources, and mobilising governments, multilateral and intergovernmental organisations.

Industrial disease

It’s almost four years since Cabells launched its Predatory Reports database, but the battle to overcome predatory journals shows no signs of abating. As a result, Cabells is constantly developing new ways to support authors and their institutions in dealing with the problem, and this week Simon Linacre reports from the virtual SSP Annual Meeting on a new collaboration with Edifix from Inera, which helps identify articles and authors published in predatory journals.

A common retort heard or read on social media whenever there is a discussion of predatory journals goes something like this: “is there really any harm done?”, “some research is only good enough for those kinds of journals,” or “everyone knows those journals are fake.” For the latter two rejoinders there is some justification, and if recent global events have taught us anything, it is that we need a sense of proportion when dealing with scientific breakthroughs and analysis. But the first point really doesn’t hold water because, when you think it through, a good deal of harm is done to a number of different stakeholders as a result of one article appearing in a predatory journal.

Predatory journals do researchers and their institutions a huge disservice by claiming to be a reputable outlet for publication. Legitimate journals provide valuable services to both promote and protect authors’ work, which simply doesn’t happen with predatory journals. Essentially, there are three key reasons why authors and their employers can suffer harm from publishing in the wrong journals:

  • Their work may be subject to sub-par peer review, or more likely no peer review at all. The peer review system isn’t perfect, but papers that undergo peer review are better for it. Researchers want to make sure they are publishing in a place that values their work and is willing to devote time and resources to improving it.
  • Versions of record could disappear. One of the advantages of publishing with a reputable journal is that they make commitments to preserve authors’ work. Opportunists looking to make a quick buck are not going to care if your paper is still available in five years – or even five weeks.
  • Published articles will be hard to find. Some predatory journals advertise that they are included in well-known databases like Web of Science, Scopus, or Cabells when they are not. Predatory publishers invest nothing in SEO and make no effort to have their journals included in research databases, so the research they publish won’t be easily discoverable.

So, it is in the interests of authors, universities, societies, funders and society itself that research is not lost to predatory publishing activities. Checking against a database such as Predatory Reports will help those stakeholders, but to augment their capabilities Cabells is collaborating with Atypon’s Inera division, and specifically its Edifix product to help prevent ‘citation contamination’. This is where illegitimate articles published in predatory journals find their way into the research bloodstream by being referenced by legitimate journals. With Edifix, users can now vet bibliographic reference lists for citations to predatory journals, as identified by Predatory Reports.
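To make the idea of vetting references concrete, here is a minimal, purely hypothetical sketch of the underlying technique: matching the journal titles in a parsed reference list against a watchlist. Neither Edifix’s integration nor Cabells’ matching logic is public, so every name and detail below is an illustrative assumption; a real system would also need to handle title abbreviations, ISSNs and near-duplicate titles.

```python
# Illustrative sketch only - not Edifix's or Cabells' actual implementation.

def normalize(title: str) -> str:
    """Crude normalization so case and punctuation differences still match."""
    return " ".join(title.lower().replace(".", " ").replace(",", " ").split())

# Hypothetical in-memory stand-in for a Predatory Reports lookup; the title
# below is the listed journal discussed elsewhere in this series of posts.
WATCHLIST = {normalize(t) for t in [
    "Open Journal of Epidemiology",
]}

def flag_predatory(references):
    """Return the references whose journal title appears on the watchlist."""
    return [ref for ref in references if normalize(ref["journal"]) in WATCHLIST]

refs = [
    {"authors": "Smith et al.", "journal": "New England Journal of Medicine"},
    {"authors": "Jones et al.", "journal": "Open Journal of Epidemiology"},
]
for hit in flag_predatory(refs):
    print("Flagged:", hit["authors"], "-", hit["journal"])
```

Even this toy version shows the fragility of matching on titles alone: predatory journals deliberately adopt near-duplicates of legitimate names (the British Open Journal of Epidemiology, for example), so real matching needs journal-level identifiers and curated review rather than simple string comparison.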

This new Edifix web service with the automated Cabells Reference Checking Tool was showcased at SSP’s Annual Meeting (meeting registration required) this week (and previewed in an SSP sponsored session in October 2020) with a host of other new innovations, collaborations and product developments from the scholarly communications industry. While it would have been great to see old friends and colleagues in person at the event, the virtual format enabled much wider, international engagement which contributed to an undoubtedly successful event.

What really counts for rankings?

University and business school rankings have induced hate and ridicule in equal measure since they were first developed, and yet, we are told, they enjoy huge popularity with students. Simon Linacre looks at how the status quo could change, thanks in part to some rankings’ own shortcomings.


In a story earlier this month in Times Higher Education (THE), it was reported that the status of a university vis-à-vis sustainability was now the primary consideration for international students, ahead of academic reputation, location, job prospects and even accessibility for Uber Eats deliveries – OK, maybe not the last one. But for those who think students should place such considerations at the top of their lists, this was indeed one of those rare things in higher ed in recent times: a good news story.

But how do students choose such a university? Amazingly, THE produced a ranking just a week later providing students with, you guessed it, a ranking of universities based on their sustainability credentials. Aligned with the UN’s now-ubiquitous Sustainable Development Goals (SDGs), the ranking is now well established, and this year proclaimed the University of Manchester in the UK as the university with the highest impact across all 17 SDGs – although it was something of an outlier for the UK, with four of the top ten universities based in Australia.

Cynics may point out that such rankings have become an essential part of the marketing mix for outfits such as THE, the Financial Times and QS. Indeed the latter has faced allegations this week over possible conflicts of interest between its consulting arm and its rankings with regard to universities in Russia – a charge which QS denies. However, perhaps most concerning is the imbalance that has always existed between the importance placed on rankings by institutions and the transparency and/or relevance of the rankings themselves. A perpetual case of the tail wagging the dog.

Take, for instance, the list of 50 journals used by the Financial Times as the basis for one of its numerous criteria for assessing business schools in its annual rankings. The list is currently under review, having remained unchanged since 2016, when just five journals were added to the 45 used before that date – itself an upgrade from the 40 used in the 2000s. In other words, despite the massive changes seen in business and business education – from Enron to the global financial crisis to globalisation to the COVID pandemic – there has been barely any change in the journals used to assess business school publications and determine whether they are high quality.

The FT’s Global Education Editor Andrew Jack was questioned about the relevance of the FT50 and the rankings in general in Davos in 2020, and answered that to change the criteria would endanger the comparability of the rankings. This intransigence by the FT and other actors in higher education and scholarly communications was in part the motivation behind Cabells’ pilot study with the Haub School of Business at St Joseph’s University in the US to create a new rating based on journals’ output intensity in terms of the SDGs. Maintaining the status quo also reinforces paradigms and restricts diversity, marginalizing those in vulnerable and alternative environments.

If students and authors want information on SDGs and sustainability to inform their education choices, it is incumbent on the industry to try and supply it in as many ways as possible – and not to worry about how well the numbers stack up against a world we left behind long ago, a world that some agencies seem to want to cling to despite its evident shortcomings.

No more grist to the mill

Numerous recent reports have highlighted the problems caused by published articles that originated from paper mills. Simon Linacre asks what these 21st Century mills do and what other dangers could lurk in the future.


For those of us who remember life before the internet, and have witnessed its all-encompassing influence rise over the years, there is a certain irony in the idea of recommending a Wikipedia page as a trusted source of information on academic research. In the early days of Jimmy Wales’ huge project, whilst it was praised for its utility and breadth, there was always a knowing nod when referring someone there as if to say ‘obviously, don’t believe everything you read on there.’ Stories about fake deaths and hijacked pages cemented its reputation as a useful, but flawed source of information.

However, in recent years those knowing winks seem to have subsided, and in a way it has become rather boring and reliable. We no longer hear about Dave Grohl dying prematurely, and such is our level of scepticism that most of us can probably suss out whether anything on the site fails the smell test. As a result, it has become perhaps what it always wanted to be – the first port of call for quick information.

That said, one would hesitate to recommend it to one’s children as a sole source of information, and any researcher would think twice before citing it in their work. Hence the irony in recommending the following Wikipedia page as a first step towards understanding the dangers posed by paper mills: https://en.wikipedia.org/wiki/Research_paper_mill. It is the perfect primer, describing briefly what paper mills are and citing up-to-date sources from Nature and COPE on the impact they have had on scholarly publishing and how to deal with them.

For the uninitiated, paper mills are third-party organisations set up to create articles that individuals can submit to journals to gain a publication without having to do much, or even any, of the original research. Linked to their cousin the essay mill, which serves undergraduates, paper mills may have generated thousands of articles that have subsequently been published in legitimate research journals.

What the reports and guidance from Nature and COPE seem to suggest is that while many paper mills have sprung up in China and are used by Chinese authors, recent changes in Chinese government policy, moving away from strict publication-counting as a performance measure, could mitigate the problem. In addition, high-profile cases shared by publishers such as Wiley and Sage point to some success in identifying transgressions, leading to multiple retractions (albeit rather slowly). The problem such articles present is clear: they introduce junk or fake science that could lead to numerous problems if taken at face value by other researchers or the general public. What’s more, there is the worrying possibility of paper mills increasing their sophistication to evade detection, ultimately eroding the faith people have always had in peer-reviewed academic research. If Wikipedia can turn around its reputation so effectively, then perhaps it’s not too late for the scholarly publishing industry to act in concert and head off a similar problem.

Cabells becomes a member of United Nations SDG Publishers Compact

Cabells is proud to announce its acceptance as a full member of the United Nations SDG Publishers Compact, becoming one of the first U.S. organizations and non-primary publishers globally to be awarded membership. Cabells joined the initiative as part of its ongoing commitment to support research and publications focused on sustainable solutions.

The SDG Publishers Compact was launched at the end of 2020 as a way to stimulate action across the scholarly communications community. Developed in collaboration with the International Publishers Association (IPA), it aims to speed up progress towards the UN’s 17 Sustainable Development Goals (SDGs) by 2030.

As a signatory of the Publishers Compact, Cabells commits to developing sustainable practices and playing a key role in its networks and communities as a champion of the SDGs during what is becoming known as the ‘Decade of Action’ from 2020 to 2030. As such, Cabells is developing a number of solutions designed to help identify SDG-relevant journals and research for authors, librarians, funders, and other research-focused organizations.

Cabells’ Director of International Marketing & Development, Simon Linacre, said: “The UN SDGs have already done a remarkable job in directing funding and research to the most important questions facing our planet at this time. Becoming part of the UN SDG Publishers Compact will inspire Cabells into further playing our part in meeting these grand challenges.”

For more information, visit www.cabells.com or read the UN’s original press release.

Predatory journals vs. preprints: What’s the difference?

Working towards publication in a legitimate journal, however circuitous the route, is of course a much better path than publishing in an illegitimate journal. Even so, Simon Linacre examines why the question in the title is a useful one to consider.

A blog post this week in The Geyser pointed out the problems surrounding version control of the same article on multiple preprint servers and on the F1000 platform.

TL;DR? It isn’t pretty.

The article used as an example is unquestionably a legitimate study relating to the coronavirus pandemic, and as such is a small but important piece in the jigsaw being built around science’s pandemic response. The article has yet to be validated – and thus enabled as a piece that fits the COVID-19 jigsaw – something that will presumably happen once it is published in a recognized peer-reviewed journal.

However, this does raise a rather thorny question: how is the article any better served fragmented across different preprint servers and publishing platforms than it would have been if published as a single entity in a predatory journal?

I am being facetious here – working towards a legitimate publication, however circuitous the route, is far better than publishing in an illegitimate journal. However, comparing the two options is not as strange as one might think, and perhaps offers some guidance for authors uncertain about where to publish their research in the first place.

Firstly, early career researchers (ECRs), while often offered very little direction when it comes to publication ethics and decision-making, are understandably worried about sharing their data and findings on preprint servers for fear of being ‘scooped’ by other researchers who copy their results and get published first. This is a legitimate fear, and is one explanation why a researcher, although unfamiliar with a journal, might submit their research for a low fee and quick turnaround.

Secondly, ECRs or more experienced researchers may be incentivised by their institutions to simply achieve a publication without any checks on the type of journal they publish in. As such, they need a journal to validate their publication – even if the journal itself has not been validated – which is something preprints or non-journal platforms are unable to provide.

Finally, while recent research has shown that just over half of the articles published in predatory journals receive no citations, just under half do, and authors may prefer one sole, accessible source for their research to multiple versions across different preprint servers. This is not to say that preprints can’t receive citations – indeed, Google Scholar reveals 22 citations to the article above from its original posting on arXiv – but the perception may be that only journals can deliver citations, and journals will therefore be the aim for some authors.

Of course, authors should know the very real difference between a predatory journal and a preprint, but the evidence of 14,000+ journals in Cabells’ Predatory Reports database and the millions of spam emails sent daily by illegitimate journals points to at least some researchers falling for the same tricks and continuing to line the pockets of predatory publishers. While research publishing options remain as varied and as complex as they are – and while higher education institutions and funders simply assume every researcher has an effective publishing strategy – as many will fall into the predatory trap as have always done.