Beware of publishers bearing gifts

In the penultimate post of 2019, Simon Linacre looks at the recent publication of a new definition of predatory publishing and challenges whether such a definition is fit for purpose for those who really need it – authors


In this season of glad tidings and good cheer, it is worth reflecting that not everyone who approaches academic researchers bearing gifts is necessarily Father Christmas. Indeed, the seasonal messages popping into researchers' inboxes at this time of year may offer opportunities to publish that seem too good to miss, but in reality, they could easily be a nightmare before Christmas.
 
Predatory publishers are the very opposite of Santa Claus. They will come into your house, eat your mince pies, but rather than leave you presents they will steal your most precious possession – your intellectual property. Publishing an article in a predatory journal could ruin an academic’s career, and it is very hard to undo once it has been done. Interestingly, one of the most popular case studies this year on COPE’s website is on what to do if you are unable to retract an article from a predatory journal in order to publish it in a legitimate one. 
 
Cabells has added over two thousand journals to its Journals Blacklist in 2019 and will reach 13,000 in total in the New Year. Identifying a predatory journal can be tricky, which is why they are often so successful in duping authors; yet defining exactly what a predatory journal is can be fraught with difficulty. In addition, some commentators do not like the term at all – some argue that, from an academic perspective, 'predatory' is hard to define, while others think it is too narrow. 'Deceptive publishing' has been put forward, but this, in turn, could be seen as too broad.
 
Cabells uses over 70 criteria to identify titles for inclusion in its Journals Blacklist and widens the net to encompass deceptive, fraudulent and/or predatory journals. Defining what characterizes these journals in just a sentence or two is hard, but this is what a group of academics has done following a meeting in Ottawa, Canada earlier in 2019 on the topic of predatory publishing. The output of this meeting was the following definition:
 
“Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices.” (Grudniewicz et al., 2019)
 
The definition is presented as part of a comment piece published in Nature last week and came from a consensus reached at the Ottawa meeting. It is a pity that Cabells was not invited to the event and given the opportunity to contribute. As it is, the definition and accompanying explanation have been met with puzzlement in the Twittersphere, with a number of eminent Open Access advocates saying it allows almost any publisher to be described as predatory. For it to be relevant, it will need to be adopted and used by researchers globally as a test for any journal they are thinking of submitting to. Only time will tell if this will be the case.


From all of us at Cabells, we wish everyone a joyous holiday season and a healthy New Year. Our next blog will be published on January 15, 2020.

Bringing clarity to academic publishing

How do you know if a journal is a good or a bad one? It is a simple enough question, but there is a lack of clear information out there for researchers, and all too often scams that lay traps for the unwary. In his latest post, Simon Linacre presents some new videos from Cabells that explain what it does to ensure authors can keep fully informed.


On a chilly spring day in Edinburgh, a colleague and I were asked to do what nobody really wants to do if they can help it, and that is to 'act natural'. It is one of life's great ironies that it is so difficult to act naturally when told to do so. However, it was for a good cause, as we had been asked to explain to people through a short film what it was that Cabells did and why we thought it was important.

Video as a medium has been relatively ignored by scholarly publishers until quite recently. Video has of course been around for decades, and it has been possible to embed video on websites next to articles for a number of years. However, embedding video into PDFs has been tricky, and as every publisher who has surveyed user needs will tell you – academics 'just want the PDF'. As a result, there has been little in the way of innovation when it comes to scholarly communication, despite some brave attempts such as video journals, video abstracts and other accompaniments to the humble article.

Video has been growing as a search medium, particularly among younger academics, and it can be much more powerful when it comes to engagement and social media. Stepping aside from the debate about what constitutes impact and whether altmetrics and hits via social media really mean anything, video can be 'sticky' in the sense that people spend longer watching it than they do skipping over words on a web page. As such, the feeling is that video's time in scholarly communications may be yet to come.

So, in that spirit, Cabells has shot a short video with some key excerpts that take people through the Journal Whitelist and Journal Blacklist. It is hoped that it answers some questions that people may have, and spurs others to get in touch with us. The film is a first step in Cabells' development of a number of resources, on lots of different platforms, that will help researchers drink in knowledge of journals to optimize their decision-making. In a future of Open Access, new publishing platforms, and multiple publishing choices, the power to publish will increasingly be in the hands of the author, with the scholarly publishing industry increasingly seeking ways to satisfy authors' needs. Knowledge about publishing is the key to unlocking that power.

Updated CCI and DA metrics hit the Journal Whitelist

Hot off the press, newly updated Cabell’s Classification Index© (CCI©) and Difficulty of Acceptance© (DA©) scores for all Journal Whitelist publication summaries are now available. These insightful metrics are part of our powerful mix of intelligent data leading to informed and confident journal evaluations.

Research has become increasingly cross-disciplinary and, accordingly, an individual journal might publish articles relevant to several fields.  This means that researchers in different fields often use and value the same journal differently. Our CCI© calculation is a normalized citation metric that measures how a journal ranks compared to others in each discipline and topic in which it publishes and answers the question, “How and to whom is this journal important?” For example, a top journal in computer science might sometimes publish articles about educational technology, but researchers in educational technology might not really “care” about this journal the same way that computer scientists do. Conversely, top educational technology journals likely publish some articles about computer science, but these journals are not necessarily as highly regarded by the computer science community. In short, we think that journal evaluations must be more than just a number.

[Figure: CCI© 2019 updates screenshot]

The CCI© gauges how well a paper might perform in specific disciplines and topics and compares the influence of journals publishing content from different disciplines. Further, within each discipline, the CCI© classifies a journal’s influence for each topic that it covers. This gives users a way to evaluate not just how influential a journal is, but also the degree to which a journal influences different disciplines.
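Cabells does not publish the CCI© formula, but the general idea of discipline-level normalization can be shown with a minimal sketch. Everything below – the journal names, citation rates and percentile function – is hypothetical, and shows only how the same journal can rank very differently in each discipline it serves:

```python
# Illustrative only: Cabells does not publish the CCI(c) formula.
# A toy discipline-normalized rank: the same journal can score very
# differently in each discipline in which it publishes.

# Hypothetical citation rates, keyed by (discipline, journal).
citation_rate = {
    ("computer science", "J. Comp. Stud."): 8.2,
    ("computer science", "EdTech Review"): 2.1,
    ("computer science", "Systems Letters"): 5.5,
    ("educational technology", "EdTech Review"): 4.7,
    ("educational technology", "Learning & Tech"): 3.9,
    ("educational technology", "J. Comp. Stud."): 1.0,
}

def percentile_rank(discipline: str, journal: str) -> float:
    """Fraction of journals in the discipline performing at or below this one."""
    rates = [r for (d, _), r in citation_rate.items() if d == discipline]
    own = citation_rate[(discipline, journal)]
    return sum(r <= own for r in rates) / len(rates)

# A strong computer science journal can matter far less to another field:
print(percentile_rank("computer science", "J. Comp. Stud."))        # 1.00
print(percentile_rank("educational technology", "J. Comp. Stud."))  # ~0.33
```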

For research to have real impact it must first be seen, which makes maximizing visibility a priority for many scholars. Our Difficulty of Acceptance© (DA©) metric gives researchers a better way to gauge a journal's exclusivity, balancing the need for visibility against the very real challenge of getting accepted for publication.

[Figure: DA© 2019 updates screenshot]

The DA© rating quantifies a journal's history of publishing articles from top-performing research institutions. These institutions tend to dedicate more faculty, time, and resources to publishing often and in "popular" journals, so a journal that accepts more articles from them will tend to expect the kind of quality or novelty that those resources make possible. Researchers can therefore use the DA© to find journals with the best blend of potential visibility and manageable exclusivity.
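Again, the exact DA© methodology is Cabells' own; the following is a minimal sketch under the assumption that exclusivity can be proxied by the share of a journal's recent articles carrying at least one author from a top-performing institution. All names and numbers are invented:

```python
# Illustrative only: the DA(c) methodology is Cabells' own. This sketch
# assumes exclusivity can be proxied by the share of recent articles with
# at least one author from a top-performing institution.

def acceptance_difficulty(article_affiliations, top_institutions):
    """Share of articles with at least one author from a top institution."""
    hits = sum(
        any(inst in top_institutions for inst in affils)
        for affils in article_affiliations
    )
    return hits / len(article_affiliations)

# Hypothetical data: author affiliations per article, plus a top-institution set.
articles = [
    {"MIT", "Univ. of Lagos"},
    {"Regional College"},
    {"ETH Zurich"},
    {"Regional College", "City Univ."},
]
top = {"MIT", "ETH Zurich", "Stanford"}

print(acceptance_difficulty(articles, top))  # 0.5 -> moderately exclusive
```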

For more information on our metrics, methods, and products, please visit www.cabells.com.

The Journal Blacklist surpasses the 12,000-journal mark

Just how big a problem is predatory publishing? Simon Linacre reflects on this week's news that Cabells' Journal Blacklist has reached 12,000 journals, and shares some insights into publishing's dark side.


Predatory publishing has seen a great deal of coverage in 2019, with a variety of sting operations, opinion pieces and studies published on various aspects of the problem. It seems that while on the one hand there is no doubt that it is a problem for academia globally, on the other there is huge debate as to the size, shape and relative seriousness of that problem.

On the first of those points, the size looks to be pretty big – Cabells announced this week that its Journal Blacklist has hit the 12,000 mark. This is less than a year after it hit 10,000, and it is now triple the size it was when it launched in 2017. Much of this is down to the incredibly hard work of its evaluations team, but also to the fact that there are a LOT of predatory journals out there, with the numbers increasing daily.

On the last of those points, the aftershocks of the Federal Trade Commission's ruling against OMICS earlier this year are still being felt. While there is no sign of any contrition on the part of OMICS – or of the $50m fine being paid – the finding has garnered huge publicity and acted as a warning to some academics not to entrust their research to similar publishers. In addition, it has been reported that CrossRef has now revoked OMICS' membership.

However, the shape of the problem is still hard for many to grasp, and perhaps it would help to share some of the tools of the trade of deceptive publishers. Take one journal on the Cabells Journal Blacklist – the British Journal of Marketing Studies.

[Figure: Cabells Blacklist screenshot for the British Journal of Marketing Studies]

Sounds relatively normal, right? But a number of factors relating to this journal highlight many of the problems presented by deceptive journals (a toy checklist sketch follows the list):

  • The title includes the word 'British' as a proxy for quality; however, over 600 journals in the Blacklist include this descriptor, compared to just over 200 in Scopus' entire index of over 30,000 journals
  • The journal is published by European-American Journals alongside 81 other journals – a remarkable feat considering the publisher lists a small terraced house in Gillingham as its main headquarters
  • When Cabells reviewed it for inclusion in the Blacklist, it noted among other things that:
    • It falsely claimed to be indexed in well-known databases – we know this because among these was Cabells itself
    • It uses misleading metrics, including an “APS Impact Factor” of 6.80 – no such variant of the Web of Science metric exists, except on other predatory journal sites
    • There is no detailed peer review policy stated
    • There is no affiliation for the Editor, one Professor Paul Simon, and searches cannot uncover any marketing professors with such a name (or a Prof. Garfunkel, for that matter)
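To make the pattern concrete, here is the toy checklist promised above – emphatically not Cabells' methodology, which applies over 70 criteria – that simply counts red flags like the ones just described, using invented field names on a hypothetical journal record:

```python
# A toy red-flag counter - emphatically NOT Cabells' methodology, which
# applies over 70 criteria. Field names are invented for illustration.

def red_flag_count(journal: dict) -> int:
    """Count simple warning signs in a hypothetical journal record."""
    flags = 0
    if journal.get("unverifiable_indexing_claims"):  # e.g. falsely claims Cabells
        flags += 1
    if journal.get("metric_name") not in (None, "Journal Impact Factor", "CiteScore"):
        flags += 1  # look-alike metrics such as an "APS Impact Factor"
    if not journal.get("peer_review_policy"):
        flags += 1  # no detailed peer review policy stated
    if not journal.get("editor_affiliation"):
        flags += 1  # editor cannot be traced to any institution
    return flags

suspect = {
    "title": "British Journal of Marketing Studies",
    "unverifiable_indexing_claims": True,
    "metric_name": "APS Impact Factor",
    "peer_review_policy": "",
    "editor_affiliation": None,
}
print(red_flag_count(suspect))  # 4 -> steer well clear
```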

This IS a problem for academia because, no matter what the size and seriousness of predatory publishing may be, unless researchers learn to spot the signs of what it looks like, they will continue to get drawn in and waste their research, funding dollars, and even careers on deceptive publishing practices.

When does research end and publishing begin?

In his latest post, Simon Linacre argues that in order for authors to make optimal decisions – and not to get drawn into predatory publishing nightmares – research and publishing efforts should overlap substantially.


In a recent online discussion on predatory publishing, there was some debate as to the motivations of authors to choose predatory journals. A recent study in the ALPSP journal Learned Publishing found that academics publishing in such journals usually fell into one of two camps – either they were “uninformed” that the journal they had chosen to publish in was predatory in nature, or they were “unethical” in knowingly choosing such a journal in order to satisfy some publication goals.

However, a third category of researcher was suggested: the 'unfussy' author who neither knows nor cares what sort of journal they are publishing in. Certainly, there may be some overlap with the other two categories, but what they all have in common is bad decision-making. Whether one does not know, does not care, or does not mind which journal one publishes in, it seems to me that one should know, care, and mind on all three counts.

It was at this point that one of the group posed one of the best questions I have seen in many years in scholarly communications: when it comes to article publication, where does the science end in scientific research? Due in part to the terminology as well as the differing processes, research and publication are regarded as somehow distinct or separate – part of the same ecosystem, for sure, but requiring different skills, knowledge and approaches. The question is a good one because it challenges this duality. Isn't it possible for science to encompass some of the publishing process itself? And shouldn't the publishing process become more involved in the process of research?

The latter is already happening to a degree in moves by major publishers to climb up the supply chain and become more involved in research services provision (e.g. the acquisition of article platform services provider Atypon by Wiley). On the other side, there is surely an argument that once experiments or data collection are finished, the data analyzed logically and the conclusions written up, there is a place for scientific process to be followed in choosing a legitimate outlet with appropriate peer review processes. Surely any university or funder would expect such a scientific approach at every level from their employees or beneficiaries. A failure to do this lets in not only sub-optimal choices of journal but, worse, predatory outlets that will ultimately delegitimize scientific research as a whole.

I get that it may not be such a huge scandal if some ho-hum research is published in a 'crappy' journal so that an academic can tick some boxes at their university. However, while the outcome may not be particularly harmful, the tacit allowing of such lazy academic behavior surely has no place in modern research. Structures that force gaming of the system should, of course, be revised, but one can't help thinking that if academics carried the same rigor and logic forward into their publishing decisions as they do in their research, scholarly communications would be in much better shape for all concerned.

Faking the truth

Predatory publishing can cause harm in all sorts of ways, but so can fighting it with the wrong ammunition. In this blog post, Simon Linacre looks at examples of how organizations have gone the wrong way about doing the right thing.


One of the perks – and also the pains – of working in marketing is that you have to spend time trawling through social media posts. It is painful because no matter how good your filters are, there is a huge amount of unnecessary, unearthly and unhealthy content being shared in absolute torrents. On the flip side, however, there are a few gems worth investigating further. Many of them prove to be rabbit holes, but nevertheless, the chase can be worthwhile.

Searching through some posts earlier this month I happened upon mention of an updated list of recommended and predatory journals. Obviously, this is our gig at Cabells so I was genuinely intrigued to find out more. It turns out that the Directorate General of Scientific Research and Technological Development (RSDT) in Algeria has produced three lists on its website – two of recommended journals for Algerian researchers in two subject categories, and one of predatory journals and publishers.

Judgment

A cursory look at the predatory list shows that the first 100 or so journals match Beall's archived list almost exactly. Furthermore, there is nothing on the website that explains how and why such a list exists, other than an open warning to authors who publish in one of the journals listed:

“To this effect, any publication in a journal in category A or B which is predatory or published by a predatory publisher, or that exclusively publishes conference proceedings, is not accepted for defense of a doctoral thesis or university tenure.” (own translation)

In other words, your academic career could be in trouble if you publish in a journal in the RSDT list.

Consequences

The rights and wrongs, accuracies and inaccuracies of Beall's list have been debated elsewhere, but it is fair to say that as Beall was trying to eradicate predatory publishing practices by highlighting them, some journals were missed while some publishers and their titles were perhaps unfairly identified as predatory. The list is now over two years out of date, with one version being updated by no-one-knows-who. So what are the consequences for Algerian academics – and authors from anywhere else who are judged by the same policy – of publishing in a given journal?

  1. Publish in a journal on the RSDT list that is not in fact predatory: Career trouble
  2. Publish in a journal on the RSDT list that is predatory: Career trouble
  3. Publish in a journal not on the RSDT list that is predatory: Career trouble
  4. Publish in a journal not on the RSDT list that is not predatory: Career OK

Option 4 is obviously the best outcome, and Option 2 is a sad result for authors not doing their homework and de-risking their publishing strategy. But it seems there will be a large number of academics who make a valid choice based on independent criteria and still fall foul of an erroneous list (1), or who think they are safe because a journal is not on the RSDT list when it is in fact predatory (3).

Comparison

One of my colleagues at Cabells cross-referenced the RSDT list and the Cabells Blacklist, which now has 11,595 journals reviewed and validated as predatory. The results, summarized below with a short worked example of the overlap arithmetic after the list, show that due to a lack of crossover between the lists, many academics in Algeria, and potentially elsewhere, could be wrongly condemned or unwittingly publish in predatory journals:

  • In terms of publishers, the RSDT list contains 1601 unique publishers, while the Blacklist contains 443
  • There are exactly 200 publishers on both lists, meaning that around 12% of the publishers on the RSDT list are also included in the Blacklist, while 43% of the Blacklist publishers are also on the RSDT list
  • The RSDT list contains 2488 unique journals, of which only 81 also appear among the 11,500+ Blacklist journals
  • Less than 1% (0.7%) of the Blacklist is also on the RSDT list; conversely, about 3% of the RSDT list is also included on the Blacklist.
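To make the overlap arithmetic concrete, here is a minimal sketch of the cross-referencing using made-up journal names; the real lists are of course far larger:

```python
# A minimal worked example of the cross-referencing above, with made-up
# journal names; the real RSDT list and Blacklist are far larger.

rsdt = {"Journal A", "Journal B", "Journal C", "Journal D"}
blacklist = {"Journal C", "Journal D", "Journal E"}

shared = rsdt & blacklist
print(len(shared))                                   # 2 titles on both lists
print(f"{len(shared) / len(rsdt):.0%} of RSDT list also on Blacklist")
print(f"{len(shared) / len(blacklist):.0%} of Blacklist also on RSDT list")
```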

As always, the moral of the story for authors is ‘research your research’ – fully utilize the skills you have gained as an academic by applying them to researching your submission decision and checking multiple sources of information. Failure to do so could mean serious problems, wherever you are.

Why asking the experts is always a good idea

In the so-called ‘post-truth’ age where experts are sidelined in favor of good soundbites, Simon Linacre unashamedly uses expert insight in uncovering the truth behind poor publishing decisions… with some exciting news at the end!


Everyone in academia or scholarly publishing can name at least one time they came across a terrible publishing decision. Whether it was an author choosing the wrong journal, or indeed the journal choosing the wrong author, articles have found their way into print that never should have, and parties on both sides must live with the consequences for evermore.

My story involved an early career researcher (ECR) in the Middle East whom I was introduced to whilst delivering talks on how to get published in journals. The researcher had submitted an article to well-regarded Journal A but, tired of waiting on a decision, submitted the same article to a predatory-looking Journal B without withdrawing the prior submission. Journal B accepted the paper… and then so did Journal A, after the article had already appeared in Journal B's latest issue. Our hapless author went ahead and published the same article in Journal A – encouraged, so I was told, by his boss – and was then left with the unholy mess of dual publication, asking for my guidance. A tangled web indeed.

Expert advice

Our author made his poor publishing choice out of both ignorance and necessity: the boss who told him to accept publication in the better-ranked journal was the same boss who wanted to see improved publishing output from the faculty. At Cabells, we are fast approaching 11,000 predatory journals on our Blacklist, and it is easy to forget that every one of those journals is filled with articles from authors who, for some reason, made a decision to submit their work to them for publication.

The question therefore remains: why?

Literature reviewed

One researcher decided to answer this question herself by, you guessed it, looking at what other experts had said, in the form of a literature review of related articles. T. F. Frandsen's article, “Why do researchers decide to publish in questionable journals? A review of the literature”, is published by Wiley in the latest issue of Learned Publishing (currently available as a free access article here). In it, Frandsen draws out the following key points:

  • Criteria for choosing journals could be manipulated by predatory-type outlets to entrap researchers and encourage others
  • A ‘publish or perish’ culture has been blamed for the rise in ‘deceptive journals’ but may not be the only reason for their growth
  • Identifying journals as ‘predatory’ ignores the fact that authors may seek to publish in them as a simple route to career development
  • There are at least two different types of authors who publish in so-called deceptive journals: the “unethical” and the “uninformed”
  • Therefore, at least two different approaches to the problem are required

For the uninformed, Frandsen recommends that institutions ensure faculty members are as informed as possible about the dangers of predatory journals and the consequences of poor choices. For authors making unethical choices, she suggests removing the incentives that push them toward questionable decisions. More broadly, as well as improved awareness, better parameters around the quality of journals in which authors should publish could encourage a culture of transparency in journal publication choices. And that would be one decision that everyone in academia and scholarly publishing could approve of.

PS: Enjoying our series of original posts in The Source? The great news is that there will be much more original content, news and resources available for everyone in the academic and publishing communities in the coming weeks… look out for the next edition of The Source for some exciting new developments!