Bringing clarity to academic publishing

How do you know if a journal is a good or a bad one? It is a simple enough question, but there is a lack of clear information out there for researchers, and scams often lay traps for the unwary. In his latest post, Simon Linacre presents some new videos from Cabells that explain what it does to ensure authors can stay fully informed.


On a chilly spring day in Edinburgh, a colleague and I were asked to do what nobody really wants to do if they can help it: ‘act natural’. It is one of life’s great ironies that it is so difficult to act naturally when told to do so. However, it was for a good cause, as we had been asked to explain to people through a short film what it was that Cabells did and why we thought it was important.

Video as a medium has been relatively ignored by scholarly publishers until quite recently. Video has of course been around for decades, and it has been possible to embed video on websites next to articles for a number of years. However, embedding video into PDFs has been tricky, and as every publisher will tell you when they ask academics about user needs – academics ‘just want the PDF’. As a result, there has been little in the way of innovation when it comes to scholarly communication, despite some brave attempts such as video journals, video abstracts and other accompaniments to the humble article.

Video has been growing as a search medium, particularly for younger academics, and it can be much more powerful when it comes to engagement and social media. Stepping aside from the debate about what constitutes impact and whether Altmetrics and hits via social media really mean anything, video can be ‘sticky’ in the sense that people spend longer watching it than skimming over words on a web page. As such, the feeling is that video is a medium whose time may be yet to come in scholarly communications.

So, in that spirit, Cabells has shot a short video with some key excerpts that take people through the Journal Whitelist and Journal Blacklist. It is hoped that it answers some questions people may have, and spurs others to get in touch with us. The film is the first step in Cabells’ development of a number of resources, across different platforms, that will help researchers build their knowledge of journals and optimize their decision-making. In a future of Open Access, new publishing platforms, and multiple publishing choices, the power to publish will increasingly be in the hands of the author, with the scholarly publishing industry increasingly seeking ways to satisfy authors’ needs. Knowledge about publishing is the key to unlocking that power.

The Journal Blacklist surpasses the 12,000 journals listed mark

Just how big a problem is predatory publishing? Simon Linacre reflects on the news this week that Cabells announced it has reached 12,000 journals on its Journal Blacklist and shares some insights into publishing’s dark side.


Predatory publishing has seen a great deal of coverage in 2019, with a variety of sting operations, opinion pieces and studies published on various aspects of the problem. It seems that while, on the one hand, there is no doubt that it is a problem for academia globally, on the other there is huge debate as to the size, shape and relative seriousness of that problem.

On the first of those points, the size looks to be pretty big – Cabells announced this week that its Journal Blacklist has hit the 12,000 mark. This comes less than a year after it hit 10,000, and the list is now triple the size it was when it launched in 2017. Much of this is down to the incredibly hard work of its evaluations team, but also to the fact that there are a LOT of predatory journals out there, with the numbers increasing daily.

On the last of those points, the aftershocks of the Federal Trade Commission’s ruling against OMICS earlier this year are still being felt. While there is no sign of any contrition on the part of OMICS – or of the $50m fine being paid – the finding has garnered huge publicity and acted as a warning for some academics not to entrust their research with similar publishers. In addition, it has been reported that CrossRef has now cut OMICS membership.

However, the shape of the problem is still hard for many to grasp, and perhaps it would help to share some of the tools of the trade of deceptive publishers. Take one journal on the Cabells Journal Blacklist – the British Journal of Marketing Studies.

Cabells Blacklist Screenshot

Sounds relatively normal, right? But a number of factors relating to this journal highlight many of the problems presented by deceptive journals:

  • The title includes the word ‘British’ as a proxy for quality; however, over 600 journals in the Blacklist include this descriptor, compared to just over 200 in Scopus’ entire index of over 30,000 journals
  • The journal is published by European-American Journals alongside 81 other journals – a remarkable feat considering the publisher lists a small terraced house in Gillingham as its main headquarters
  • When Cabells reviewed it for inclusion in the Blacklist, it noted among other things that:
    • It falsely claimed to be indexed in well-known databases – we know this because among these was Cabells itself
    • It uses misleading metrics, including an “APS Impact Factor” of 6.80 – no such derivation of the Web of Science metric exists, apart from on other predatory journal sites
    • There is no detailed peer review policy stated
    • There is no affiliation for the Editor, one Professor Paul Simon, and searches cannot uncover any marketing professors with such a name (or a Prof. Garfunkel, for that matter)

This IS a problem for academia because, no matter what the size and seriousness of predatory publishing may be, unless researchers learn to spot the signs of what it looks like, they will continue to be drawn in and waste their research, funding dollars, and even careers on deceptive publishing practices.

When does research end and publishing begin?

In his latest post, Simon Linacre argues that in order for authors to make optimal decisions – and not to get drawn into predatory publishing nightmares – research and publishing efforts should overlap substantially.


In a recent online discussion on predatory publishing, there was some debate as to the motivations of authors to choose predatory journals. A recent study in the ALPSP journal Learned Publishing found that academics publishing in such journals usually fell into one of two camps – either they were “uninformed” that the journal they had chosen to publish in was predatory in nature, or they were “unethical” in knowingly choosing such a journal in order to satisfy some publication goals.

However, a third category of researcher was suggested: the ‘unfussy’ author, who neither cares nor knows what sort of journal they are publishing in. Certainly, there may be some overlap with the other two categories, but what they all have in common is bad decision-making. Whether one does not know, does not care, or does not mind which journal one publishes in, the remedy is the same: authors should know, care, and mind on all three counts.

It was at this point that one of the group posed one of the best questions I have seen in many years in scholarly communications: when it comes to article publication, where does the science end in scientific research? Due in part to the terminology as well as the differing processes, research and publication are regarded as somehow distinct or separate – part of the same ecosystem, for sure, but requiring different skills, knowledge and approaches. The question is a good one as it challenges this duality. Isn’t it possible for science to encompass some of the publishing process itself? And shouldn’t the publishing process become more involved in the process of research?

The latter is already happening to a degree, with major publishers moving up the supply chain to become more involved in research services provision (e.g. Wiley’s acquisition of the article platform services provider Atypon). On the other side, there is surely an argument that at the end of experiments or data collection – after analyzing data logically and writing up conclusions – there is a place for scientific process to be followed in choosing a legitimate outlet with appropriate peer review. Surely any university or funder would expect such a scientific approach at every level from their employees or beneficiaries. A failure here allows in not only sub-optimal choices of journal but, worse, predatory outlets that will ultimately delegitimize scientific research as a whole.

I get that it may not be such a huge scandal if some ho-hum research is published in a ‘crappy’ journal so that an academic can tick some boxes at their university. However, while the outcome may not be particularly harmful, the tacit allowing of such lazy academic behavior surely has no place in modern research. Structures that force gaming of the system should, of course, be revised, but one can’t help thinking that if academics carried the same rigor and logic forward into their publishing decisions as they did in their research, scholarly communications would be in much better shape for all concerned.

Still without peer?

Next week the annual celebration of peer review takes place – a process that, despite being centuries old, is still an integral part of scholarly communications. To show Cabells’ support of #PeerReviewWeek, Simon Linacre looks at why peer review deserves both its week in the calendar and to survive for many years to come.


I was recently asked by Cabells’ partners Editage to upload a video to YouTube explaining how the general public benefited from peer review. This is a good question, because I very much doubt the general public is aware at all of what peer review is and how it impacts their day-to-day lives. But if you reflect for just a moment, it is clear it impacts almost everything, much of which is taken for granted on a day-to-day basis.

Take making a trip to the shops. A car is the result of thousands of experiments and validated, peer-reviewed research over a century to arrive at the safest and most efficient means of driving people and things from one place to another; each supermarket product has been health and safety tested; each purchase uses digital technology such as the barcode, which has advanced through the years to enable fast and accurate purchasing; even the license plate recognition software that gives us a ticket when we stay too long in the car park will be the result of some peer-reviewed research (although most people may struggle to describe that as a ‘benefit’).

So, we do all benefit from peer review, even if we do not appreciate it all the time. Does that prove the value of peer review? For some, it is still an inefficient system for scholarly communications, and over the years a number of platforms have sought to disrupt it. For example, PLoS has been hugely successful as a publishing platform where a ‘light touch peer review’ has taken place to enable large-scale, quick turnaround publishing. More recently, F1000 has developed a post-publication peer review platform where all reviews are visible and take place on almost all articles that are submitted. While these platforms have undoubtedly offered variety and author choice to scientific publishing processes, they have yet to change the game, particularly in social sciences where more in-depth peer review is required.

Perhaps real disruption will be seen to accommodate peer review rather than change it. This week’s announcement at the ALPSP Conference by Cactus Communications – part of the same organization as Editage – of an AI-powered platform that can allow authors to submit articles to be viewed by multiple journal editors may just change the way peer review works. Instead of the multiple submit-review-reject cycles authors have to endure, they can submit their article to a system that can check for hygiene factor quality characteristics and relevance to journals’ coverage, and match them with potentially interested editors who can offer the opportunity for the article to then be peer reviewed.

If it works across a good number of journals, one can see that from the perspective of authors, editors and publishers, it would be a much more satisfactory process than the traditional one that still endures. And a much quicker one to boot, which means that the general public should see the benefits of peer review all the more speedily.

Agile thinking

In early November, Cabells is very pleased to be supporting the Global Business School Network (GBSN) at its annual conference in Lisbon, Portugal. In looking forward to the event, Simon Linacre looks at its theme of ‘Measuring the Impact of Business Schools’, and what this means for the development of scholarly communications.


For those of you not familiar with the Global Business School Network, they have been working with business schools, industry and charitable organizations in the developing world for many years, with the aim of enhancing access to high quality, highly relevant management education. As such, they are now a global player in developing international networking, knowledge-sharing and collaboration in wider business education communities.

Cabells will support their Annual Conference in November in its position as a leader in publishing analytics, and will host a workshop on ‘Research Impact for the Developing World’. This session will focus on the nature of management research itself – whether it should focus on global challenges rather than just business ones, and whether it can be measured effectively by traditional metrics, or if new ones can be introduced. The thinking is that unless the business school community is more pro-active about research and publishing models themselves, wider social goals will not be met and an opportunity lost to set the agenda globally.

GBSN and its members play a pivotal role here, both in seeking to take a lead on a new research agenda and in seizing the opportunity to be at the forefront of defining what relevant research looks like and how it can be incentivized and rewarded. With the advent of the United Nations Sustainable Development Goals (SDGs) – a “universal call to action to end poverty, protect the planet and ensure that all people enjoy peace and prosperity” – not only is there an increased push to change which research is prioritized, but there also comes a need to assess that research in different terms. This question will form the nub of many of the discussions in Lisbon in November.

So, what kind of new measures could be applied? Well firstly, this does assume that measures can be applied in the first place, and there are many people who think that any kind of measurement is unhelpful and unworkable. However, academic systems are based around reward and recognition, so to a greater or lesser degree, it is difficult to see measures disappearing completely. Responsible use of such measures is key, as is the informed use of a number of data points available – this is why Cabells includes unique data such as time to publication, acceptance rates, and its own Cabells Classification Index© (CCI©) which measures journal performance using citation data within subject areas.

In a new research environment, just as important will be new measures such as Altmetrics, which Cabells also includes in its data. Altmetrics can help express the level of online engagement research publications have had, and there is a feeling that this research communications space will become much bigger and more varied as scholars and institutions alike seek new ways to disseminate research information. This is one of the most exciting areas of development in research at the moment, and it will be fascinating to see what ideas GBSN and its members can come up with at their Annual Conference.

If you would like to attend the GBSN Annual Conference, Early Bird Registration is open until the 15th September.

Publish and be damned

The online world is awash with trolling, gaslighting and hate speech, and the academic portion is sadly not immune, despite its background in evidence, logical argument and rigorous analysis. For the avoidance of doubt, Simon Linacre establishes fact from fiction for Cabells in terms of Open Access, predatory publishing and product names.


When I went to university as a very naive 18-year-old Brit, I met an American for the first time: Rick, who lived down the corridor in my hall of residence. He was older than me, and a tad dull if I’m honest, but one evening we were chatting in our room about someone else in the hall, and he warned me about setting too much store by perceptions: “To assume is to make an ass out of u and me,” he told me sagely.

Twenty years later, while I have come to realize this phrase is a little corny, it still sticks in my mind every time I see or hear about people being angry about a situation where the full facts are not known. Increasingly, this is played out on social media, where sadly there are thousands of people making asses out of u, me and pretty much everyone else without any evidence whatsoever for their inflammatory statements.

To provide a useful point of reference for the future, we at Cabells thought we should define positions we hold on some key issues in scholarly publishing. So, without further ado, here is your cut-out-and-keep guide to our ACTUAL thinking in response to the following statements:

  • ‘Cabells is anti-OA’ or ‘Cabells likes paywalls’: Not true, in any way, shape or form. Cabells is pro-research, pro-quality and pro-author; we are OA-neutral in the sense that we do not judge journals or publishers in these terms. Over 13% of the Whitelist consists of pure OA journals, and two-thirds are hybrid OA.
  • ‘Cabells is anti-OA like Beall’ or ‘You’re carrying on Beall’s work’: Cabells had several discussions with Jeffrey Beall around the time he stopped work on his list and Cabells published the Blacklist for the first time in the same year (2017). However, the list did NOT start with Beall’s list, does NOT judge publishers (only journals) for predatory behavior, and the Blacklist shows considerable divergence from Beall’s List – only 234 journals are listed on both (according to Strinzel et al (2019)).
  • ‘Predatory publishing is insignificant’ or ‘Don’t use the term predatory publishing’: The recent FTC judgment fining OMICS over $50m shows that these practices are hardly insignificant, and that sum in no way quantifies the actual or potential harm done by publishing fake science or bad science without the safety net of peer review. Other terms for such practices may in time gain traction – fake journals, junk science, deceptive practices – but until then the most commonly used term seems the most appropriate.
  • ‘The Whitelist and Blacklist are racist’: The origins of the word ‘blacklist’ come from 17th century England, and was used to describe a number of people who had opposed Charles II during the Interregnum. It is a common feature of language that some things are described negatively as dark or black and vice versa. Cabells is 100% pro-equality and pro-diversity in academic research and publishing.
  • ‘Cabells unfairly targets new or small journals with the Blacklist’: Some of the criteria used for the Blacklist include benchmarks that a small number of legitimate journals may not pass. For example, there is a criterion regarding the speed of growth of a journal. This is a minor violation that identifies typical predatory behavior; it is not one of the severe violations Cabells uses to identify journals for the Blacklist, nor is it used in isolation. Good journals will therefore not be stigmatized, because minor violations alone are never enough for a journal to be included on the Blacklist. In the two years the Blacklist has been in operation, just three journals out of the 11,500+ listed have requested a review.

The power of four

After hearing so many different ways that its Journal Whitelist and Journal Blacklist have been used by customers, Cabells has started to map out how any researcher can use journal data to optimize their decision-making. Fresh from its debut at the World Congress on Research Integrity in Hong Kong last month, Simon Linacre shares the thinking behind the new ‘Four Factor Framework’ and how it could be used in the future.


The 6th World Congress on Research Integrity (WCRI) was held in Hong Kong last month, bringing together the great and the good of those seeking to improve the research process globally. Topics were surprisingly wide-ranging, covering such diverse areas as human rights, predatory publishing, data sharing, and policy. It was significant that while much of the focus of the conference was on the need to improve education and learning on how to conduct research ethically, many of the cases presented showed that there is still much to do in this respect.

Cabells was also there and used its presence to share some ideas on how to overcome some of these challenges, particularly with regard to engagement with improved research and publishing practices. Taking the established issues within predatory publishing encountered the world over as a starting point (i.e. choosing the wrong journal), as well as the need to look at as much data as possible (i.e. choosing the right journal), Cabells has very much put the author at the center of its thinking to develop what it has called the ‘Four Factor Framework’:

 

The framework, or FFF, firstly puts the onus on the researcher to rule out any poor, deceptive or predatory journals, using resources such as the Blacklist. This ‘negative’ first step then opens up the next stage, which is to take the four following factors into account before submitting a research paper to a journal:

  • Strategic: understanding how a journal will impact career ambitions or community perspectives
  • Environmental: bringing in wider factors such as research impact or ethical issues
  • Political: understanding key considerations such as publishing in titles on journal lists, avoiding such lists or journals based in certain areas
  • Cultural: taking into account types of research, peer review or article form

Having talked to many customers over a period of time, we have found that these factors all become relevant to authors at some point during that crucial period when they are choosing which journal to publish in. Customers have fed back to Cabells that their use of Cabells’ Whitelist and Blacklist – as well as other sources of data and guidance – can be categorized as benchmarking, performance-focused or risk management. While it is good to see that the databases can help so many authors in so many different ways, judging by the evidence at WCRI there is still a huge amount to do in educating researchers to take advantage of these optimized approaches. And this will be the main aim of Cabells’ emerging strategy – to enable real impact by researchers and universities through the provision of validated information and support services around scholarly publishing.

Look to the North

Say what you like about Canada – and plenty do – but they are taking the threat of predatory publishing more seriously than just about any other country. Simon Linacre reflects on recent activities North of the 49th parallel.


It’s all about Canada at the moment. They have the new NBA champions in the shape of the Toronto Raptors, overcoming huge odds to beat defending champions Golden State Warriors in a thrilling finals series, and in the process becoming the first Canadian winner of a major US sports championship in over 25 years. Add to that one of the more likable (if under pressure) world leaders, and continued dominance of lists of best places to live, and those living North of the border seem to have it sewn up, eh?

They also seem to be leading the way when it comes to research integrity and publishing ethics, as a number of high-profile studies and articles have shown. A piece in Canadian news magazine The Walrus has highlighted the problems of predatory publishing in Canada, but also the fight to overcome these concerns. The article is entitled ‘The Rise of Junk Science’ and highlights the ways predatory publishing has infected scholarly activities in Canada, including:

  • Seeing predatory publishers buy out legitimate publishers and use their name to support predatory conferences
  • Wasting resources from funding organizations and universities on APCs for predatory journals
  • Forcing reputable universities to check every individual CV of every academic to ensure they haven’t published in predatory journals
  • Running the risk that ‘junk science’ published in seemingly authentic journals will be read and used, causing unknown harm
  • Allowing unscrupulous academics to take advantage of university policy to publish in journals with no peer review or quality control in order to be rewarded.

This last point was also highlighted by Derek Pyne in 2017, who pointed out that some of his colleagues at Thompson Rivers University had published in predatory journals and received rewards as a result. Pyne was suspended for a brief spell following publication of the article, but it acted as a wake-up call to many institutions to review their policies when it came to publications.

Canada also hosted a conference on predatory journals this year – as opposed to the numerous predatory conferences that have sprung out of predatory publishing practices. These are also highlighted by the article in The Walrus, which gives a great overview of the predatory publishing problem in Canada. Cabells’ own evidence points to a specific issue there: the Blacklist contains 119 confirmed and over 500 suspected predatory journals originating from Canada, nearly 5% of the total. However, by shining a light on the problem and tackling it head-on, the country can at least lead the way for many others to follow.

Faking the truth

Predatory publishing can cause harm in all sorts of ways, but so can fighting it with the wrong ammunition. In this blog post, Simon Linacre looks at examples of how organizations have gone the wrong way about doing the right thing.


One of the perks – and also the pains – of working in marketing is that you have to spend time trawling through social media posts. It is painful because no matter how good your filters are, there is a huge amount of unnecessary, unearthly and unhealthy content being shared in absolute torrents. On the flip side, however, there are a few gems worth investigating further. Many of them prove to be rabbit holes, but nevertheless, the chase can be worthwhile.

Searching through some posts earlier this month I happened upon mention of an updated list of recommended and predatory journals. Obviously, this is our gig at Cabells so I was genuinely intrigued to find out more. It turns out that the Directorate General of Scientific Research and Technological Development (RSDT) in Algeria has produced three lists on its website – two of recommended journals for Algerian researchers in two subject categories, and one of predatory journals and publishers.

Judgment

A cursory look at the predatory list shows that the first 100 or so journals match Beall’s archived list almost exactly. Furthermore, there is nothing on the website that explains how or why the list exists, other than an open warning to authors who publish in one of the journals listed:

“To this effect, any publication in a journal in category A or B which is predatory or published by a predatory publisher, or that exclusively publishes conference proceedings, is not accepted for defense of a doctoral thesis or university tenure.” (own translation)

In other words, your academic career could be in trouble if you publish in a journal in the RSDT list.

Consequences

The rights and wrongs, accuracies and inaccuracies of Beall’s list have been debated elsewhere, but it is fair to say that as Beall was trying to eradicate predatory publishing practices by highlighting them, some journals were missed while some publishers and their titles were perhaps unfairly identified as predatory. Now the list is over two years out of date, with one version being updated by no-one-knows-who. However, what are the consequences for Algerian academics – and authors from anywhere else who are judged by the same policy – of publishing in a journal?

  1. Publish in RSDT-listed journal that is not predatory but on the list: Career trouble
  2. Publish in RSDT-listed journal that is predatory and on the list: Career trouble
  3. Publish in journal not listed by RSDT but is predatory: Career trouble
  4. Publish in journal not listed by RSDT and is not predatory: Career OK

Option 4 is obviously the best option, and Option 2 is a sad result for authors not doing their homework and de-risking their publishing strategy. But it seems there will be a large number of academics who make a valid choice (1) based on independent criteria who will fall foul of an erroneous list, or who think they are safe because a journal is not on the RSDT list but is predatory (3).

Comparison

One of my colleagues at Cabells cross-referenced the RSDT list against the Cabells Blacklist, which now contains 11,595 journals reviewed and validated as predatory. The results show that, because of the lack of crossover between the lists, many academics in Algeria, and potentially elsewhere, could be wrongly condemned or could unwittingly publish in predatory journals:

  • In terms of publishers, the RSDT list contains 1601 unique publishers, while the Blacklist contains 443
  • There are exactly 200 publishers on both lists, meaning that around 12% of the publishers on the RSDT list are also included in the Blacklist, while around 45% of the Blacklist publishers are also on the RSDT list
  • The RSDT list contains 2488 unique journals, of which only 81 are the same as the 11,500+ Blacklist journals
  • Less than 1% (0.7%) of the Blacklist is also on the RSDT list; conversely, about 3% of the RSDT list is also included on the Blacklist.
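For readers who want to sanity-check those percentages, here is a quick back-of-the-envelope sketch. It uses only the headline counts quoted in this post (the underlying lists themselves are not reproduced here):

```python
# Headline counts quoted above: RSDT list vs. Cabells Blacklist
rsdt_publishers, blacklist_publishers, shared_publishers = 1601, 443, 200
rsdt_journals, blacklist_journals, shared_journals = 2488, 11595, 81

def pct(part: int, whole: int) -> float:
    """Overlap expressed as a percentage of one list's size."""
    return round(100 * part / whole, 1)

# Share of RSDT publishers that also appear on the Blacklist
print(pct(shared_publishers, rsdt_publishers))   # ~12.5

# Share of Blacklist journals that also appear on the RSDT list
print(pct(shared_journals, blacklist_journals))  # ~0.7

# Share of RSDT journals that also appear on the Blacklist
print(pct(shared_journals, rsdt_journals))       # ~3.3
```

Running this reproduces the roughly 12%, 0.7% and 3% figures quoted above, underlining just how little the two lists have in common.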

As always, the moral of the story for authors is ‘research your research’ – fully utilize the skills you have gained as an academic by applying them to researching your submission decision and checking multiple sources of information. Failure to do so could mean serious problems, wherever you are.

Feedback loop

Last week the Scholarly Kitchen blog reviewed the Cabells Blacklist for its readers and inspired the second highest number of comments for any post so far in 2019. As a follow-up, Simon Linacre answers some of the questions the post and comments have raised while providing an update on the product itself.    



The publication of Rick Anderson’s review of the Blacklist last week gives Cabells some great feedback for us to improve the product, both from an industry expert and the blog’s readers from the scholarly publishing community. We have answered some of the specific queries already in the Comments section, but thought it would be helpful to address some wider points for the benefit of those who have been so engaged with the post.

Firstly, Rick pointed out that for those journals under review, there was no indication as to why that was the case. Journals are recommended for review through the Cabells website, from members of the academic community, through word of mouth and from our own research, so often the reason they are being reviewed is not recorded. Some journals check out just fine, so we have to be careful not to stigmatize a journal unfairly by repeating claims that may be unfounded, which could also have legal implications.

Secondly, Rick felt that some of the criteria for inclusion were a little ambiguous and unclear, and this is something we have very much taken on board. We have recently revamped the criteria and added some new items, but due to the nature of predatory publishing this review process is ongoing and we will look to clarify any ambiguities we can. In addition, there was clear concern that the appeals process for the Blacklist was not visible enough, and we have addressed this: a page for the Blacklist appeals process has been added to our blog, The Source, and we will also add a link to it on the Cabells Blacklist product page under the Blacklist criteria link.

Rick’s final point was with regard to the functionality of Advanced Search on the Blacklist, with a recommendation that it be expanded to offer searches by violation type, for example. This development is currently on our roadmap, as we constantly seek to improve the product’s utility for users. Other product development ideas mentioned in the Comments section – such as integrating the Blacklist as a tool for customers to run checks on journals and checking citation activity – are also on our to-do list, and we hope to be able to share some product development news shortly.

Moving on to the Comments, it is clear some in the community feel the Blacklist should be free or at least available at a lower subscription price. As has been noted by our colleague in the Comments, the price one contributor quoted for a subscription was far more than a typical subscription, which tends to equate to a handful of APCs. One of the Scholarly Kitchen chefs commented that many institutions and funders unfortunately waste many thousands of dollars for academics to publish their papers in predatory journals, which use of the Blacklist would help mitigate.

Finally, there were two very interesting comment threads around author services and offering a ‘gray list’ to customers in the future. Cabells has a strategic partnership with Editage, and in collaboration with them offers users an opportunity to improve their articles before the vital submission stage. As for offering a gray list, while there is a de facto list of such journals – i.e. journals on neither the Whitelist nor the Blacklist – such a list could easily include 50,000 journals or more, and as noted above could unfairly taint essentially decent journals. Cabells is very much a global operation and understands that new, regional, niche, innovative or low-cited journals can be legitimate and offer a vital publication outlet for many researchers. If we were to create another list, it would be to champion these titles rather than those that offer little or no value to their contributors.
 
PS – If you would like a quote for your institution to subscribe to the Blacklist or any other Cabells products, please email us at sales@cabells.com and we will get straight back to you.