Updated CCI and DA metrics hit the Journal Whitelist

Hot off the press, newly updated Cabell’s Classification Index© (CCI©) and Difficulty of Acceptance© (DA©) scores are now available for all Journal Whitelist publication summaries. These metrics are part of the powerful mix of intelligent data that supports informed, confident journal evaluations.

Research has become increasingly cross-disciplinary and, accordingly, an individual journal might publish articles relevant to several fields. This means that researchers in different fields often use and value the same journal differently. Our CCI© calculation is a normalized citation metric that measures how a journal ranks compared to others in each discipline and topic in which it publishes, answering the question, “How and to whom is this journal important?” For example, a top journal in computer science might sometimes publish articles about educational technology, but researchers in educational technology might not “care” about this journal the same way that computer scientists do. Conversely, top educational technology journals likely publish some articles about computer science, but these journals are not necessarily as highly regarded by the computer science community. In short, we think that journal evaluations must be more than just a number.

[Screenshot: CCI© 2019 updates]

The CCI© gauges how well a paper published in a given journal might perform in specific disciplines and topics, and compares the influence of journals publishing content from different disciplines. Further, within each discipline, the CCI© classifies a journal’s influence for each topic that it covers. This gives users a way to evaluate not just how influential a journal is, but also the degree to which it influences different disciplines.
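To make the idea of a discipline-normalized score concrete, here is a minimal, purely illustrative sketch in Python. It is not the CCI© formula, which is not spelled out in this post; it simply shows how the same journal can earn very different percentile ranks in different disciplines when its citation performance is compared only against peers in each field. All journal names, citation rates, and function names below are hypothetical.

```python
# Illustrative sketch only: the actual CCI© methodology is not described here.
# The toy data show how a single journal can rank near the top of one discipline
# and near the bottom of another once scores are normalized per discipline.
from collections import defaultdict

# Hypothetical journal -> {discipline: citations per article in that discipline}
citation_rates = {
    "Journal of Computing":   {"computer science": 9.1, "educational technology": 2.3},
    "Learning & Tech Review": {"computer science": 1.8, "educational technology": 6.4},
    "EdTech Quarterly":       {"computer science": 1.2, "educational technology": 5.0},
}

def discipline_percentiles(rates):
    """Rank each journal against its peers separately within every discipline."""
    by_discipline = defaultdict(dict)
    for journal, per_field in rates.items():
        for field, rate in per_field.items():
            by_discipline[field][journal] = rate

    scores = defaultdict(dict)
    for field, journals in by_discipline.items():
        values = list(journals.values())
        n = len(values)
        for journal, rate in journals.items():
            # Percentile of this journal's citation rate among journals in the field
            below = sum(1 for v in values if v < rate)
            scores[journal][field] = round(100 * below / max(n - 1, 1))
    return dict(scores)

for journal, per_field in discipline_percentiles(citation_rates).items():
    print(journal, per_field)
# "Journal of Computing" scores 100 in computer science but 0 in educational
# technology, mirroring the cross-disciplinary point made above.
```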

For research to have real impact it must first be seen, so maximizing visibility is a priority for many scholars. Our Difficulty of Acceptance© (DA©) metric gives researchers a way to gauge a journal’s exclusivity, balancing the need for visibility against the very real challenge of getting accepted for publication.

[Screenshot: DA© 2019 updates]

The DA© rating quantifies a journal’s history of publishing articles from top-performing research institutions. These institutions tend to dedicate more faculty, time, and resources to publishing often and in “popular” journals. A journal that accepts more articles from these institutions will tend to expect the kind of quality or novelty that those resources make easier to achieve. So, researchers use the DA© to find the journals with the best blend of potential visibility and manageable exclusivity.
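As a rough illustration of what such a rating captures, the sketch below computes the share of a journal’s recent articles that include at least one author from a “top-performing” institution. This is not Cabells’ actual DA© calculation; the function name, the notion of a fixed top-institution list, and the data are all hypothetical and serve only to make the concept concrete.

```python
# Illustrative sketch only: the real DA© methodology is not reproduced here.
# Idea: the larger the share of recent articles coming from top-performing
# institutions, the more exclusive (harder to get into) the journal tends to be.

def difficulty_of_acceptance(article_affiliations, top_institutions):
    """Return the share (0-100) of articles with at least one top-institution author."""
    if not article_affiliations:
        return 0.0
    from_top = sum(
        1 for affiliations in article_affiliations
        if any(inst in top_institutions for inst in affiliations)
    )
    return round(100 * from_top / len(article_affiliations), 1)

# Hypothetical data: each inner list holds the author affiliations of one article.
recent_articles = [
    ["Institution A", "Institution B"],
    ["Institution C"],
    ["Institution A"],
    ["Institution D", "Institution E"],
]
top_tier = {"Institution A", "Institution C"}

print(difficulty_of_acceptance(recent_articles, top_tier))  # 75.0
```

Read this way, a higher share signals a more exclusive venue, which is exactly the trade-off against visibility that the DA© is meant to help researchers weigh.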

For more information on our metrics, methods, and products, please visit www.cabells.com.

Bridging the Validation Gap

The pressure on academics is not just to publish, but to publish high-quality research and to do so in the right journals. To help researchers with what can be a monumental struggle, Cabells is launching an enhanced service with leading editing services provider Editage, giving scholars the chance to up their game.


What is the greatest obstacle for authors in their desire to publish their research? This is a common question with a multitude of answers, many of them depending on the personal circumstances of the individual, but there are some things that everyone must overcome in order to fulfill their potential in academia. Quality research is the starting point, ensuring that the work makes an original contribution to the current body of knowledge. But what about finding the right journal, and ensuring the article itself is word-perfect?

These questions constitute what I would call the ‘validation gap’ that exists for all authors. In the publication process for each article, there are points where the author should check both that the intended journal is legitimate and that it has the quality credentials needed to publish their work. The Cabells Blacklist and Whitelist were designed to help authors answer these questions, and today Cabells is stepping up its partnership with Editage to relaunch its Author Services support page.

New beginning
Far too little support on publishing is given to researchers in universities, which is why I and others involved in scholarly communication have always been content to share some of our knowledge with them on campus or through webinars. Universities or governments set benchmarks for researchers to publish in certain journals without equipping them with the skills and knowledge to do so. This is incredibly unfair on researchers, and understandably some struggle. They need much greater support in writing their articles, especially if English is not their first language, and in understanding how the publication process works.

Universities can offer great support to researchers, from Ph.D. supervision and research ethics through to teaching and public engagement. However, when it comes to publishing articles there is a chasm that must be crossed to develop an academic career, and help is too often found wanting. This is a crucial part of the journey for early career scholars, and even for more experienced ones, and along with Editage, Cabells is aiming to bridge that gap.

Give it a try
So, if you or any of your colleagues are about to take the trip over this yawning divide, why not give our new service a go? Just go to https://cabells.editage.com/ and let Editage do the rest. Once you are happy with your article, check that the journals on your shortlist are legitimate by using the Blacklist, and that they meet the necessary quality benchmarks by using the Whitelist. Then, once the validation gap has been successfully negotiated, you can click ‘send’ with peace of mind.

NB: For help on using the Whitelist and Blacklist in your journal search, you can use Cabells’ BrightTALK channel, which aims to answer many of the individual user queries we receive in one place. Good luck!

Why asking the experts is always a good idea

In the so-called ‘post-truth’ age where experts are sidelined in favor of good soundbites, Simon Linacre unashamedly uses expert insight in uncovering the truth behind poor publishing decisions… with some exciting news at the end!


Everyone in academia or scholarly publishing can name at least one time they came across a terrible publishing decision. Whether it was an author choosing the wrong journal, or indeed the journal choosing the wrong author, articles have found their way into print that never should have, and parties on both sides must live with the consequences for evermore.

My story involved an early career researcher (ECR) in the Middle East whom I was introduced to whilst delivering talks on how to get published in journals. The researcher had submitted an article to well-regarded Journal A but, tired of waiting on a decision, submitted the same article to a predatory-looking Journal B without retracting the prior submission. Journal B accepted the paper… and then so did Journal A, after the article had already appeared in Journal B’s latest issue. Our hapless author went ahead and published the same article in Journal A, encouraged, so I was told, by his boss, and was then left with the unholy mess of dual publication, at which point he asked for my guidance. A tangled web indeed.

Expert advice

Our author made a poor publishing choice out of both ignorance and necessity: the same boss who told him to accept publication in the better-ranked journal was the one who wanted to see improved publishing outputs from the faculty. At Cabells, we are fast approaching 11,000 predatory journals on our Blacklist, and it is easy to forget that every one of those journals is filled with articles from authors who, for some reason, made the decision to submit their work there for publication.

The question therefore remains: But why?

Literature reviewed

One researcher decided to answer this question herself by, you guessed it, looking at what other experts had said, in the form of a literature review of related articles. TF Frandsen’s article, entitled “Why do researchers decide to publish in questionable journals? A review of the literature”, is published by Wiley in the latest issue of Learned Publishing (currently available as a free-access article). In it, Frandsen draws out the following key points:

  • Criteria for choosing journals could be manipulated by predatory-type outlets to entrap researchers and encourage others
  • A ‘publish or perish’ culture has been blamed for the rise in ‘deceptive journals’ but may not be the only reason for their growth
  • Identifying journals as ‘predatory’ ignores the fact that authors may seek to publish in them as a simple route to career development
  • There are at least two different types of authors who publish in so-called deceptive journals: the “unethical” and the “uninformed”
  • At least two different approaches to the problem are therefore required

For the uninformed, Frandsen recommends that institutions ensure faculty members are as informed as possible about the dangers of predatory journals and the consequences of poor choices. For authors making unethical choices, she suggests removing the incentives that push them towards questionable decisions. More broadly, alongside improved awareness, clearer parameters around the quality of journals in which authors should publish could encourage a culture of transparency around journal publication choices. And this would be one decision that everyone in academia and scholarly publishing could approve of.

PS: Enjoying our series of original posts in The Source? The great news is that there will be much more original content, news and resources available for everyone in the academic and publishing communities in the coming weeks… look out for the next edition of The Source for some exciting new developments!