New Kid on the Block

The publishing industry is often derided for its lack of innovation. However, as Simon Linacre argues, innovation is often going on right under our noses, where the radical nature of the changes is yet to be fully understood, for good or bad.



There is a clock ticking down on the top right of my screen. I have 15 minutes and 28 seconds to upgrade to premium membership for half price. But what do I need to do? What is the clock for? What happens if I don’t click the button to stop the clock in time…?

This isn’t an excerpt from a new pulp fiction thriller, but one element of a new journal that many academics will have received notification of recently. Academia Letters is a new journal from Academia.edu, a social networking platform for researchers worldwide to share and discover research. Since it started life in 2008, the site has become popular with academics, with millions signed up to take advantage of the platform to promote their research and find others to collaborate with. It has also been controversial, accused of hosting thousands of article PDFs in breach of copyright terms. Up until now, the site has focused on enabling researchers to share their work, but it has now joined the publishing game with its own journal, publishing short articles of between 800 and 1,600 words.

The new offering provides several different takes on the traditional journal:

  • All articles are Open Access, with a lower-than-average fee (£300 in the UK)
  • Peer review times are promised to be “lightning-fast”
  • Articles are accepted or rejected in the first round, with only minor revisions required if accepted

Now, some people reading this will ask themselves: “Doesn’t that sound like a predatory journal?” However, Academia Letters is categorically NOT predatory in nature: far from attempting to deceive authors into believing there is in-depth peer review, it is upfront that the light-touch process and access to millions of users should make the publishing process both fast and cheap compared to other OA options. That said, the quality of articles cannot be expected to match those in a traditional journal, given the brevity of the format and the lack of intervention from peer reviewers in the new model.

It will be interesting to see how many authors take advantage of the new approach chosen by the journal. If it takes off, it could inspire other new forms from traditional publishers and other networking sites, and be held up as a clear example of innovation in scholarly communications. However, the journal’s marketing may work against it, as authors have become increasingly wary of promises of fast turnaround times and low APCs from predatory publishers. For example, what happened when the ticking clock ran down, signifying the end of the half-price deal to become a premium member of Academia.edu? It simply reset to 48 hours for the same deal. Such marketing tactics may be effective in signing some authors up, but others may well be put off, however innovative the new proposition might be.

Look before you leap!

A recent paper published in Nature provides a tool researchers can use to check the publication integrity of a given article. Simon Linacre looks at this welcome support for researchers, and at how it raises questions about the research/publication divide.

Earlier this month, Nature published a well-received comment piece by an international group of authors entitled ‘Check for publication integrity before misconduct’ (Grey et al., 2020). The authors wanted to create a tool to enable researchers to spot potential problems with articles before they get too invested in the research, citing a number of recent examples of misconduct. The tool they came up with is a checklist called REAPPRAISED, which uses each letter to identify an area – such as plagiarism or statistics and data – that researchers should check as part of their workflow.
 
As a general rule for researchers, and as a handy mnemonic, the tool seems to work well, and authors using it as part of their research should avoid the potential pitfalls of relying on poorly researched and poorly published work. We at Cabells would perhaps argue that an extra ‘P’ should be added for ‘Predatory’, covering the checks researchers should make to ensure the journals they are using, and intend to publish in, are legitimate. To do this comprehensively, we would recommend using our criteria for the Cabells Journal Blacklist as a guide and, of course, using the database itself where possible.
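To make this concrete, here is a toy sketch of how an author might track such a checklist in their workflow. It lists only the areas named in this post, plus our suggested extra ‘P’; it is illustrative only, not the full REAPPRAISED checklist (see Grey et al., 2020, for that):

```python
# Toy sketch of a pre-submission integrity checklist. Only the areas
# mentioned in this post are listed; the extra 'P' for 'Predatory'
# is our addition, not part of the published REAPPRAISED checklist.

checklist = {
    "Plagiarism": False,
    "Statistics and data": False,
    "Predatory (journal legitimacy, e.g. via the Cabells Blacklist)": False,
}

def outstanding_checks(checklist: dict) -> list:
    """Return the integrity checks that have not yet been completed."""
    return [item for item, done in checklist.items() if not done]

checklist["Plagiarism"] = True  # mark a check as done
for item in outstanding_checks(checklist):
    print(f"Still to check: {item}")
# Still to check: Statistics and data
# Still to check: Predatory (journal legitimacy, e.g. via the Cabells Blacklist)
```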
 
The guidelines also raise a fundamental question for researchers and publishers alike as to where research ends and publishing starts. For many involved in academia and scholarly communications, the two worlds are inextricably linked and overlap, but are nevertheless different. Faculty members of universities do their research thing and write articles to submit to journals; publishers manage the submission process and publish the best articles for other academics to read and in turn use in their future research. 
 
Journal editors sit at the nexus of these two areas, as they tend to be academics themselves while working for the publisher, and as such have a foot in both camps. But while they are knowledgeable about the research that has been done, and may be active researchers themselves, as editors they act on behalf of the publisher, and ultimately decide which articles are good enough to be recorded in their publication: the proverbial gatekeeper.
 
What the REAPPRAISED tool suggests, however, is that for authors the notional research/publishing divide is not a two-stage process, but rather a continuum. Only if authors embark on research intent on fully apprising themselves of all aspects of publication integrity can they guarantee the integrity of their own research, which in turn includes how and where that research is published. Rather than treating it as a two-step process, authors can better ensure the quality of their research AND publications by including all publishing processes as part of their own research workflow. By doing this, and using tools such as REAPPRAISED and the Cabells Journal Blacklist along the way, authors can take better control of their academic careers.


Updated CCI and DA metrics hit the Journal Whitelist

Hot off the press, newly updated Cabell’s Classification Index© (CCI©) and Difficulty of Acceptance© (DA©) scores for all Journal Whitelist publication summaries are now available. These insightful metrics are part of our powerful mix of intelligent data leading to informed and confident journal evaluations.

Research has become increasingly cross-disciplinary and, accordingly, an individual journal might publish articles relevant to several fields.  This means that researchers in different fields often use and value the same journal differently. Our CCI© calculation is a normalized citation metric that measures how a journal ranks compared to others in each discipline and topic in which it publishes and answers the question, “How and to whom is this journal important?” For example, a top journal in computer science might sometimes publish articles about educational technology, but researchers in educational technology might not really “care” about this journal the same way that computer scientists do. Conversely, top educational technology journals likely publish some articles about computer science, but these journals are not necessarily as highly regarded by the computer science community. In short, we think that journal evaluations must be more than just a number.

[Screenshot: 2019 CCI© updates]

The CCI© gauges how well a paper might perform in specific disciplines and topics and compares the influence of journals publishing content from different disciplines. Further, within each discipline, the CCI© classifies a journal’s influence for each topic that it covers. This gives users a way to evaluate not just how influential a journal is, but also the degree to which a journal influences different disciplines.
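Purely by way of illustration (Cabells’ actual CCI© methodology is proprietary and not reproduced here), a discipline-normalized ranking of the kind described above might be sketched as follows, with invented journals and citation figures:

```python
# Illustrative sketch of a discipline-normalized journal ranking.
# Journals, disciplines and citation rates are invented; Cabells'
# actual CCI methodology is proprietary and not reproduced here.

from collections import defaultdict

# hypothetical citations-per-article for each (journal, discipline) pair
scores = {
    ("Journal of Computing", "computer science"): 9.1,
    ("Journal of Computing", "educational technology"): 2.3,
    ("EdTech Review", "educational technology"): 7.8,
    ("EdTech Review", "computer science"): 1.9,
    ("Learning Systems", "educational technology"): 5.4,
}

# group scores by discipline so each journal is ranked against its peers
by_discipline = defaultdict(list)
for (journal, discipline), score in scores.items():
    by_discipline[discipline].append(score)

def normalized_rank(journal: str, discipline: str) -> float:
    """Percentile rank (0-1) of a journal within one discipline."""
    score = scores[(journal, discipline)]
    peers = by_discipline[discipline]
    return sum(s <= score for s in peers) / len(peers)

# the same journal can rank very differently in each field it touches
print(normalized_rank("Journal of Computing", "computer science"))       # 1.00
print(normalized_rank("Journal of Computing", "educational technology")) # ~0.33
```

The point of normalizing within each discipline, rather than producing one global number, is exactly the one made above: the same journal can sit at the top of one field’s ranking and near the bottom of another’s.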

For research to have real impact it must first be seen, which makes maximizing visibility a priority for many scholars. Our Difficulty of Acceptance© (DA©) metric gives researchers a better way to gauge a journal’s exclusivity, balancing the need for visibility against the very real challenge of getting accepted for publication.

[Screenshot: 2019 DA© updates]

The DA© rating quantifies a journal’s history of publishing articles from top-performing research institutions. These institutions tend to dedicate more faculty, time, and resources to publishing often and in “popular” journals. A journal that accepts more articles from these institutions will tend to expect the kind of quality or novelty that such resources facilitate. Researchers can therefore use the DA© to find the journals with the best blend of potential visibility and manageable exclusivity.
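Again as an illustration only, and not Cabells’ actual method, an exclusivity signal along these lines could be computed as the share of a journal’s recent articles that come from a list of top-performing institutions; all names and numbers below are invented:

```python
# Illustrative sketch of a DA-style exclusivity signal: the share of a
# journal's recent articles authored at top-performing institutions.
# Institution list, records and formula are hypothetical, not Cabells' method.

TOP_INSTITUTIONS = {"University A", "University B", "University C"}

articles = [  # (journal, first-author institution): invented records
    ("EdTech Review", "University A"),
    ("EdTech Review", "University D"),
    ("EdTech Review", "University B"),
    ("Learning Systems", "University D"),
    ("Learning Systems", "University E"),
]

def acceptance_difficulty(journal: str) -> float:
    """Fraction (0-1) of a journal's articles from top institutions."""
    pool = [inst for j, inst in articles if j == journal]
    return sum(inst in TOP_INSTITUTIONS for inst in pool) / len(pool)

print(acceptance_difficulty("EdTech Review"))     # ~0.67: more exclusive
print(acceptance_difficulty("Learning Systems"))  # 0.0: more accessible
```

In practice such a signal would need field normalization and a far richer institution ranking, but the intuition is the same: the more of a journal’s content comes from high-output institutions, the tougher its acceptance bar is likely to be.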

For more information on our metrics, methods, and products, please visit www.cabells.com.

Bridging the Validation Gap

The pressure on academics is not just to publish, but to publish high-quality research and to do so in the right journals. To help researchers with what can be a monumental struggle, Cabells is launching an enhanced service with leading editing services provider Editage, offering scholars the chance to up their game.


What is the greatest obstacle for authors in their desire to publish their research? This is a common question with a multitude of answers, many of them depending on the personal circumstances of the individual, but there are some things everyone must overcome in order to fulfill their potential in academia. Quality research is the starting point, ensuring that it makes an original contribution to the current body of knowledge. But what about finding the right journal, and ensuring the article itself is word perfect?

These constitute what I would call the ‘validation gap’ that exists for all authors. In the publication process for each article, there are points where the author should check both that the journal they intend to submit their work to is legitimate, and that it has the quality credentials to publish their work. The Cabells Blacklist and Whitelist were designed to help authors answer these questions, and today Cabells is stepping up its partnership with Editage to relaunch its Author Services support page.

New beginning
Universities give researchers far too little support on publishing, which is why I and others involved in scholarly communication have always been happy to share some of our knowledge with them on campus or through webinars. Universities or governments set benchmarks for researchers to publish in certain journals without equipping them with the skills and knowledge to do so. This is incredibly unfair on researchers, and understandably some struggle. They need much greater support in writing their articles, especially if English is not their first language, and in understanding how the publication process works.

Universities can offer great support to researchers, from Ph.D. supervision and research ethics to teaching and public engagement. However, when it comes to publishing articles, there is a chasm that must be crossed to develop an academic career, and help is too often found wanting. This is a crucial part of the journey for early career scholars, and even for more experienced ones, and along with Editage, Cabells is aiming to bridge that gap.

Give it a try
So, if you or any of your colleagues are about to take the trip across this yawning divide, why not give our new service a go? Just go to the website at https://cabells.editage.com/ and let Editage do the rest. Once you are happy with your article, check that the journals on your shortlist are legitimate by using the Blacklist, and that they meet the necessary quality benchmarks by using the Whitelist. Then, once the validation gap has been successfully negotiated, you can click ‘send’ with peace of mind.

NB: For help on using the Whitelist and Blacklist in your journal search, you can use Cabells’ BrightTALK channel, which aims to answer many of the individual user queries we receive in one place. Good luck!

Why asking the experts is always a good idea

In the so-called ‘post-truth’ age where experts are sidelined in favor of good soundbites, Simon Linacre unashamedly uses expert insight in uncovering the truth behind poor publishing decisions… with some exciting news at the end!


Everyone in academia or scholarly publishing can name at least one time they came across a terrible publishing decision. Whether it was an author choosing the wrong journal, or indeed the journal choosing the wrong author, articles have found their way into print that never should have, and parties on both sides must live with the consequences for evermore.

My story involved an early career researcher (ECR) in the Middle East whom I was introduced to whilst delivering talks on how to get published in journals. The researcher had submitted an article to the well-regarded Journal A but, tired of waiting for a decision, submitted the same article to a predatory-looking Journal B without withdrawing the first submission. Journal B accepted the paper… and then so did Journal A, after the article had already appeared in Journal B’s latest issue. Our hapless author went ahead and published the same article in Journal A – encouraged, so I was told, by his boss – and was left with the unholy mess of dual publication, at which point he asked for my guidance. A tangled web indeed.

Expert advice

Our author made his poor publishing choice out of both ignorance and necessity: the boss who told him to accept publication in the better-ranked journal was the same boss who wanted to see improved publishing outputs from the faculty. At Cabells, we are fast approaching 11,000 predatory journals on our Blacklist, and it is easy to forget that every one of those journals is filled with articles by authors who, for some reason, decided to submit their work there for publication.

The question therefore remains: why?

Literature reviewed

One researcher decided to answer this question herself by, you guessed it, looking at what other experts had said, in the form of a literature review of related articles. T.F. Frandsen’s article, entitled “Why do researchers decide to publish in questionable journals? A review of the literature”, is published by Wiley in the latest issue of Learned Publishing (currently available as a free access article here). In it, Frandsen draws out the following key points:

  • Criteria for choosing journals could be manipulated by predatory-type outlets to entrap researchers and encourage others
  • A ‘publish or perish’ culture has been blamed for the rise in ‘deceptive journals’ but may not be the only reason for their growth
  • Identifying journals as ‘predatory’ ignores the fact that authors may seek to publish in them as a simple route to career development
  • There are at least two different types of authors who publish in so-called deceptive journals: the “unethical” and the “uninformed”
  • Therefore, at least two different approaches to the problem are required

For the uninformed, Frandsen recommends that institutions ensure that faculty members are as informed as possible on the dangers of predatory journals and what the consequences of poor choices might be. For those authors making unethical choices, she suggests that the incentives in place that push these authors to questionable decisions should be removed. More broadly, as well as improved awareness, better parameters for authors around the quality of journals in which they should publish could encourage a culture of transparency around journal publication choices. And this would be one decision that everyone in academia and scholarly publishing could approve of.

PS: Enjoying our series of original posts in The Source? The great news is that there will be much more original content, news and resources available for everyone in the academic and publishing communities in the coming weeks… look out for the next edition of The Source for some exciting new developments!