Earlier this month, Nature published a well-received comment piece by an international group of authors entitled ‘Check for publication integrity before misconduct’ (Grey et al., 2020). Citing a number of recent examples of misconduct, the authors set out to create a tool that enables researchers to spot potential problems with articles before they become too invested in the research. The tool they came up with is a checklist called REAPPRAISED, in which each letter identifies an area – such as plagiarism, or statistics and data – that researchers should check as part of their workflow.
As a general rule for researchers, and as a handy mnemonic, the tool seems to work well, and authors using it as part of their research should avoid the potential pitfalls of relying on poorly researched and published work. Perhaps we at Cabells would argue that an extra ‘P’ should be added for ‘Predatory’, covering the checks researchers should make to ensure the journals they are using, and intend to publish in, are legitimate. To do this comprehensively, we would recommend using the criteria for the Cabells Journal Blacklist as a guide and, of course, using the database itself where possible.
The guidelines also raise a fundamental question for researchers and publishers alike as to where research ends and publishing starts. For many involved in academia and scholarly communications, the two worlds are inextricably linked and overlap, but are nevertheless different. Faculty members of universities do their research thing and write articles to submit to journals; publishers manage the submission process and publish the best articles for other academics to read and in turn use in their future research.
Journal editors sit at the nexus of these two areas, as they tend to be academics themselves while working for the publisher, and as such have a foot in both camps. But while they are knowledgeable about the research that has been done, and may be active researchers themselves, the editor’s role is performed on behalf of the publisher, and it is the editor who ultimately decides which articles are good enough to be recorded in the publication – the proverbial gatekeeper.
What the REAPPRAISED tool suggests, however, is that for authors the notional research/publishing divide is not a two-stage process but a continuum. Only if authors embark on research intent on fully apprising themselves of all aspects of publication integrity can they guarantee the integrity of their own research, and that includes how and where the research is published. By treating all publishing processes as part of their own research workflow, and by using tools such as REAPPRAISED and the Cabells Journal Blacklist along the way, authors can better ensure the quality of their research AND their publications, and take firmer control of their academic careers.
Updated CCI and DA metrics hit the Journal Whitelist
Hot off the press, newly updated Cabells Classification Index© (CCI©) and Difficulty of Acceptance© (DA©) scores for all Journal Whitelist publication summaries are now available. These insightful metrics are part of our powerful mix of intelligent data leading to informed and confident journal evaluations.
Research has become increasingly cross-disciplinary and, accordingly, an individual journal might publish articles relevant to several fields. This means that researchers in different fields often use and value the same journal differently. Our CCI© calculation is a normalized citation metric that measures how a journal ranks compared to others in each discipline and topic in which it publishes and answers the question, “How and to whom is this journal important?” For example, a top journal in computer science might sometimes publish articles about educational technology, but researchers in educational technology might not really “care” about this journal the same way that computer scientists do. Conversely, top educational technology journals likely publish some articles about computer science, but these journals are not necessarily as highly regarded by the computer science community. In short, we think that journal evaluations must be more than just a number.
The CCI© gauges how well a paper might perform in specific disciplines and topics and compares the influence of journals publishing content from different disciplines. Further, within each discipline, the CCI© classifies a journal’s influence for each topic that it covers. This gives users a way to evaluate not just how influential a journal is, but also the degree to which a journal influences different disciplines.
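To make the idea concrete, here is a minimal sketch of what a per-discipline, normalized journal score could look like. The journal names, citation figures, and the simple percentile-rank approach below are all hypothetical illustrations; Cabells’ actual CCI© methodology is proprietary and is not reproduced here.

```python
# Illustrative sketch only: the real CCI(c) calculation is proprietary.
# This toy version ranks a journal's citation rate separately within each
# discipline it publishes in, so the same journal can score differently
# per field (hypothetical data and field names).

from collections import defaultdict

# journal -> {discipline: citations per article in that discipline}
citation_rates = {
    "Journal A": {"computer science": 9.1, "educational technology": 2.0},
    "Journal B": {"computer science": 4.3},
    "Journal C": {"educational technology": 6.5},
    "Journal D": {"computer science": 1.2, "educational technology": 3.8},
}

def normalized_scores(rates):
    """Percentile-style rank of each journal within each discipline (0-1)."""
    by_discipline = defaultdict(list)
    for journal, fields in rates.items():
        for discipline, rate in fields.items():
            by_discipline[discipline].append((journal, rate))

    scores = defaultdict(dict)
    for discipline, entries in by_discipline.items():
        entries.sort(key=lambda pair: pair[1])  # lowest citation rate first
        n = len(entries)
        for rank, (journal, _) in enumerate(entries, start=1):
            scores[journal][discipline] = rank / n  # 1.0 = top of the field
    return scores

for journal, fields in normalized_scores(citation_rates).items():
    print(journal, fields)
```

In this toy example, “Journal A” comes out on top in computer science but near the bottom in educational technology, which is exactly the kind of field-by-field asymmetry the CCI© is designed to surface.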
For research to have real impact it must first be seen, which makes maximizing visibility a priority for many scholars. Our Difficulty of Acceptance© (DA©) metric gives researchers a way to gauge a journal’s exclusivity and to balance the need for visibility against the very real challenge of getting accepted for publication.
The DA© rating quantifies a journal’s history of publishing articles from top-performing research institutions. These institutions tend to dedicate more faculty, time, and resources to publishing often and in “popular” journals. A journal that accepts more articles from these institutions will tend to expect the kind of quality or novelty that such resources make possible. Researchers can therefore use the DA© to find journals with the best blend of potential visibility and manageable exclusivity.
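Again purely as an illustration, the sketch below scores a journal by the share of its articles that carry at least one author from a “top-performing” institution. The institution list, the article data, and the simple fraction used here are hypothetical assumptions, not the actual DA© formula.

```python
# Illustrative sketch only: the real DA(c) rating's formula is not public.
# This toy proxy scores a journal by the fraction of its articles with at
# least one author from a "top-performing" institution (hypothetical data).

TOP_INSTITUTIONS = {"Univ W", "Univ X"}  # assumed list of top performers

# journal -> list of author-affiliation sets, one set per published article
articles = {
    "Journal A": [{"Univ W"}, {"Univ X", "Univ Y"}, {"Univ W", "Univ Z"}],
    "Journal B": [{"Univ Y"}, {"Univ Z"}, {"Univ X"}],
}

def difficulty_of_acceptance(article_affiliations):
    """Fraction of articles carrying at least one top-institution author."""
    hits = sum(1 for affs in article_affiliations if affs & TOP_INSTITUTIONS)
    return hits / len(article_affiliations)

for journal, affs in articles.items():
    print(journal, f"{difficulty_of_acceptance(affs):.2f}")
```

Under these made-up numbers, “Journal A” scores 1.00 and “Journal B” scores 0.33, i.e. the former would look considerably more “exclusive” on this toy scale.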
For more information on our metrics, methods, and products, please visit www.cabells.com.
When does research end and publishing begin?
In his latest post, Simon Linacre argues that in order for authors to make optimal decisions – and not to get drawn into predatory publishing nightmares – research and publishing efforts should overlap substantially.
In a recent online discussion on predatory publishing, there was some debate as to the motivations of authors who choose predatory journals. A recent study in the ALPSP journal Learned Publishing found that academics publishing in such journals usually fell into one of two camps: either they were “uninformed” that the journal they had chosen to publish in was predatory in nature, or they were “unethical” in knowingly choosing such a journal in order to satisfy publication goals.
However, a third category of researcher was suggested: the ‘unfussy’ author, who neither knows nor cares what sort of journal they are publishing in. Certainly, there may be some overlap with the other two categories, but what they all have in common is bad decision-making. Whether one does not know, does not care, or does not mind which journal one publishes in, it seems to me that one should know, care, and mind on all three counts.
It was at this point that one of the group posed one of the best questions I have seen in many years in scholarly communications: when it comes to article publication, where does the science end in scientific research? Due in part to the terminology, as well as the differing processes, research and publication are regarded as somehow distinct or separate – part of the same ecosystem, for sure, but requiring different skills, knowledge and approaches. The question is a good one because it challenges this duality. Isn’t it possible for science to encompass some of the publishing process itself? And shouldn’t the publishing process become more involved in the process of research?
The latter is already happening to a degree, with major publishers moving up the supply chain to become more involved in research services provision (e.g. Wiley’s acquisition of article platform services provider Atypon). On the other side, there is surely an argument that at the end of experiments or data collection – after analyzing the data logically and writing up conclusions – there is a place for the scientific process to be followed in choosing a legitimate outlet with appropriate peer review. Surely any university or funder would expect such a scientific approach at every level from their employees or beneficiaries. Failing to do this allows in not only sub-optimal choices of journal but, worse, predatory outlets, which will ultimately delegitimize scientific research as a whole.
I get that it may not be such a huge scandal if some ho-hum research is published in a ‘crappy’ journal so that an academic can tick some boxes at their university. However, while the outcome may not be particularly harmful, tacitly allowing such lazy academic behavior surely has no place in modern research. Structures that force gaming of the system should, of course, be revised, but one can’t help thinking that if academics carried the same rigor and logic into their publishing decisions as they do in their research, scholarly communications would be in much better shape for all concerned.
Bridging the Validation Gap
The pressure on academics is not just to publish, but to publish high-quality research and to do so in the right journals. To help researchers with what can be a monumental struggle, Cabells is launching an enhanced service offering with leading editing services provider Editage, giving scholars the chance to up their game.
What is the greatest obstacle for authors in their desire to publish their research? This is a common question with a multitude of answers, many of them depending on the personal circumstances of the individual, but there are some things everyone must overcome in order to fulfill their potential in academia. Quality research is the starting point, ensuring that the work makes an original contribution to the current body of knowledge. But what about finding the right journal, and ensuring the article itself is word-perfect?
These constitute what I would call the ‘validation gap’ that exists for all authors. In the publication process for each article, there are points where authors should check that the journal they intend to submit their work to is legitimate, and whether it has the quality characteristics needed to publish their work. The Cabells Journal Blacklist and Journal Whitelist were designed to help authors answer these questions, and today Cabells is stepping up its partnership with Editage to relaunch its Author Services support page.
New beginning
Universities give researchers far too little support on publishing, which is why I and others involved in scholarly communication have always been happy to share some of our knowledge with them on campus or through webinars. Universities and governments set benchmarks for researchers to publish in certain journals without equipping them with the skills and knowledge to do so. This is incredibly unfair on researchers, and understandably some struggle. They need much greater support in writing their articles, especially if English is not their first language, and in understanding how the publication process works.
Universities can offer great support to researchers, from Ph.D. supervision and research ethics through to teaching and public engagement. However, when it comes to the publication of articles, there is a chasm that must be crossed to develop an academic career, and help is too often found wanting. This is a crucial part of the journey for early career scholars, and even for more experienced ones, and together with Editage, Cabells is aiming to bridge that gap.
Give it a try
So, if you or any of your colleagues are about to take the trip across this yawning divide, why not give our new service a go? Just go to https://cabells.editage.com/ and let Editage do the rest. And once you are happy with your article, check that the journals on your shortlist are legitimate by using the Blacklist, and that they meet the necessary quality benchmarks by using the Whitelist. Then, once the validation gap has been successfully negotiated, you can click ‘send’ with peace of mind.
NB: For help on using the Whitelist and Blacklist in your journal search, you can use Cabells’ BrightTALK channel, which aims to answer many of the individual user queries we receive in one place. Good luck!