Not Seeing the Wood for the Trees

There is much to learn from literature regarding scholarly communications, not least how to get messages across the divide to the wider public. Simon Linacre finds much food for thought in chewing over a modern classic.


Like many people, I tend to have two or three books on the go at once. This is due in part to different moods suiting different types of books, but also to the fallacious idea that I will read them more quickly concurrently than one at a time. It also helps me tackle larger books, and there are few larger than my recent fiction read, The Overstory by Richard Powers. A true epic intertwining several stories that all become rooted in the conservation of redwoods in the Pacific Northwest of the US, it encompasses far more than environmental concerns. Liberty, corporate behavior, relationships and child welfare are all covered, as well as the often-blighted route taken by academic researchers.

The professor in question makes a quite literally groundbreaking discovery about how trees in a forest can communicate with each other through their root systems, but no sooner is she lauded than fellow academics pour scorn on her ideas, effectively shunning her from the profession. It is only when her ideas are revived and proven again that she comes into her own – not as an academic expert, but as a rabble-rouser, polemicist, and leader of like-minded people.

One of the themes hinted at in this story is the debate around impact, and how academic research – or any kind of uncovered truth – can enable real change in the face of corporate hegemony, state bureaucracy and the sheer noise created by those arguing for their individual rights to lead their lives as they see fit. The book offers a little hope among huge piles of despair, but is nevertheless an uplifting read due to the force of will exhibited by the main characters. Similarly, one can see that for those academics who raise their heads above the parapet and choose impactful research over more popular or recognised research areas, the path towards individual and societal success can appear a long one.

In one telling passage in the book, a character trying to save a tree by living high in its boughs is almost blown off by a helicopter hovering over her perch. Those behind the helicopter argue they are saving livelihoods by cutting down the trees; those behind the protest say the same, except that the livelihoods they defend stretch far into the future and will be worth more than the short-term approach adopted by the loggers. The parallels for scholarly communication are stark, as the limited resources that have backed non-actionable research may soon switch to more actionable outcomes. It just needs all stakeholders to see the wood for the trees and move towards a much more impact-focused trail.

A New Perspective

What should a good quality journal include in its make-up – rigorous research, a well-regarded editorial board, plenty of citations? But what if we challenge these assumptions and demand commitment to the UN’s Sustainable Development Goals as well? There are solutions to this challenge, and here Simon Linacre introduces the first SDG Impact Intensity™ rating from Cabells and Saint Joseph’s University.


It is said that some of the best deals are done in a secluded restaurant or in the back of a cab. For academics, perhaps the equivalent is the fringes of a conference gala dinner and in a coach back to the hotel. That’s what happened when I met Dr. David Steingard from Saint Joseph’s University (SJU) in Lisbon in late 2019, where we discussed what an appraisal of journals from the perspective of the UN’s Sustainable Development Goals (SDGs) might look like.

The fruits of that meeting, first trialed in March of this year, are released today in the shape of the SDG Impact Intensity™ journal rating. This pilot study – the first full ratings are expected in early 2022 – seeks to highlight the differences between business and management journals regarded as leaders in their disciplines and those which have focused on sustainability and related issues. The pilot consists of 100 journals rated according to their relevance – or intensity – with respect to the UN’s 17 SDGs, determined by the relative focus exhibited in their article publications over the last five years. Using a sophisticated AI methodology from SJU applied to journals drawn from Cabells’ Journalytics database, journals were rated from zero to five, with six journals achieving the top rating.
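How might such a rating work in principle? The actual methodology is SJU’s proprietary AI, but the general idea – scoring a journal’s recent output for SDG relevance and binning the share into a zero-to-five rating – can be illustrated with a deliberately simplified sketch. Everything below (the keyword lists, the scoring function, the thresholds) is an assumption for illustration, not the real method:

    # Deliberately simplified illustration of an "SDG intensity" rating;
    # the real rating uses a proprietary AI methodology from SJU.
    SDG_KEYWORDS = {  # hypothetical keyword lists for 2 of the 17 SDGs
        "SDG7": ("renewable energy", "clean energy"),
        "SDG13": ("climate change", "emissions"),
    }

    def is_sdg_relevant(abstract):
        """Flag an article whose abstract mentions any SDG keyword."""
        text = abstract.lower()
        return any(kw in text for kws in SDG_KEYWORDS.values() for kw in kws)

    def sdg_intensity(abstracts):
        """Rate a journal 0-5 from the share of SDG-relevant articles
        in its (notionally five-year) publication window."""
        if not abstracts:
            return 0
        share = sum(map(is_sdg_relevant, abstracts)) / len(abstracts)
        return min(5, round(share * 5))

    sample = [
        "We model climate change mitigation in supply chains...",
        "Renewable energy adoption among SMEs...",
        "A study of auditor independence...",
    ]
    print(sdg_intensity(sample))  # 2 of 3 articles relevant -> rating 3

In reality, keyword matching is far too crude for the job – which is precisely why a methodology trained to recognise SDG-relevant research across all 17 goals and five years of publications is needed.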

Traditionally, citations and rankings have been a proxy for quality, none more so than the list of 50 journals used by the Financial Times for its FT Research rankings. However, to what extent have these journals started to reflect research on climate change and the SDGs in recent years – a focus which should surely be a top priority for business and business schools alike?

The evidence from the SDG Impact Intensity™ journal rating is that… there has been very little focus at all. As the list of 100 journals shows, only two journals from the FT 50 appear in the top 50 of the list, showcasing the fact – as if there were any doubt – that sustainability journals, which have typically lagged behind top business journals in terms of citations and prestige, far outperform them when it comes to engagement with the SDGs and the research agenda they represent. We will watch with interest the FT’s plan for a “slow hackathon” this Autumn as part of a review of its journal list.

Cabells started to investigate this area to see if there was another way to assess the value journals represent to authors looking to publish their work. What the last two years have shown is that, more than a shift in perspective, there is a paradigm shift waiting to happen as the value of journals to authors moves from old-fashioned prestige to a more dynamic representation of mission-driven research. While Cabells and some publishers have backed this general shift by signing up to initiatives such as the UN Publishers Compact, much more can be done to progress the impact agenda in scholarly communications. Events such as the upcoming Higher Education Sustainability Initiative (HESI) webinar aim to tackle head on the problem of aligning research programs with outcomes in publications. By highlighting those journals that are already focused on this alignment – and those that could do better – Cabells and SJU hope they can play a part in genuinely moving the dial.

What lies beneath

The first set of data from Cabells’ collaboration with Inera’s Edifix shows that nearly 300 article checks included references to predatory journals. Simon Linacre looks behind the data to share more details about ‘citation contamination.’


A few months ago, Cabells announced a trial partnership with the Edifix service, an article checking tool from Wiley’s Inera division (watch the free webinar discussing the collaboration from SSP’s OnDemand Library). Subscribers to Edifix can check their article’s references against Cabells’ Predatory Reports database for free during an open beta phase, and the first results of this offer have been announced by Edifix on their latest blog. The results show that:

  • Since May 2021, a total of 295 jobs have had at least one reference flagged as appearing in a journal currently listed in Cabells’ Predatory Reports
  • Of those 295 jobs, 66 (22%) included multiple references to predatory journals
  • Over the same period, Edifix processed a total of 7,102 jobs (containing 104,140 submitted references, of which Edifix was able to fully process 89,180), so overall around 4% of all live jobs included at least one reference flagged by Cabells’ Predatory Reports database (see the sketch below).
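To make the numbers concrete, here is a minimal sketch of what such a check involves: matching each reference’s journal title against a list of predatory journal titles and flagging any job with at least one hit. The titles and matching logic below are simplified assumptions – the real Edifix service checks against Cabells’ full Predatory Reports database:

    # Minimal sketch of a reference check; the journal titles below are
    # hypothetical stand-ins, not entries from Predatory Reports.
    PREDATORY_TITLES = {
        "global journal of advanced research",
        "international journal of science and innovation",
    }

    def flag_references(references):
        """Return the references whose journal title is on the list."""
        return [r for r in references
                if r.get("journal", "").strip().lower() in PREDATORY_TITLES]

    # Each "job" is one article's submitted reference list.
    jobs = [
        [{"journal": "Nature"}, {"journal": "Global Journal of Advanced Research"}],
        [{"journal": "The Lancet"}],
    ]
    flagged = [job for job in jobs if flag_references(job)]
    print(f"{len(flagged)} of {len(jobs)} jobs flagged")  # 1 of 2 jobs flagged

    # The headline figure above falls out of the same arithmetic:
    print(f"{295 / 7102:.1%} of all jobs had a flagged reference")  # 4.2%

In practice, matching is much harder than exact title comparison, not least because predatory journals often mimic the names of legitimate ones – which is exactly why a curated database matters.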

To recap, it is in the interests of all stakeholders in scholarly communications – authors, universities, societies, funders, and society as a whole – that research is not lost to predatory publishing activities. The Edifix and Cabells collaboration is designed not only to offer access to a database such as Predatory Reports to help all these stakeholders, but to augment their capabilities to produce the best research.

In addition, the collaboration represents a step forward in preventing ‘citation contamination’, where articles published in predatory journals find their way into legitimate journals by being referenced by them directly. The new service allows users to vet references for citations to predatory journals, as identified by Predatory Reports, and reduce the contamination of the scholarly record.

It is important to underline that while checking references won’t remove predatory journal publications in the first place, it will ensure that those articles are cited less and that the research they contain is checked. Authors cite articles assuming their contents have been peer reviewed – the very thing that is most unlikely to have happened at a predatory journal. If an author understands that the work they are citing may have had no peer review at all, or only a sub-standard or superficial one, they can find other literature to support their case. The analogy of contamination is a strong one: not only does it conjure up the stench many feel predatory publishing practices represent, it also describes how the problem can ‘cross-contaminate’ other journals and research projects. By empowering authors to clean up their research, and by highlighting the problem of contamination more widely, it is hoped that this early experiment can lead to more steps forward in the fight against predatory publishing.

Peer Review Week 2021: Identity in Peer Review

Peer Review Week 2021 has been announced for September 20–24 with the theme of Identity in Peer Review. Simon Linacre, who volunteers for the event’s Steering Committee, takes a look at the importance of the event and this year’s chosen theme.


For those new to scholarly communication, the annual celebration of peer review probably seems one of the more unlikely events to occur in the crowded calendar. It makes sense for relatively novel ideas such as open access and open science to have their day – or week – in the sun in October, while other events supporting academic research and universities in general pepper the rest of the year. So why is boring old peer review so special?

Well, it may be a surprise to learn it’s not that old, and when you dig deeper you find it is anything but boring. While journals began life in the 17th Century – 1665, to be precise – it seems the first peer reviews only took place in the 18th Century, and external reviews in the Victorian period. According to academic publishing historian Alex Csiszar, peer reviews grew from these beginnings very slowly, and only took hold in mainstream science journals in the post-war period.

Furthermore, this year’s theme shows that issues and challenges facing the world today are very much relevant to the process of peer review. Identity in Peer Review was the first Peer Review Week theme to be chosen by the public, and will explore the role of both personal and social identity in peer review. It is hoped that the various events and activities during the week will foster a more diverse, equitable and inclusive approach to peer review. Academia has placed increasing emphasis on taking steps to ensure the research literature reflects and amplifies diverse voices, and the manner in which peer review is conducted is of course key to that.

Peer Review Week steering committee co-chair Danielle Padula says: “If the past year has taught us anything, I think it’s that recognizing the composite of identities that make up who we are as individuals, organizations, and populations, and the links between those identities, is essential to the future of scholarship and, ultimately, global progress. The pandemic has illuminated myriad deep-seated inequities that we need to address in all areas of society, with academia being no exception. And I think that starts with unpacking various aspects of personal and social identity and how we need to rethink the systems in which we operate to acknowledge and make space for diverse identities.”

Looking back to learn about the future is an apt approach, given that the past of peer review is not far behind us, and radical change potentially so near in the future. As ever, focusing on peer review makes a lot of sense for everyone with an interest in knowledge sharing and scholarly communications. Roll on September.

If you are interested in learning more or volunteering, please visit the Peer Review Week website, or you can contact Danielle Padula (dpadula@scholasticahq.com) or Jayashree Rajagopalan (jayashreer@cactusglobal.com), who are co-chairing this year’s PRW steering committee.

The top nine questions on predatory journals – answered!

In the course of researching a book on predatory publishing, Simon Linacre wanted to find answers to some common questions on the subject. In his latest blog post, he shares why straightforward questions rarely get straightforward answers when it comes to this controversial topic.


Have you ever wondered where those questions near the top of a Google search come from? Headlined ‘People Also Ask’ (PAA), the feature was introduced by Google in 2015 to aid search activities and, according to SearchEngineWatch.com, it now appears in around half of all searches. The algorithms that trigger the feature seem to respond most readily to searches phrased as questions and containing multiple keywords, and PAA now forms part of the standard toolbox of any digital marketer, as it opens up a wider range of sites than the top three hits on a Google search engine results page (SERP).

For academic researchers, the feature is probably both a benefit and an irrelevance. While it may help some to gain a wider understanding of what kinds of questions are being asked about a topic – and it certainly helped me in this regard – it will also annoy others with much more sophisticated skills and needs for their search activities, where being sent down a potential blind alley is something to be avoided.

But are the questions posed by the algorithm any use? To put them to the test, here are the top nine results for the question ‘What is a predatory journal?’, posed on Wednesday 23rd June 2021. The initial question reveals four results (see Figure A), clicking on the first answer reveals a further two (Figure B), and clicking on the second question reveals a total of nine questions (Figure C). These questions differ depending on which question is clicked, as the algorithm seeks to supply further questions related to the one clicked on.

Figure A: the four initial ‘People Also Ask’ results

Figure B: the two further questions revealed by clicking the first answer

Figure C: the full set of nine related questions

Each question provides a summary answer and a link through to the original web page, and how useful these pages are inevitably varies greatly. Some sources are blogs, some are university library guides, and others Wikipedia. What is perhaps concerning is the direction the questions take: it is not the sources per se that provide worrying information, but the questions posed in the first place, presumably generated by an algorithm drawing on usage data and relevance to the questions being asked. So, to try and set the record straight in our own small way, here are some short and more realistic answers to the nine questions Google puts forward as most relevant to the predatory journal question:

Q. What is meant be predatory Journal?

Wikipedia supplies as good a short description as any, with the addition that there is rarely, if ever, any peer review at all: “Predatory publishing is an exploitative academic publishing business model that involves charging publication fees to authors without checking articles for quality and legitimacy, and without providing editorial and publishing services that legitimate academic journals provide, whether open access or not.”

Q. How do you know if a journal is predatory?

Common indicators include fake claims of an Impact Factor, lack of information/lies about the Editorial Board, and unrealistic promises of a fast turnaround.

Q. What happens if you publish in a predatory journal?

It stays published – retraction is highly unlikely, and trying to republish the article in a legitimate journal would only compound the problem by breaching publication ethics guidelines.

Q. What is a predatory journal a journal published over Internet?

Predatory journals began life by taking advantage of online publication as well as the Open Access model – both things were simply combined to create the right circumstances for predatory journals to evolve.

Q. Why are predatory journals bad?

Predatory journals do not check the validity or accuracy of submitted research, but present it as if they have. As a result, junk science, propaganda, and faked research can appear and be accessed by other academics and the general public alike, causing confusion and potential harm to anyone adopting that research for another purpose.

Q. Is PLOS ONE a predatory journal?

No, not at all. PLOS ONE, like many so-called ‘mega-journals’, publishes large numbers of articles following a light-touch peer review that nevertheless checks the validity and accuracy of the research articles submitted.

Q. How can you detect and avoid predatory journals?

Research the topic and use the many guidelines provided by university libraries around the world. You can also consult the criteria Cabells uses to identify predatory journals for inclusion in its Predatory Reports database.

Q. How many predatory journals are there?

There are currently 14,647 journals listed on Cabells’ Predatory Reports database.

Q. What is the warning sign that a journal or publisher is predatory?

In addition to the common indicators listed above, other more superficial signs can include poor grammar/spelling, very broad coverage of a topic, or solicitation of article submissions with excessive flattery in spam emails.

The rise and rise of predatory journals and conferences

Editor’s Note: Today’s post is by Tracey Elliott, Ph.D. Dr. Elliott is the Project Director at InterAcademy Partnership (IAP), currently overseeing Combatting Predatory Academic Journals and Conferences.


Predatory academic journals and, even more so, predatory conferences have been given surprisingly little attention in academic circles, despite their rapid growth and sophistication in recent years.  Juxtaposed with the pervasive “publish or perish” research assessment culture, where quantity trumps quality, the research community risks sleepwalking into a perfect storm.  Predatory academic practices are one manifestation of a surge in online scams and deceit that are deluging many sectors, fuelled further by changes in (post-) pandemic lifestyles, but their impact on the knowledge economy, research enterprise, and public policy is potentially profound. 

The InterAcademy Partnership (IAP) – the global network of over 140 academies of science, engineering and medicine – is leading an international project “Combatting predatory journals and conferences” which seeks to better understand the growing menace of these practices, gauge their extent and impact, what drives them and what actions are required to curb them.  With the number of predatory journals now estimated to be at least 14,500 (Cabells) and predatory conferences believed to outnumber legitimate ones (THES), this project is imperative and our recent survey of researchers all over the world is illuminating.

Conducted in November-December 2020, the survey gives concerning insight into the extent and impact of predatory practices across the world.  Based on the 1800+ respondents, two headlines are particularly striking:

1. Over 80% of respondents perceived predatory practices to be a serious problem or on the rise in their country.
2. At least a quarter of respondents had either published in a predatory journal, participated in a predatory conference, or did not know whether they had. Reasons cited included a lack of awareness of such scams and encouragement by peers. Indeed, there is anecdotal evidence to suggest that the use of predatory journals and conferences is embedded, or at least tolerated, in some institutions and networks.

Contrary to some studies suggesting that early career researchers are especially vulnerable, we found no correlation between a researcher’s career stage, or their discipline, and their likelihood of publishing in a predatory journal or participating in a predatory conference. However, there is a small correlation with the economic status of the country in which they work: those in lower- and middle-income countries are more likely to participate or publish than those in high-income countries. If left unchecked, the research gap between higher- and lower-income countries risks widening. Putting definitive guidance on predatory journals behind paywalls, whilst sometimes unavoidable, risks exacerbating this further.

A challenge for such essential services, whether paywalled or not, is how to distinguish fraudulent, deceitful journals from low-quality but well-intentioned and legitimate ones. Whilst bringing the clarity researchers crave, journal safelists and watchlists force an in-or-out binary decision that is increasingly inadequate and unfair. In reality, there is a spectrum of fast-evolving and highly nuanced publishing practices that makes the work of Cabells and its counterparts very difficult. IAP is currently exploring a subset of Cabells-listed predatory journals using internet scraping and spidering techniques to gather data on predatory publishing.

Our project report, anticipated by early 2022, will include recommendations for all key stakeholder communities – researchers, research funders, publishers, academies and universities, libraries, and indexing services. With IAP as a conduit to academies and research communities throughout the world, we will focus on awareness-raising, training, and mentoring resources, and mobilising governments, multilateral and intergovernmental organisations.

Industrial disease

It’s almost four years since Cabells launched its Predatory Reports database, but the battle to overcome predatory journals shows no signs of abating. As a result, Cabells is constantly developing new ways to support authors and their institutions in dealing with the problem, and this week Simon Linacre reports from the virtual SSP Annual Meeting on a new collaboration with Edifix from Inera, which helps identify articles and authors published in predatory journals.

A common retort heard or read on social media whenever there is a discussion of predatory journals goes something like this: “Is there really any harm done?”, “Some research is only good enough for those kinds of journals,” or “Everyone knows those journals are fake.” For the latter two rejoinders there is some justification for taking those perspectives, and if recent global events have taught us anything it is that we need a sense of proportion when dealing with scientific breakthroughs and analysis. But the former point really doesn’t hold water because, when you think it through, a good deal of harm is done to a number of different stakeholders as a result of even one article appearing in a predatory journal.

Predatory journals do researchers and their institutions a huge disservice by claiming to be a reputable outlet for publication. Legitimate journals provide valuable services to both promote and protect authors’ work, which simply doesn’t happen with predatory journals. Essentially, there are three key reasons why authors and their employers can suffer harm from publishing in the wrong journals:

  • Their work may be subject to sub-par peer review, or more likely no peer review at all. The peer review system isn’t perfect, but papers that undergo peer review are better for it. Researchers want to make sure they are publishing in a place that values their work and is willing to devote time and resources to improving it.
  • Versions of record could disappear. One of the advantages of publishing with a reputable journal is that they make commitments to preserve authors’ work. Opportunists looking to make a quick buck are not going to care if your paper is still available in five years – or even five weeks.
  • Published articles will be hard to find. Some predatory journals advertise that they are included in well-known databases like Web of Science, Scopus, or Cabells when they are not. Predatory journals invest nothing in SEO, nor do they work to get their journals indexed in research databases, so the research they publish won’t be easily discoverable.

So, it is in the interests of authors, universities, societies, funders and society itself that research is not lost to predatory publishing activities. Checking against a database such as Predatory Reports will help those stakeholders, but to augment their capabilities Cabells is collaborating with Atypon’s Inera division, specifically its Edifix product, to help prevent ‘citation contamination’ – where illegitimate articles published in predatory journals find their way into the research bloodstream by being referenced in legitimate journals. With Edifix, users can now vet bibliographic reference lists for citations to predatory journals, as identified by Predatory Reports.

This new Edifix web service with the automated Cabells Reference Checking Tool was showcased at SSP’s Annual Meeting (meeting registration required) this week (and previewed in an SSP sponsored session in October 2020) with a host of other new innovations, collaborations and product developments from the scholarly communications industry. While it would have been great to see old friends and colleagues in person at the event, the virtual format enabled much wider, international engagement which contributed to an undoubtedly successful event.

Beware the known unknowns

Following a recent study showing an alarming lack of knowledge and understanding of predatory journals in China, Simon Linacre looks at the potential impact of the world’s biggest producer of research succumbing to the threat of deceptive publications.

That China has achieved something remarkable in its continued growth in research publications is surely one of the most important developments in modern research and scholarly communications. It passed the US in 2018 and all indications suggest it has increased its lead since then, propelled by huge investment in research by the Chinese government.

Cabells sought to reflect this success when it published a list of top Chinese-language management journals in December 2020 following a collaboration with AMBA. However, research on that project also highlighted the significant risk for Chinese scholars of publishing in the wrong journals. Until last year, academics tended to be pushed towards – and recognised for – publishing in Impact Factor journals. That policy has now changed, with more focus on Chinese-language journals as well as other international titles. The concern then arises that some scholars may be lured into publishing in predatory journals amid the shift in policy.

This thought has been fortified by the publication of the article ‘Chinese PhD Students’ Perceptions of Predatory Journals’ (2021) by Jiayun Wang, Jie Xu and Dianyou Chen in the Journal of Scholarly Publishing. Their study looks at the attitudes of over 300 Chinese doctoral students towards predatory journals, making three key findings:

  1. In STEM subjects, students regularly confused predatory journals with Open Access (OA) journals
  2. In Humanities and Social Science subjects, students tended to only identify predatory journals in the Chinese language, but not in English
  3. While the majority of respondents said they had no intention of submitting to predatory journals (mainly due to the potential harm it could do to their reputation), the few that would do so cited quick publication times and easy acceptance as motivating factors.

While there are limitations to the Wang et al. article due to its relatively small sample and restricted scope, it is clear there is at least the potential for widespread use and abuse of the predatory publishing model in China, paralleling what has been observed to a greater or lesser degree around the rest of the world. In conclusion, the authors state:

“PhD candidates in China generally have insufficient knowledge about predatory journals, and also generally disapprove of publishing in them.” (2021, p. 102)

This lack of knowledge is referred to time and time again in articles about predatory publishing, of which there is now a small library to choose from. While there is considerable debate on how to define predatory journals, how to identify them and even how to score them, there is a gap in understanding how to prevent publication in them in the first place, particularly among PhD students and early career researchers (ECRs). Some studies on this aspect of predatory publishing would be very welcome indeed.

Book review – Gaming the Metrics: Misconduct and Manipulation in Academic Research

The issues of gaming metrics and predatory publishing undoubtedly go hand in hand, outputs of the same system that requires academic researchers the world over to sing for their supper in some form or other. However, the two practices are often treated separately, almost as if there were no link at all, so editors Biagioli and Lippman are to be congratulated for bringing them together under the same roof in their book Gaming the Metrics: Misconduct and Manipulation in Academic Research (MIT Press, 2020).

The book is a collection of chapters covering the whole gamut of wrongheaded – or just plain wrong – publication decisions made by authors the world over on where to publish the fruits of their research. This ‘submission decision’ is unenviable, as it inevitably shapes academic careers to a greater or lesser degree. The main reason why authors make poor decisions is laid firmly at the door of the variety of ‘publish or perish’ systems which seek to quantify the outputs of authors with a view to… well, the reason why outputs are quantified is never really explained. However, the reason why such quantification should be a non-starter is well argued by Michael Power in Chapter 3, and by Barbara M. Kehm (Ch. 6) in terms of the ever-popular university rankings. Even peer review comes under attack from Paul Wouters (Ch. 4), but as with the other areas, solutions are either absent or, in Wouters’ case, proffered with minimal detail or real-world context.

Once into the book, any author would quickly realize that the decision to publish is fraught with difficulty, with predatory publishers lurking on the internet to entice articles and APCs from them. As such, any would-be author would be well advised to heed the call ‘caveat scriptor’ and read this book before sending off their manuscript to any journal.

That said, there is also a case for advising ‘caveat lector’ before would-be authors read the book, as there are other areas where additional context would greatly help in addressing the problems of gaming metrics and academic misconduct. When it comes to predatory journals, there is a good deal of useful information in several of the later chapters, especially the case studies in Chapters 7 and 15, which detail a suspiciously prolific Czech author and a sting operation, respectively.

Indeed, these cases provide the kind of context whose absence is perhaps the single biggest failing of the book: through its narrow academic lens it never quite captures the wider picture of why gaming the metrics – and the scholarly communications system as a whole – is ethically wrong, both for those who perpetrate it and, arguably, for the architects of the systems. As with many academic texts that seek to tackle societal problems, an unwillingness to get dirt under the fingernails in pursuit of understanding what’s really going on simply distances the reader from the problem at hand.

As a result, after reading Gaming the Metrics, one is likely simply to shrug one’s shoulders in apathy at the plight of authors and their institutions, whereas a great deal more impact might have been achieved had the approach been less academic and included more case studies and insights into the damage done by predatory publishing practices. After all, the problem with gaming the system is that, for those who suffer, it is anything but a game.

Gaming the Metrics: Misconduct and Manipulation in Academic Research, edited by Mario Biagioli and Alexandra Lippman (MIT Press, February 21, 2020). ISBN: 978-0262537933.

Cabells and scite partner to bring Smart Citations to Journalytics

Cabells, a provider of key intelligence on academic journals for research professionals, and scite, a platform for discovering and evaluating scientific articles, are excited to announce the addition of scite’s Smart Citations to Cabells Journalytics publication summaries.

Journalytics summary card with scite Smart Citations data

Journalytics is a curated database of over 11,000 verified academic journals spanning 18 disciplines, developed to help researchers and institutions optimize decision-making around the publication of research. Journalytics summaries provide publication and submission information and citation-backed data and analytics for comprehensive evaluations.

scite’s Smart Citations allow researchers to see how articles have been cited by providing the context of the citation and a classification describing whether it provides supporting or disputing evidence for the cited claim.
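By way of illustration only, a Smart Citation can be thought of as a citation statement plus a classification label. The record structure and labels in this sketch are assumptions for the purposes of this post, not scite’s actual API or models:

    # Illustrative data shape only - not scite's actual API or models.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class SmartCitation:
        citing_paper: str
        cited_paper: str
        context: str  # the sentence in which the citation appears
        label: str    # assumed labels: "supporting", "mentioning", "disputing"

    citations = [
        SmartCitation("Doe 2021", "Smith 2019",
                      "Our results replicate Smith (2019).", "supporting"),
        SmartCitation("Roe 2022", "Smith 2019",
                      "We could not reproduce the effect in Smith (2019).", "disputing"),
    ]

    # A journal-level view can then aggregate citation labels,
    # rather than reduce everything to a single count:
    print(Counter(c.label for c in citations))
    # Counter({'supporting': 1, 'disputing': 1})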

The inclusion of Smart Citations adds a layer of perspective to Journalytics metrics and gives users a deeper understanding of journal activity by transforming citations from a mere number into contextual data.

Lacey Earle, executive director of Cabells, says, “Cabells is thrilled to partner with scite in order to help researchers evaluate scientific articles through an innovative, comparative-based metric system that encourages rigorous and in-depth research.”

Josh Nicholson, co-founder and CEO of scite says of the partnership, “We’re excited to be working with Cabells to embed our Smart Citations into their Journalytics summaries. Smart Citations help you assess the quantity of citations a journal has received as well as the quality of these citations, with a focus on identifying supporting and disputing citations in the literature.”


about cabells

Cabells generates actionable intelligence on academic journals for research professionals.  On the Journalytics platform, an independent, curated database of more than 11,000 verified scholarly journals, researchers draw from the intersection of expertise, data, and analytics to make confident decisions to better administer research. In Predatory Reports, Cabells has undertaken the most comprehensive and detailed campaign against predatory journals, currently reporting on deceptive behaviors of over 14,000 publications. By combining its efforts with those of researchers, academic publishers, industry organizations, and other service providers, Cabells works to create a safe, transparent and equitable publishing ecosystem that can nurture generations of knowledge and innovation. For more information please visit Cabells or follow us on Twitter, LinkedIn and Facebook.

about scite

scite is a Brooklyn-based startup that helps researchers better discover and evaluate scientific articles through Smart Citations – citations that display the context of the citation and describe whether the article provides supporting or disputing evidence. scite is used by researchers from dozens of countries and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health. For more information, please visit scite, follow us on Twitter, LinkedIn, and Facebook, and download our Chrome or Firefox plugin. For careers, please see our jobs page.