The price of predatory publishing

What is the black market in predatory publishing worth each year? No satisfactory estimate has yet been produced, so Simon Linacre has decided to grab the back of an envelope and an old biro to try to make an educated guess.


Firstly, all of us at Cabells would like to wish everyone well during this unusual and difficult time. We are thinking a great deal about our customers, users, publishers and researchers who must try to maintain their important work during the coronavirus pandemic. Whether you are in lockdown, self-isolating, or are more or less free of restrictions, please be assured that Cabells’ services are still available for your research needs, and if there are any problems with access, please do not hesitate to contact us at journals@cabells.com.

Possibly as a result of spending too much time holed up at home, a friend of mine in scholarly communications asked me last week how much predatory publishers earn each year. I confess that I was a little stumped at first. Although Cabells has created the world’s most comprehensive database of predatory titles in its Journal Blacklist, it does not collate information on article processing charges (APCs), and even if it did, the list prices would bear little relation to what was actually paid, as APCs are often discounted or waived. Indeed, sometimes authors are even charged a withdrawal fee for the predatory journal to NOT publish their article.

So, where do you start trying to estimate a figure? Well, firstly you can try reviewing the literature, but this brings its own risks. The first article I found estimated it to be in the billions of dollars, which immediately failed the smell test. After looking at the articles it had cited, it became clear that an error had been made – the annual value of all APCs is estimated to be in the billions, so the figure for predatory journals is likely to be a lot less.

The second figure was in an article by Greco (2016), which estimated predatory journals to be earning around $75m a year – which seemed more reasonable. But is there any way to validate this? Well, the Federal Trade Commission (FTC) recently closed the case on its judgement against OMICS Group, which it had fined $50.1m in April 2019 (Linacre, Bisaccio & Earle, 2019). After the judgement was passed, there were further checks to ensure the FTC had been fair and equitable in its dealings, and these were all validated. This included the $50m fine and the way it was worked out… which means you could use these calculations and extrapolate them out to all of the journals included in the Cabells Journal Blacklist.

And no, this is not mathematically valid, and nor is it any guarantee of getting near a correct answer – it is just one way of providing an estimate so that we can get a handle on the size of the problem.

So, what the back of my dog-eared envelope shows is that:

  • The judgement against OMICS was for $50,130,811, which represented the revenues it had earned between August 25, 2011 and July 31, 2017 (2,167 days, or 5.94 years)
  • The judgement did not state how many journals OMICS and its subsidiaries operated, but Cabells includes 776 OMICS-related journals in its Journal Blacklist
  • Using these figures, each OMICS journal earned revenues of roughly $10,876 per year (the arithmetic is written out in the sketch after this list)
  • If we were to assume OMICS were a typical predatory publisher (and they are bigger and more professional than most predatory operators) and were to extrapolate that out to the whole Blacklist of 13,138 journals, that’s a value of $142.9m a year
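For transparency, here is that back-of-the-envelope arithmetic written out as a minimal Python sketch; it uses only the figures quoted above and reproduces the $10,876 and $142.9m numbers.

```python
# Back-of-envelope estimate extrapolated from the FTC judgement against OMICS.
judgement_usd = 50_130_811      # FTC judgement covering Aug 25, 2011 - Jul 31, 2017
period_years = 5.94             # 2,167 days / 365, rounded as in the bullets above
omics_journals = 776            # OMICS-related titles in the Cabells Journal Blacklist
blacklist_journals = 13_138     # all titles in the Journal Blacklist

per_journal_per_year = judgement_usd / omics_journals / period_years
market_estimate = per_journal_per_year * blacklist_journals

print(f"Revenue per journal per year: ${per_journal_per_year:,.0f}")  # $10,876
print(f"Extrapolated market size:     ${market_estimate:,.0f}")       # ~$142.9m
```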

I do think this is very much at the top end, as many predatory publishers charge ultra-low APCs to attract authors, while some may have stopped functioning. However, on the flip side, we are adding to the Blacklist all the time and new journals are being created daily. So, I think a reasonable estimate based on the FTC judgement and Cabells data is that the predatory journal market is probably worth between $75m and $100m a year. What the actual figure might be is, however, largely irrelevant. What is relevant is that millions of dollars of funders’ grants, charitable donations and state funding have been wasted on these outlets.

References:

Greco, A. N. (2016). The impact of disruptive and sustaining digital technologies on scholarly journals. Journal of Scholarly Publishing, 48(1), 17–39. doi: 10.3138/jsp.48.1.17

Linacre, S., Bisaccio, M., & Earle, L. (2019). Publishing in an environment of predation: The many things you really wanted to know, but did not know how to ask. Journal of Business-to-Business Marketing, 26(2), 217–228. doi: 10.1080/1051712X.2019.1603423

 

The future of research evaluation

Following last week’s guest post from Rick Anderson on the risks of predatory journals, we turn our attention this week to legitimate journals and the wider issue of evaluating scholars based on their publications. With this in mind, Simon Linacre recommends a broad-based approach that keeps the goal of such activities permanently front and center.


This post was meant to be ‘coming to you LIVE from London Book Fair’, but as you may know, this event has been canceled, like so many other conferences and other public gatherings in the wake of the coronavirus outbreak. While it is sad to miss the LBF event, meetings will take place virtually or in other places, and it is to be hoped the organizers can bring it back bigger and better than ever in 2021.

Some events are still going ahead, however, in the UK, and it was my pleasure to attend the LIS-Bibliometrics Conference at the University of Leeds last week to hear the latest thinking on journal metrics and performance management for universities. The day-long event was themed ‘The Future of Research Evaluation’, and it included both longer talks from key people in the industry, and shorter ‘lightning talks’ from those implementing evaluation systems or researching their effectiveness in different ways.

There was a good deal of debate, both on the floor and on Twitter (see #LisBib20 to get a flavor), with perhaps the most interest in speaker Dr. Stephen Hill, who is Director of Research at Research England and chair of the steering group for the 2021 Research Excellence Framework (REF) in the UK. Those of us hoping for crumbs from his table in the shape of a steer for the next REF were sadly disappointed, as he was giving nothing away. However, he did say that he saw four current trends shaping the future of research evaluation:

  • Outputs: increasingly they will be diverse, include things like software code, be more open, more collaborative, more granular and potentially interactive rather than ‘finished’
  • Insight: different ways of understanding may come into play, such as indices measuring interdisciplinarity
  • Culture: the context of research and how it is received in different communities could be explored much more
  • AI: artificial intelligence will become a bigger player both in terms of the research itself and how the research is analyzed, e.g. the Unsilo tools or so-called ‘robot reviewers’ that can remove any reviewer bias.

Rather revealingly, Dr. Hill suggested a fifth trend might be societal impact, despite such impact having been one of the defining traits of both the current and previous REFs. Perhaps the full picture has yet to be understood regarding impact, and there is some suspicion that many academics have yet to buy in to the idea at all. Indeed, one of the takeaways from the day was that there was little input into the discussion from academics themselves, and one wonders if they might have contributed to the debate about the future of research evaluation – it is their research being evaluated, after all.

There was also a degree of distrust among the librarians present towards publishers, and one delegate poll should be of particular interest to them as it showed what those present thought were the future threats and challenges to research evaluation. The top three threats were identified as publishers monopolizing the area, commercial ownership of evaluation data, and vendor lock-in – a result which led to a lively debate around innovation and how solutions could be developed if there was no commercial incentive in place.

It could be argued that while the UK has taken the lead on impact and been busy experimenting with the concept, the rest of the higher education world has been catching up with a number of different takes on how to recognize and reward research that has a demonstrable benefit. All this means that we are yet to see the full ‘impact of impact,’ and certainly, we at Cabells are looking carefully at what potential new metrics could aid this transition. Someone at the event said that bibliometrics should be “transparent, robust and reproducible,” and this sounds like a good guiding principle for whatever research is being evaluated.

Guest Post – Why Should We Worry about Predatory Journals? Here’s One Reason

Editor’s Note: This post is by Rick Anderson, Associate Dean for Collections & Scholarly Communication in the J. Willard Marriott Library at the University of Utah. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro and as Director of Resource Acquisition at the University of Nevada, Reno. Rick serves on numerous editorial and advisory boards and is a regular contributor to the Scholarly Kitchen. He has served as president of the North American Serials Interest Group (NASIG), and is a recipient of the HARRASSOWITZ Leadership in Library Acquisitions Award. In 2015 he was elected President of the Society for Scholarly Publishing. He serves as an unpaid advisor on the library boards of numerous publishers and organizations including bioRxiv, Elsevier, JSTOR, and Oxford University Press.


This morning I had an experience that is now familiar, and in fact a several-times-daily occurrence—not only for me, but for virtually every one of my professional colleagues: I was invited to submit an article to a predatory journal.

How do I know it was a predatory journal? Well, there were a few indicators, some strong and some merely suggestive. For one thing, the solicitation addressed me as “Dr. Rick Anderson,” a relatively weak indicator given that I’m referred to that way on a regular basis by people who assume that anyone with the title “Associate Dean” must have a doctoral degree.

However, there were other elements of this solicitation that indicated much more strongly that this journal cares not at all about the qualifications of its authors or the quality of its content. The strongest of these was the opening sentence of the message:

Based on your expertise & research on Heart [sic], it is an honour to invite you to submit your article for our Journal of Cardiothoracic Surgery and Therapeutics.

This gave me some pause, since I have no expertise whatsoever “on Heart,” and have never published anything on any topic even tangentially related to medicine. Obviously, no legitimate journal would consider me a viable target for a solicitation like this.

Another giveaway: the address given for this journal is 1805 N Carson St., Suite S, Carson City, NV. As luck would have it, I lived in northern Nevada for seven years and am quite familiar with Carson City. The northern end of Carson Street—a rather gritty stretch of discount stores, coffee shops, and motels with names designed to signal affordability—didn’t strike me as an obvious location for any kind of multi-suite office building, let alone a scientific publishing office, but I checked on Google Maps just to see. I found that 1805 North Carson Street is a non-existent address; 1803 North Carson Street is occupied by the A to Zen Thrift Shop, and Carson Coffee is at 1825. There is no building between them.

Having thus had my suspicion stoked, I decided to give this journal a real test. I created a nonsense paper consisting of paragraphs taken at random from articles originally published in a legitimate journal of cardiothoracic medicine, and gave it a title consisting of syntactically coherent but otherwise randomly-chosen terms taken from the discipline. I invented several fictional coauthors, created an email account under the assumed name of the lead author, submitted the manuscript via the journal’s online system and settled down to wait for a decision (which was promised within “14 days,” following the journal’s usual “double blind peer review process”).

***

While we wait for word from this journal’s presumably distinguished team of expert peer reviewers, let’s talk a little bit about the elephant in the room: the fact that the journal we’re testing purports to publish peer-reviewed research on the topic of heart surgery.

The problem of deceptive or “predatory” publishing is not new; it has been discussed and debated at length, and it might seem as if there’s not much new to be said about it: as just about everyone in the world of scholarly publishing now knows, a large and apparently growing number of scam artists have created thousands upon thousands of journals that purport to publish rigorously peer-reviewed science, but will, in fact, publish whatever is submitted (good or bad) as long as it’s accompanied by an article processing charge. Some of these outfits go to great expense to appear legitimate and realize significant revenues from their efforts; OMICS (which was subject to a $50 million judgment after being sued by the Federal Trade Commission for deceptive practices) is probably the biggest and most famous of predatory publishing outfits. But most of these outfits are relatively small; many seem to be minimally staffed fly-by-night operations that have invested in little more than the creation of a website and an online payment system. The fact that so many of these “journals” exist and publish so many articles is a testament to either the startling credulity or the distressing dishonesty of scholars and scientists the world over—or, perhaps, both.

But while the issue of predatory publishing, and its troubling implications for the integrity of science and scholarship, is discussed regularly in broad terms within the scholarly-communication community, I want to focus here on one especially concerning aspect of the phenomenon: predatory journals that falsely claim to publish rigorously peer-reviewed science in fields that have a direct bearing on human health and safety.

In order to try to get a general idea of the scope of this issue, I did some searching within Cabell’s Journal Blacklist to see how many journals from such disciplines are listed in that database. My findings were troubling. For example, consider the number of predatory journals found in Cabell’s Blacklist that publish in the following disciplines (based on searches conducted on 25 and 26 November 2019):

Disciplinary Keyword       # of Titles
Medicine                   3,818
Clinical                   300
Cancer                     126
Pediatrics                 64
Nutrition                  88
Surgery                    159
Neurology                  39
Climate                    25
Brain                      24
Neonatal                   16
Cardiovascular             51
Dentistry                  44
Gynecology                 44
Alzheimer’s                10
Structural Engineering     10
Anesthesiology             21
Oncology                   74
Diabetes                   51

Obviously, it’s concerning when scholarship or science of any kind is falsely represented as having been rigorously reviewed, vetted, and edited. But it’s equally obvious that not all scholarship or science has the same impact on human health and safety. A fraudulent study in the field of sociology certainly has the capacity to do significant damage—but perhaps not the same kind or amount of damage as a fraudulent study in the field of pediatric anesthesiology, or diagnostic oncology. The fact that Cabell’s Blacklist has identified nearly 4,000 predatory journals in the general field of medicine is certainly cause for very serious concern.

At the risk of offending my hosts, I’ll just add here that this fact leads me to really, really wish that Cabell’s Blacklist were available to the general public at no charge. Recognizing, of course, that a product like this can’t realistically be maintained at zero cost—or anything close to zero cost—this raises an important question: what would it take to make this resource available to all?

I can think of one possible solution. Two very large private funding agencies, the Bill & Melinda Gates Foundation and the Wellcome Trust, have demonstrated their willingness to put their money where their mouths are when it comes to supporting open access to science; both organizations require funded authors to make the published results of their research freely available to all, and allow them to use grant funds to pay the attendant article-processing charges. For a tiny, tiny fraction of their annual spend on research and on open-access article processing charges, either one of these grantmakers could underwrite the cost of making Cabell’s Blacklist freely available. How tiny? I don’t know what Cabell’s costs are, but let’s say, for the sake of argument, that it costs $10 million per year to maintain the Blacklist product, with a modest amount of profit built in. That would represent two tenths of a percent of the Gates Foundation’s annual grantmaking, or 2.3 tenths of a percent of Wellcome’s.
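As a quick sanity check on those percentages, here is the same arithmetic in a few lines of Python; the $10 million annual cost is the post’s own illustrative assumption, and the grantmaking totals shown are simply the figures implied by the quoted percentages.

```python
blacklist_cost = 10_000_000   # hypothetical annual cost of the Blacklist, assumed above
gates_share = 0.002           # "two tenths of a percent"
wellcome_share = 0.0023       # "2.3 tenths of a percent"

# Annual grantmaking figures implied by the quoted percentages
print(f"Gates Foundation: ${blacklist_cost / gates_share:,.0f} per year")    # $5.0bn
print(f"Wellcome Trust:   ${blacklist_cost / wellcome_share:,.0f} per year") # ~$4.3bn
```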

This, of course, is money that they would then not be able to use to directly subsidize research. But since both grantmakers already commit a much, much larger percentage of their annual grantmaking to APCs, this seems like a redirection of funds that would yield tremendous value for dollar.

Of course, underwriting a service like Cabell’s Blacklist would entail acknowledging that predatory publishing is real, and a problem. Oddly enough, this is not universally acknowledged, even among those who (one might think) ought to be most concerned about the integrity of the scholcomm ecosystem and about the reputation of open access publishing. Unfortunately, among many members of that ecosystem, APC-funded OA publishing is largely—and unfairly—conflated with predatory publishing.

***

Well, it took much longer than promised (or expected), but after receiving, over a period of two months, occasional messages telling me that my paper was in the “final peer review process,” I finally received the long-awaited response in late January: “our” paper had been accepted for publication!

Journal Blacklist entry for Journal of Cardiothoracic Surgery and Therapeutics

Over the course of several subsequent weeks I received a galley proof for my review—along with an invoice for an article-processing charge in the amount of $1,100. In my guise as lead author, I expressed shock and surprise at this charge; no one had said anything to me about an APC when my work was solicited for publication. I received a conciliatory note from the editor, explaining that the lack of notice was due to a staff error, and further explaining that the Journal of Cardiothoracic Surgery and Therapeutics is an open-access journal and uses APCs to offset its considerable costs. He said that by paying this fee and allowing publication to go forward I would be ensuring that the article “will be available freely which allows the scientific community to view, download, distribution of an article in any medium (provided that the original work is properly cited) thereby increasing the views of article.” He also promised that our article will be indexed “in Crossref and many other scientific databases.” I responded that I understood the model but had no funds available to pay the fee, and would therefore have to withdraw the paper. “You may consider our submission withdrawn,” I concluded.

Then something interesting happened. My final communication bounced back. I was informed by a system-generated message that my email had been “waitlisted” by a service called Boxbe, and that I would have to add myself to the addressee’s “guest list” in order for it to be delivered. Apparently, the editor no longer wanted to hear from me.

Also interesting: despite my nonpayment of the APC, the article has now been published and can be seen here. It will be interesting to see how long it remains in the journal.

We need to be very clear about one thing here: the problem with my article is not that it represents low-quality science. The problem with my article is that it is nonsense and it is utterly incoherent. Not only is its content entirely plagiarized, it’s so randomly assembled from such disparate sources that it could not possibly be mistaken for an actual study by any informed reader who took the time to read any two of its paragraphs. Furthermore, it was “written” by authors who do not exist, whose names were taken from famous figures in history and literature, and whose institutional affiliations are entirely fictional. (There is no “Brockton State University,” nor is there a “Massapequa University,” nor is there an organization called the “National Clinics of Health.”)

What all of this means is that the fundamental failing of this journal—as it is of all predatory journals—is not its low standards, or the laxness of its peer review and editing. Its fundamental failing is that despite its claims, and despite charging authors for these services, it has no standards at all, performs no peer review, and does no editing. If it did have any standards whatsoever, and if it performed even the most perfunctory peer review and editorial oversight, it would have detected the radical incoherence of my paper immediately.

One might reasonably ask, though: if my paper is such transparently incoherent nonsense, why does its publication pose any danger? No surgeon in the real world will be led by this paper to do anything in an actual surgical situation, so surely there’s no risk of it affecting a patient’s actual treatment in the real world.

This is true of my paper, no doubt. But what the acceptance and publication of my paper demonstrates is not only that the Journal of Cardiothoracic Surgery and Therapeutics will publish transparent nonsense, but also—more importantly and disturbingly—that it will publish anything. Dangerously, this includes papers that do not consist of actual nonsense, but that were flawed enough to be rejected by legitimate journals, or that were written by the employees of device makers or drug companies that have manipulated their data so as to promote their own products, or that were written by dishonest surgeons who have generally legitimate credentials but are pushing crackpot techniques or therapies. The danger illustrated by my paper is not so much that predatory journals will publish literal nonsense; the more serious danger is that they will uncritically publish seriously flawed science while presenting it as carefully-vetted science.

In other words, the defining characteristic of a predatory journal is not that it’s a “low-quality” journal. The defining characteristic of a predatory journal is that it falsely claims to provide quality control of any kind—precisely because to do so would restrict its revenue flow. This isn’t to say that no legitimate science ever gets published in predatory journals; I’m sure quite a bit does since there’s no reason why a predatory journal would reject it, any more than it would reject the kind of utter garbage this particular journal has now published under the purported authorship of Jackson X. Pollock. But the appearance of some legitimate science does nothing to resolve the fundamental issue here, which is one of scholarly and scientific fraud.

Such fraud is distressing wherever it occurs. In the context of cardiothoracic surgery—along with all of the other health-related disciplines in which predatory journals currently publish—it’s terrifying.

Or it should be, anyway.

Predatory publishing from A to Z

During 2019, Cabells published on its Twitter feed (@CabellsPublish) at least one of its 70+ criteria for including a journal on the Cabells Journal Blacklist, generating great interest among its followers. For 2020, Simon Linacre highlights a new initiative below where Cabells will publish its A-Z of predatory publishing each week to help authors identify and police predatory publishing behavior.


This week a professor I know well approached me for some advice. He had been invited by a conference to present a plenary address on his research area but had been asked to pay the delegate fee. Something didn’t seem quite right, so, knowing I had some knowledge in this area, he asked me for some guidance. Having spent considerable time looking at predatory journals, I did not take long to notice the signs of predatory activity: a direct commissioning strategy from an unknown source; a website covering hundreds of conferences; conferences covering very wide subject areas; unfamiliar conference organizers; guaranteed publication in an unknown journal; evidence online of other researchers questioning the conference and its organizers’ legitimacy.

Welcome to ‘C for Conference’ in Cabells’ A-Z of predatory publishing.

From Monday 17 February, Cabells will be publishing some quick hints and tips to help authors, researchers and information professionals find their way through the morass of misinformation produced by predatory publishers and conference providers. This will include links to helpful advice, as well as the established criteria Cabells uses to judge if a journal should be included in its Journal Blacklist. In addition, we will be including examples of predatory behavior from the 12,000+ journals currently listed on our database so that authors can see what predatory behavior looks like.

So, here is a sneak preview of the first entry: ‘A is for American’. ‘American’ is a highly likely marker of predatory journal activity, as the association with the USA lends credence to any claim of legitimacy a journal may adopt to hoodwink authors into submitting articles. In the Cabells Journal Blacklist there are over 1,900 journals that include the name ‘American’ in their titles or publisher name. In comparison, just 308 Scopus-indexed journals start with the word ‘American’. So, for example, the American Journal of Social Issues and Humanities purports to be published from the USA, but this cannot be verified, and it has 11 violations of Journal Blacklist criteria, including the use of a fake ISSN and the complete lack of any editor or editorial board member listed on the journal’s website.

‘A’ also stands for ‘Avoid at all costs’.

Please keep an eye out for the tweets and other blog posts related to this series, which we will use from time to time to dig deeper into understanding more about predatory journal and conference behavior.

Look before you leap!

A recent paper published in Nature has provided a tool for researchers to use to check the publication integrity of a given article. Simon Linacre looks at this welcome support for researchers, and how it raises questions about the research/publication divide.

Earlier this month, Nature published a well-received comment piece by an international group of authors entitled ‘Check for publication integrity before misconduct’ (Grey et al., 2020). The authors wanted to create a tool to enable researchers to spot potential problems with articles before they become too invested in the research, citing a number of recent examples of misconduct. The tool they came up with is a checklist called REAPPRAISED, which uses each letter to identify an area – such as plagiarism or statistics and data – that researchers should check as part of their workflow.
 
As a general rule for researchers, and as a handy mnemonic, the tool seems to work well, and authors using it as part of their research should avoid the potential pitfalls of relying on poorly researched and published work. Perhaps we at Cabells would argue that an extra ‘P’ should be added for ‘Predatory’, covering the checks researchers should make to ensure the journals they are using and intend to publish in are legitimate. To do this comprehensively, we would recommend using our own criteria for the Cabells Journal Blacklist as a guide and, of course, using the database itself where possible.
 
The guidelines also raise a fundamental question for researchers and publishers alike as to where research ends and publishing starts. For many involved in academia and scholarly communications, the two worlds are inextricably linked and overlap, but are nevertheless different. Faculty members of universities do their research thing and write articles to submit to journals; publishers manage the submission process and publish the best articles for other academics to read and in turn use in their future research. 
 
Journal editors seem to sit at the nexus of these two areas, as they tend to be academics themselves while working for the publisher, and as such have a foot in both camps. But while they are knowledgeable about the research that has been done and may actively research themselves, as editors their role is performed on behalf of the publisher, and they ultimately decide which articles are good enough to be recorded in their publication – the proverbial gatekeeper.
 
What the REAPPRAISED tool suggests, however, is that for authors the notional research/publishing divide is not a two-stage process, but rather a continuum. Only by embarking on research intent on fully apprising themselves of all aspects of publication integrity can authors guarantee the integrity of their own research, which in turn includes how and where that research is published. Rather than treating it as a two-step process, authors can better ensure the quality of their research AND publications by including all publishing processes as part of their own research workflow. By doing this, and using tools such as REAPPRAISED and the Cabells Journal Blacklist along the way, authors can take better control of their academic careers.


Beware of publishers bearing gifts

In the penultimate post of 2019, Simon Linacre looks at the recent publication of a new definition of predatory publishing and challenges whether such a definition is fit for purpose for those who really need it – authors.


In this season of glad tidings and good cheer, it is worth reflecting that not everyone who approaches academic researchers bearing gifts is necessarily Father Christmas. Indeed, the seasonal messages popping into their inboxes at this time of year may offer opportunities to publish that seem too good to miss, but in reality, they could easily be a nightmare before Christmas.
 
Predatory publishers are the very opposite of Santa Claus. They will come into your house, eat your mince pies, but rather than leave you presents they will steal your most precious possession – your intellectual property. Publishing an article in a predatory journal could ruin an academic’s career, and it is very hard to undo once it has been done. Interestingly, one of the most popular case studies this year on COPE’s website is on what to do if you are unable to retract an article from a predatory journal in order to publish it in a legitimate one. 
 
Cabells has added over two thousand journals to its Journal Blacklist in 2019 and will reach 13,000 in total in the New Year. Identifying a predatory journal can be tricky, which is why they are often so successful in duping authors; yet defining exactly what a predatory journal is can be fraught with difficulty. In addition, some commentators do not like the term – from an academic perspective ‘predatory’ is hard to define, while others think it is too narrow. ‘Deceptive publishing’ has been put forward, but this, in turn, could be seen as too broad.
 
Cabells uses over 70 criteria to identify titles for inclusion in its Journal Blacklist and widens the net to encompass deceptive, fraudulent and/or predatory journals. Defining what characterizes these journals in just a sentence or two is hard, but this is what a group of academics has done following a meeting in Ottawa, Canada earlier in 2019 on the topic of predatory publishing. The output of this meeting was the following definition:
 
“Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices.” (Grudniewicz et al., 2019)
 
The definition is presented as part of a comment piece published in Nature last week and came from a consensus reached at the Ottawa meeting. It is a pity that Cabells was not invited to the event and given the opportunity to contribute. As it is, the definition and accompanying explanation have been met with puzzlement in the Twittersphere, with a number of eminent Open Access advocates saying it allows almost any publisher to be described as predatory. For it to be relevant, it will need to be adopted and used by researchers globally as a test for any journal they are thinking of submitting to. Only time will tell if this will be the case.


From all of us at Cabells, we wish everyone a joyous holiday season and a healthy New Year. Our next blog will be published on January 15, 2020.

Will academia lead the way?

Universities are usually expected to have all the answers – they are full of clever people after all. But sometimes, they need some help to figure out specific problems. Simon Linacre attended a conference recently where the questions being asked of higher education are no less than solving the problems of climate change, poverty, clean water supply and over a dozen more similar issues. How can academic institutions respond?


Most people will be aware of the United Nations’ Sustainable Development Goals (SDGs), which it adopted to solve 17 of the world’s biggest problems by 2030. Solving the climate change crisis by that date has perhaps attracted the most attention, but all of the goals present significant challenges to global society.

Universities are very much at the heart of this debate, and there seems to be an expectation that, because of the position they have in facilitating research, they will hold the key to solving these major problems. And so far they seem to have taken up the challenge with some gusto, with new research centers and funding opportunities appearing all the time for those academics aiming to contribute to these global targets in some way. What seems to be missing, however, is that many academics don’t seem to have received the memo on what they should be researching.
 
Following several conversations at conferences and with senior management at a number of universities, two themes recur when it comes to existing research programs: problems with ‘culture and capabilities’. By culture, university hierarchies report that their faculty members are still as curious and keen to do research as ever, but they are not as interested when they are told to focus their energies on certain topics. And when they do, they lack the motivation or incentives to ensure the outcomes of their research lie in real-world impact. For the academic, impact still means a smallish number with three decimal places – i.e., the Impact Factor.

In addition, when it comes to the capability of undertaking the kind of research that is likely to help move the SDGs forward, academics have not had any training, guidance, or support in what to do. In the UK, for example, where understanding and exhibiting impact is further forward than anywhere else in the world thanks to the Research Excellence Framework (REF), there still seem to be major issues with academics focusing on research that will get published rather than research that will change things. In one conversation, while I was referring to research outcomes as real-world benefits, an academic was talking about the quality of journals in which research would be published. Both are legitimate research outcomes, but publication is still way ahead in terms of cultural expectations. And internal incentives remain far behind the overarching aims stated by governments and research organizations.

Perhaps we are being too optimistic to expect the grinding gears of academia to move more smoothly towards a major culture change, and perhaps the small gains already being made, together with the work done in the public space by the likes of Greta Thunberg, will ultimately be enough to enable real change. But when the stakes are so high and the benefits are so great, maybe our expectations should weigh heavily on academia – its people are probably best placed to solve the world’s problems, after all.

From Lisbon to Charleston, Cabells has you covered

This week, Cabells is fortunate enough to connect with colleagues and friends, new and old, across the globe in Lisbon, Portugal at the GBSN 2019 Annual Conference, and in Charleston, South Carolina at the annual Charleston Conference. We relish these opportunities to share our experiences and learn from others, and both conference agendas feature industry leaders hosting impactful sessions covering myriad thought-provoking topics. 

At the GBSN conference in Lisbon, Simon Linacre, Cabells Director of International Marketing and Development, is co-leading the workshop “Research Impact for the Developing World,” which explores ideas to make research more impactful and relevant in local contexts. At the heart of the matter is the notion that unless the global business community is more thoughtful and proactive about the development of research models, opportunities for positively impacting business and management in the growth markets of the future will be lost. We know all in attendance will benefit from Simon’s insights and leadership in working through this important topic.


At the Charleston Conference, a lively and eventful day at the vendor showcase on Tuesday was enjoyed by all and our team was reminded once again how wonderful it is to be a part of the scholarly community. We never take for granted how fortunate we are to have the opportunity to share, learn, and laugh with fellow attendees. 

 

We are always excited to pass along news on the projects we are working on, learn about what we might expect down the road, and consider areas we should focus on going forward. Hearing what is on the collective mind of academia and how we can help move the community forward is what keeps us going. And things are just getting started! With so many important and interesting sessions on the agenda in Charleston, our only regret is that we can’t attend them all!

Bringing clarity to academic publishing

How do you know if a journal is a good or a bad one? It is a simple enough question, but there is a lack of clear information out there for researchers, and often scams that lay traps for the unaware. In his latest post, Simon Linacre presents some new videos from Cabells that explain what it does to ensure authors can keep fully informed.


On a chilly spring day in Edinburgh, a colleague and I were asked to do what nobody really wants to do if they can help it, and that is to ‘act natural’. It is one of life’s great ironies that it is so difficult to act naturally when told to do so. However, it was for a good cause, as we had been asked to explain to people through a short film what it was that Cabells did and why we thought it was important.

Video as a medium has been relatively ignored by scholarly publishers until quite recently. It has of course been around for decades, and it has been possible to embed video on websites next to articles for a number of years. However, embedding video into PDFs has been tricky, and as every publisher who has asked academics about their user needs will tell you – academics ‘just want the pdf’. As a result, there has been little in the way of innovation when it comes to scholarly communication, despite some brave attempts such as video journals, video abstracts and other accompaniments to the humble article.

Video has been growing as a search medium, particularly for younger academics, and it can be much more powerful when it comes to engagement and social media. Stepping aside from the debate about what constitutes impact and whether Altmetrics and hits via social media really mean anything, video can be ‘sticky’ in the sense that people spend longer watching it than they do skipping over words on a web page. As such, the feeling is that video is a medium whose time in scholarly communications may be yet to come.

So, in that spirit, Cabells has shot a short video with some key excerpts that take people through the Journal Whitelist and Journal Blacklist. It is hoped that it answers some questions that people may have, and spurs others to get in touch with us. The film is the first step in Cabells’ development of a number of resources, across different platforms, that will help researchers absorb knowledge of journals and optimize their decision-making. In a future of Open Access, new publishing platforms, and multiple publishing choices, the power to publish will increasingly be in the hands of the author, with the scholarly publishing industry seeking ways to satisfy their needs. Knowledge about publishing is the key to unlocking that power.

Updated CCI and DA metrics hit the Journal Whitelist

Hot off the press, newly updated Cabell’s Classification Index© (CCI©) and Difficulty of Acceptance© (DA©) scores for all Journal Whitelist publication summaries are now available. These insightful metrics are part of our powerful mix of intelligent data leading to informed and confident journal evaluations.

Research has become increasingly cross-disciplinary and, accordingly, an individual journal might publish articles relevant to several fields.  This means that researchers in different fields often use and value the same journal differently. Our CCI© calculation is a normalized citation metric that measures how a journal ranks compared to others in each discipline and topic in which it publishes and answers the question, “How and to whom is this journal important?” For example, a top journal in computer science might sometimes publish articles about educational technology, but researchers in educational technology might not really “care” about this journal the same way that computer scientists do. Conversely, top educational technology journals likely publish some articles about computer science, but these journals are not necessarily as highly regarded by the computer science community. In short, we think that journal evaluations must be more than just a number.

[Screenshot: updated CCI© scores in a Journal Whitelist publication summary]

The CCI© gauges how well a paper might perform in specific disciplines and topics and compares the influence of journals publishing content from different disciplines. Further, within each discipline, the CCI© classifies a journal’s influence for each topic that it covers. This gives users a way to evaluate not just how influential a journal is, but also the degree to which a journal influences different disciplines.
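Cabells does not publish the CCI© formula, but the underlying idea of a discipline-normalized rank can be illustrated with a toy sketch like the one below. The journals, citation rates, and percentile calculation are all invented for illustration; this is an assumption-laden sketch, not Cabells’ actual data or method.

```python
from collections import defaultdict

# Toy illustration of a discipline-normalized journal rank: the same journal
# can sit at different percentiles in the different disciplines it covers.
# All data below is fabricated for the example.
journals = {
    # journal name: (disciplines covered, citations per article)
    "Journal A": ({"computer science", "educational technology"}, 4.2),
    "Journal B": ({"educational technology"}, 2.8),
    "Journal C": ({"computer science"}, 1.1),
    "Journal D": ({"educational technology"}, 6.0),
}

by_discipline = defaultdict(list)
for name, (disciplines, cites) in journals.items():
    for discipline in disciplines:
        by_discipline[discipline].append((cites, name))

for discipline, entries in sorted(by_discipline.items()):
    entries.sort()  # rank journals by citation rate within this discipline
    n = len(entries)
    for rank, (cites, name) in enumerate(entries, start=1):
        print(f"{discipline}: {name} at the {100 * rank / n:.0f}th percentile")
```

In this toy example, Journal A ranks top in computer science but only mid-table in educational technology – the same journal, valued differently by each community, which is the intuition behind a per-discipline classification.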

For research to have real impact it must first be seen, which makes maximizing visibility a priority for many scholars. Our Difficulty of Acceptance© (DA©) metric gives researchers a better way to gauge a journal’s exclusivity and balance the need for visibility against the very real challenge of getting accepted for publication.

[Screenshot: updated DA© ratings in a Journal Whitelist publication summary]

The DA© rating quantifies a journal’s history of publishing articles from top-performing research institutions. These institutions tend to dedicate more faculty, time, and resources to publishing often and in “popular” journals, so a journal that accepts more articles from them will tend to expect the kind of quality or novelty that those resources make possible. Researchers therefore use the DA© to find the journals with the best blend of potential visibility and manageable exclusivity.
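Again purely as an illustration (the real DA© methodology is Cabells’ own), the core idea of measuring the share of a journal’s articles that come from top-performing institutions could be sketched as follows; the institution list and affiliations are invented.

```python
# Toy illustration of a Difficulty of Acceptance-style measure: the share of
# a journal's recent articles authored at top-performing institutions.
# The institution list and article affiliations are fabricated for the example.
top_institutions = {"University X", "Institute Y", "College Z"}

article_affiliations = [
    "University X", "Small College A", "Institute Y",
    "University X", "Regional University B", "College Z",
]

top_share = sum(aff in top_institutions for aff in article_affiliations) / len(article_affiliations)
print(f"Share of articles from top institutions: {top_share:.0%}")  # 67% in this sample
```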

For more information on our metrics, methods, and products, please visit www.cabells.com.