A case study of how bad science spreads

Fake news has been the go-to criticism of the media for some politicians, which in turn has been rejected as propaganda and fear-mongering by journalists. However, as former journalist Simon Linacre argues, the fourth estate needs to put its own house in order first and ensure it is not tripped up by predatory journals.


I class myself as a ‘runner’, but only in the very loosest sense. Inspired by my wife taking up running a few years ago, I decided I should exercise consistently after the numerous half-hearted, unsuccessful attempts I had made over the years. Three years later I have done a couple of half-marathons, run every other day, and track my performance obsessively on Strava. I have also recently started to read articles on running online, and have subscribed to the magazine Runner’s World. So yes, I think I may actually be a runner now.

But I’m also an experienced journalist, a huge cynic, and have a bulls*** radar the size of the Grand Canyon, so even while relaxing with my magazine I like to think I can spot fakery a mile off. And so it proved earlier this summer while reading a piece on how hill running can improve your fitness. This was music to my ears as someone who lives half-way up a valley side, but my interest was then piqued when I saw a reference to the study that formed the basis for the piece, which was to an article in the International Journal of Scientific Research. Immediately, I smelt a rat. “There is no way that is the name of a reputable, peer-reviewed journal,” I thought. And I was right.

But that wasn’t even half of the problem.

After checking Cabells’ Predatory Reports database, I found that not one but TWO journals with that name are listed, both with long lists of breaches of the criteria Cabells uses to identify predatory journals. I was still curious as to the nature of the research, as it could have been legitimate research in an illegitimate journal, or just bad research, full stop. As it turned out, neither journal had ever published any research on hill running and the benefits of increasing VO2 max. So where was the story from?

After some more digging, an article matching the details in the Runner’s World piece could be found in a third similarly named journal, the International Journal of Scientific and Research Publications. The article, far from being the recent breakthrough suggested in the August 2020 issue of Runner’s World, was actually published in August 2017 by two authors from Addis Ababa University in Ethiopia. While the science of the article seems OK, the experiment that produced the results involved just 32 people over 12 weeks, which means it needs further validation with larger samples to confirm its findings. Furthermore, while the journal itself was not included in Cabells’ Predatory Reports database, a review found significant failings, including unusually quick peer review processes and, more seriously, that the “owner/Editor of the journal or publisher falsely claims academic positions or qualifications”. The journal has subsequently been added to Predatory Reports, and the article itself has never been cited in the three years since publication.

Yet one question remains: how did a relatively obscure article, published in a predatory journal and never cited, find its way into a news story in a leading consumer magazine? Interestingly, similar research citing the International Journal of Scientific Research was quoted on MSN.com in May 2020, while other sites have quoted the same research but attributed it to the International Journal of Scientific and Research Publications. It appears likely that, having been quoted online once, the same story has been doing the rounds for three years like a game of ‘Telephone,’ all based on uncited research that may not have been peer reviewed in the first place, that used a small sample size, and that was published in a predatory journal.

While no damage has been done here – underlying all this, it does make sense that hill running can aid overall performance – one need only think about the string of recent health news stories around the coronavirus to see how one unverified article could sweep like wildfire through news outlets and online. This is the danger that predatory journals pose.

They’re not doctors, but they play them on TV

Recently, while conducting investigations of suspected predatory journals, our team came across a lively candidate. At first, as is often the case, the journal in question seemed to look the part of a legitimate publication. However, after taking a closer look and reading through one of the journal’s articles (“Structural and functional brain differences in key opinion journal leaders“) it became clear that all was not as it seemed.

Neurology and Neurological Sciences: Open Access, from MedDocs Publishers, avoids a few of the more obvious red flags that indicate deceitful practices, even to neophyte researchers, but lurking just below the surface are several clear behavioral indicators common to predatory publications.


With a submission date of August 22, 2018, and a publication date of November 13, 2018, the timeline suggests that some sort of peer review of this article may have been carried out. A closer examination of the content, however, makes it evident that little to no peer review actually took place. The first tip-off was the double-take-inducing line in the “Material and methods” section: “To avoid gender bias, we recruited only males.” Wait, what? That’s not how that works.

It soon became clear to our team that even a rudimentary peer review process (or perhaps two minutes on Google) would have led to this article’s immediate rejection. While predatory journals are no laughing matter, especially when it comes to medical research in the time of a worldwide pandemic, it is hard not to get a chuckle from some of the “easter eggs” found within articles intended to expose predatory journals. Some of our favorites from this article:

  • Frasier Crane, a listed author, is the name of the psychiatrist from the popular sitcoms Cheers and Frasier
  • Another author, Alfred Bellow, is the name of the NASA psychiatrist from the TV show I Dream of Jeannie
  • Marvin Monroe is the counselor from The Simpsons
  • Katrina Cornwell is a therapist turned Starfleet officer on Star Trek: Discovery
  • Faber University is the name of the school in Animal House (Faber College in the film)
  • Orbison University, which also doesn’t exist, is likely a tribute to the late, great musician Roy Orbison

And, perhaps our favorite find and one we almost missed:

  • In the “Acknowledgments” section the authors thank “Prof Joseph Davola for his advice and assistance.” This is quite likely an homage to the Seinfeld character “Crazy Joe Davola.”

Though our team had a few laughs with this investigation, they were not long-lived, as this is yet another illustration of the thousands of journals like this one in operation (Predatory Reports currently lists well over 13,000 titles): outlets that publish almost (or literally) anything, usually for a fee, with no peer review or other oversight in place and no consideration of the detrimental effect they may have on science and research.

Predatory Reports listing for Neurology and Neurological Sciences: Open Access

A more nuanced issue that deceptive publications create involves citations. Even if this were legitimate research, the citations it includes would not ‘count’ or be picked up anywhere, since this journal is not indexed in any citation databases. Furthermore, any citation in a predatory journal to a legitimate journal is ‘wasted’, as the legitimate journal cannot count or use that citation appropriately as a foundation for its legitimacy. However, these citations could be counted via Google Scholar, although (thankfully) this journal has zero. Citation ‘leakage’ can also occur, where a legitimate journal’s articles cite predatory journals, effectively ‘leaking’ those citations out of the legitimate scholarly publishing sphere into illegitimate areas. These practices can skew citation metrics, measures often relied upon (sometimes exclusively, often too heavily) to gauge the legitimacy and impact of academic journals.

When all is said and done, as this “study” concludes, “the importance of carefully selecting journals when considering the submission of manuscripts,” cannot be overstated. While there is some debate around the use of “sting” articles such as this one to expose predatory publications, not having them exposed at all is far more dangerous.

Cabells’ top 7 palpable points about predatory publishing practices

In his latest post, Simon Linacre looks at some new stats collated from the Cabells Predatory Reports database that should help inform and educate researchers, better equipping them to evade the clutches of predatory journals.


In recent weeks Cabells has been delighted to work with both The Economist and Nature Index to highlight some of the major issues for scholarly communication that predatory publishing practices represent. As part of the research for both pieces, a number of facts have been uncovered that not only help us understand the issues inherent in this malpractice much better, but should also point researchers away from some of the sadly typical behaviors we have come to expect.

So, for your perusing pleasure, here are Cabells’ Top 7 Palpable Points about Predatory Publishing Practices:

  1. There are now 13,500 predatory journals listed in the Predatory Reports database, which is currently growing by approximately 2,000 journals a year
  2. Over 4,300 journals claim to publish articles in the medical field (this includes multidisciplinary journals) – that’s a third of the journals in Predatory Reports. By discipline, medical and biological sciences have many more predatory journals than other disciplines
  3. Almost 700 journals in Predatory Reports start with ‘British’ (5.2%), while just 50 do on the Journalytics database (0.4%). Predatory journals often call themselves American, British or European to appear well established and legitimate, when in reality relatively few good quality journals have countries or regions in their titles
  4. There are over 5,300 journals listed in Predatory Reports with an ISSN (40%), although many of these are copied, faked, or simply made up. Having an ISSN is not a guarantee of legitimacy for journals
  5. Around 41% of Predatory Reports journals are based in the US, purport to be from the US, or are suspected of being from the US, based on information on journal websites and Cabells’ investigations. This is the highest count for any country, but only a fraction will really have their base in North America
  6. The average predatory journal publishes about 50 articles a year according to recent research from Bo-Christer Björk of the Hanken School of Economics in Helsinki, less than half the output of a legitimate title. Furthermore, around 60% of papers in such journals receive no future citations, compared with 10% of those in reliable ones
  7. Finally, it is worth noting that while we are in the throes of the coronavirus pandemic, there are 41 journals listed in Predatory Reports (0.3%) specifically focused on epidemiology and another 35 on virology (0.6% in total). There could be further growth over the next 12 months, so researchers in these areas should be particularly careful about where they submit their papers.

Reversal of fortune

One of the most common questions Cabells is asked about its Predatory Reports database of journals is whether it has ever “changed its mind” about listing a journal. As Simon Linacre reports, it is less a question of changing the outcome of a decision, but more of a leopard changing its spots.


This week saw the annual release of Journal Impact Factors from Clarivate Analytics, and along with it the rather less august list of journals whose Impact Factors have been suppressed in Web of Science. This year 33 journals were suppressed, all for “anomalous citation patterns found in the 2019 citation data” pertaining to high levels of self-citation. Such a result is a publisher’s worst nightmare, as while these patterns can be due to gaming of citation levels, they can also sometimes reflect the niche nature of a subject area, or other anomalies about a journal.

Sometimes the decision can be changed, although it is often a year or two before the data can prove a journal has changed its ways. Similarly, Cabells offers a review process for every journal it lists in its Predatory Reports database, and when I arrived at the company in 2018, like many people one of the first things I asked was: has Cabells ever had a successful review to delist a journal?

Open for debate

The answer is yes, but the details of those cases are quite instructive as to why journals are included on the database in the first place, and perhaps more importantly why they are not. Firstly, however, some context. It is three years since the Predatory Reports database was first launched, and in that time almost 13,500 journals have been included. Each journal’s report carries a link next to its violations, allowing anyone associated with that journal to view the policy and appeal the decision.


This policy clearly states:

The Cabells Review Board will consider Predatory Journal appeals with a frequency of one appeal request per year, per journal. Publications in Predatory Reports, those with unacceptable practices, are encouraged to amend their procedures to comply with accepted industry standards.

Since 2017, there have been just 20 appeals against decisions to list journals in Predatory Reports (0.15% of all listed journals), and only three have been successful (0.02%). In the first case (Journal A), the journal’s peer review processes were checked and it was determined that some peer reviews were being completed, albeit very lightly. In addition, Cabells’ investigators found a previous example of dual publication. However, following the listing, the journal dealt with the problems and retracted the article it had published as it seemed the author had submitted two identical articles simultaneously. This in turn led to Cabells revising its evaluations so that particular violation does not penalize journals for something where an author was to blame.

In the second review (Journal B), Cabells evaluated the journal’s peer review process and found that it too was not completing full peer reviews, and that it had a number of other issues: it displayed metrics in a misleading way, lacked editorial policies on its website, and did not have a process for plagiarism screening. After its listing in Predatory Reports, the journal’s publisher fixed the misleading elements on its website and demonstrated improvements to its editorial processes. In this second case, the journal’s practices had clearly been misleading and deceptive, but it chose to change and improve them.

Finally, a third journal (Journal C) has just had a successful appeal completed. In this case, there were several problems that the journal was able to correct by being more transparent on its website: it added missing policies or clarified existing ones, and made information about its author fees available. Cabells was also able to evaluate its peer review process after the journal submitted peer review notes for a few articles, and it was evident the journal’s editor was managing a good-quality peer review, hence it has now been removed from the Predatory Reports database. (It should be noted that, as with the other two successful appeals, journals removed from Predatory Reports are not then automatically included in the Cabells Journalytics database.)

Learning curve

Cabells’ takeaway from all of these reviews was that they were indeed successful – they showed that the original identification was correct, and they prompted improvements that marked the journals out as better, and certainly non-predatory, publications. They also fed into the continuing improvement Cabells seeks in refining its Predatory Reports criteria, with a further update due to be published later this summer.

There are also things to learn from unsuccessful reviews. In one case a publisher appealed the listing of a number of its journals on Predatory Reports. However, the appeal only highlighted how bad the journals actually were. Indeed, an in-depth review of each journal not only uncovered new violations, which were subsequently added to the journals’ reports, but also led to the addition of a brand-new violation that will be included in the upcoming revision of the Predatory Reports criteria.

Announcement regarding brand-wide language changes, effective immediately

Since late last year, Cabells has been working on developing new branding for our products that better embody our ideals of integrity and equality in academic publishing and society as a whole. We set out to ensure that the changes represent a total departure from the symbolism inextricably tied to the idea of blacklists and whitelists. In support of, and in solidarity with, the fight against systemic racism that our country is facing, Cabells is implementing brand-wide language changes, effective immediately. The changes implemented today represent only a fraction of those that we will be launching in 2020, but it is an important start.

Users may experience temporary outages as the changes roll out, but normal operations should resume quickly. Customer access will function identically as before the changes, but look for the term “Journalytics” in place of “whitelist” and “Predatory Reports” in place of “blacklist.”

Please contact Mike Bisaccio at michael.bisaccio@cabells.com or (409) 767-8506 with any questions or for additional information.

Cabells thanks the entire community for their support of this effort.

Sincerely,
The Cabells Team

No time for rest

This week The Economist published an article on predatory publishing following collaboration with Cabells. Simon Linacre looks into the findings and points to how a focus on education can avert a disaster for Covid-19 and other important research.


One of the consequences of the all-consuming global interest in the coronavirus pandemic is that it has catapulted science and scientists right onto the front pages and into the public’s range of vision. For the most part, this should be a good thing, as there quite rightly has to be a respect and focus on what the facts say about one of the most widespread viruses there has ever been. However, there have been some moments where science itself has been undermined by some of the rather complex structures that support it. And like it or not, scholarly communication is one of them.

Let’s take the perspective of, say, a mother who is worried about the safety of her kids when they go back to school. Understandably, she starts to look online and in the media for what the science says, as many governments have sought to quell people’s fears by saying they are ‘following the science’. But once online, she is faced with a tangled forest of articles, journals, jargon, paywalls and small print, with the media seemingly supporting contradictory statements depending on the newspaper or website she reads. For example, this week’s UK newspapers have led on how the reduction of social distancing from 2m to 1m can double the infection rate, or be many times better than having no social distancing at all – both factually accurate and from the same peer-reviewed study in The Lancet.

Another area that has seen a good deal of coverage has been preprints, and how they can speed up the dissemination of science… or have the capability of disseminating false data and findings due to lack of peer review, again depending on where you cast your eye. The concerns represented by media bias, the complexity of information and lack of peer review all combine into one huge problem that could be coming down the line very soon, and that is the prospect of predatory journals publishing erroneous, untested information as research in one of the thousands of predatory journals currently active.

This week Cabells collaborated with The Economist to bring some of these issues into focus, highlighting that:

  • Around a third of journals on both the Cabells Journal Whitelist and Blacklist focus on health, while in subjects such as maths and physics predatory journals outnumber legitimate ones
  • Geography plays a significant role, with many more English language predatory journals based in India and Nigeria than reliable ones
  • The average output of a predatory journal is 50 articles a year, although 60% of these will never be cited (compared to 10% for legitimate journals)
  • Despite the lack of peer review or any of the usual publishing checks, an estimated 250,000 such articles are cited in other journals each year
  • The most common severe behaviors (which are liable to lead to inclusion in the Blacklist) are articles missing from issues or archives, no editor or editorial board listed on the website, and claims of misleading metrics or of inclusion in well-known indexes.

Understandably, The Economist makes the link between so much fake or unchecked science being published and the current coronavirus threat, concluding: “Cabells’ guidelines will only start to catch dodgy studies on COVID-19 once they appear in predatory journals. But the fact that so many “scholars” use such outlets means that working papers on the disease should face extra-thorough scrutiny.” We have been warned.

Gray area

While Cabells spends much of its time assessing journals for inclusion in its Verified or Predatory lists, probably the greater number of titles reside outside the parameters of those two containers. In his latest blog, Simon Linacre opens up a discussion on what might be termed ‘gray journals’ and what their profiles could look like.


 

The concept of ‘gray literature’ to describe a variety of information produced outside traditional publishing channels has been around since at least the 1970s, and has been defined as “information produced on all levels of government, academia, business and industry in electronic and print formats not controlled by commercial publishing (i.e. where publishing is not the primary activity of the producing body)”* (1997; 2004). The definition plays an important role in both characterizing and categorizing information outside the usual forms of academic content, and in a way is the opposite of the chaos and murkiness the term ‘gray’ perhaps suggests.

The same could not be said, however, if we were to apply the same term to those journals that inhabit worlds outside the two main databases Cabells curates. Its Journal Whitelist indexes over 11,000 journals that satisfy its criteria to assess whether a journal is a reputable outlet for publication. As such, it is a list of recommended journals for any academic to entrust their research to. The same cannot be said, however, for the Journal Blacklist, which is a list of over 13,000 journals that NO ONE should recommend publication in, given that they have ‘met’ several of Cabells’ criteria.

So, after these two cohorts of journals, what’s left over? This has always been an intriguing question, and one which was alluded to most intelligently recently by Kyle Siler in a piece for the LSE Impact Blog. There is no accurate data available on just how many journals there are in existence, as, like grains of sand, they are created and disappear before they can all be counted. Scopus currently indexes well over 30,000 journals, so a conservative estimate might be that there are over 50,000 journals currently active, with 10,000 titles or more not indexed in any recognized database. Using Cabells’ experience of assessing these journals for both Whitelist and Blacklist inclusion, here are some profiles that might help researchers spot which option might be best for them:

  • The Not-for-Academics Academic Journal: Practitioner journals often fall foul of indexers as they are not designed to be used and cited in the same way as academic journals, despite the fact they look like them. As a result, journals that have quite useful content are often overlooked due to lack of citations or a non-academic style, but can include some good quality content
  • The So-Bad-it’s-Bad Journal: Just awful in every way – poor editing, poor language, uninteresting research and research replicated from elsewhere. However, it is honest and peer reviewed, so provides a legitimate outlet of sorts
  • The Niche-of-a-Niche Journal: Probably focusing on a scientific area you have never heard of, this journal drills down into a subject area and keeps on drilling so that only a handful of people in the world have the foggiest what it’s about. But if you are one of the lucky ones, it’s awesome. Just don’t expect citation awards any time soon
  • The Up-and-Coming Journal: Many indexers prefer to wait a year or two before including a journal in their databases, as citations and other metrics can start to be used to assess quality and consistent publication. In the early years, quality can vary widely, but reading the output so far is at least feasible to aid the publishing decision
  • The Worthy Amateur Journal: Often based in a non-research institution or little-known association, these journals have the right idea but publish haphazardly, have small editorial boards and little financial support, producing unattractive-looking journals that may nevertheless hide some worthy articles.

Of course, when you arrive at the publication decision and happen upon a candidate journal that is not indexed, as we said last week simply ‘research your research’: check against the Blacklist and its criteria to detect any predatory characteristics, research the Editor and the journal’s advisory board for their publishing records and seek out the opinion of others before sending your precious article off into the gray ether.


*Third International Conference on Grey Literature in 1997 (ICGL Luxembourg definition, 1997 – expanded in New York, 2004).


***LAST CHANCE!***

If you haven’t already completed our survey, there is still time to provide your feedback. Cabells is undertaking a review of the current branding for ‘The Journal Whitelist’ and ‘The Journal Blacklist’. As part of this process, we’d like to gather feedback from the research community to understand how you view these products, and which of the proposed brand names you prefer.

Our short survey should take no more than ten minutes to complete, and can be taken here.

As thanks for your time, you’ll have the option to enter into a draw to win one of four Amazon gift vouchers worth $25 (or your local equivalent). More information is available in the survey.

Many thanks in advance for your valuable feedback!

Bad medicine

Recent studies have shown that academics can have a hard time identifying some predatory journals, especially if they come from high-income countries or medical faculties. Simon Linacre argues that this is not surprising given they are often the primary target of predatory publishers, but a forthcoming product from Cabells could help them.


A quick search of PubMed for predatory journals will throw up hundreds of results – over the last year I would estimate there are on average one or two papers published each week on the site (and you can sign up for email alerts on this and other scholarly communication issues at the estimable Biomed News site). The papers tend to fall into two categories – editorial or thought pieces on the blight of predatory journals in a given scientific discipline, or original research on the phenomenon. While the former are necessary to raise the profile of the problem among researchers, they do little to advance the understanding of such journals.

The latter, however, can provide illuminating details about how predatory journals have developed, and in so doing offer lessons in how to combat them. Two such articles were published last week in the field of medicine. In the first paper, ‘Awareness of predatory publishing’, authors Panjikaran and Mathew surveyed over 100 authors who had published articles in predatory journals. While a majority of authors (58%) were ignorant of such journals, of those who said they recognized them, nearly half of the authors from high-income countries (HICs) failed a recognition test, as did nearly a quarter of those from low- and middle-income countries (LMICs). The result, therefore, was a worrying lack of understanding of predatory journals among authors who had already published in them.

The second article was entitled ‘Faculty knowledge and attitudes regarding predatory open access journals: a needs assessment study’ and authored by Swanberg, Thielen and Bulgarelli. In it, they surveyed both regular and medical faculty members of a university to ascertain whether they understood what was meant by predatory publishing. Almost a quarter (23%) said they had not heard of the term previously, but of those who had, 87% said they were confident they could assess journal quality. However, when they were tested with journals from their own fields, only 60% could do so, with scores even lower for medical faculty.

Both papers call for greater education and awareness programs to support academics in dealing with predatory journals, and it is here that Cabells can offer some good news. Later this year Cabells intends to launch a new medical journal product that identifies good quality journals in the vast medical field. Alongside our current products covering most other subject disciplines, the new medical product will enable academics, university administrators, librarians, tenure committees and research managers to validate research sources and the publication outputs of faculty members. They will also still be backed up, of course, by the Cabells Journal Blacklist, which now numbers over 13,200 predatory, deceptive or illegitimate journals. Indeed, in the paper by Swanberg et al., the researchers asked faculty members what support they would like to see from their institution, and the number one answer was a “checklist to help assess journal quality.” This is exactly the kind of feedback Cabells has received over the years, and it is what drove us to develop the new product for medical journals; hopefully it will support good publishing decisions in the future alongside our other products.


PS: A kind request – Cabells is undertaking a review of the current branding for ‘The Journal Whitelist’ and ‘The Journal Blacklist’. As part of this process, we’d like to gather feedback from the research community to understand how you view these products, and which of the proposed brand names you prefer.

Our short survey should take no more than ten minutes to complete, and can be taken here.

As thanks for your time, you’ll have the option to enter into a draw to win one of four Amazon gift vouchers worth $25 (or your local equivalent). More information is available in the survey.

Many thanks in advance for your valuable feedback!

Simon Linacre

The price of predatory publishing

What is the black market in predatory publishing worth each year? No satisfactory estimate has yet been produced, so Simon Linacre has decided to grab the back of an envelope and an old biro to try to make an educated guess.


Firstly, all of us at Cabells would like to wish everyone well during this unusual and difficult time. We are thinking a great deal about our customers, users, publishers and researchers who must try and maintain their important work during the coronavirus pandemic. Whether you are in lockdown, self-isolating, or are more or less free of restrictions, please be assured that Cabells’ services are still available for your research needs, and if there are any problems with access, please do not hesitate to contact us at journals@cabells.com.

Possibly as a result of spending too much time holed up at home, a friend of mine in scholarly communications asked me last week how much predatory publishers earned each year. I confess that I was a little stumped at first. Despite the fact that Cabells has created the world’s most comprehensive database of predatory titles in its Journal Blacklist, it does not collate information on article processing charges (APCs), and even if it did, the listed charges would not bear much relation to what was actually paid, as APCs are often discounted or waived. Indeed, sometimes authors are even charged a withdrawal fee for the predatory journal to NOT publish their article.

So, where do you start trying to estimate a figure? Well, firstly you can try reviewing the literature, but this brings its own risks. The first article I found estimated it to be in the billions of dollars, which immediately failed the smell test. After looking at the articles it had cited, it became clear that an error had been made – the annual value of all APCs is estimated to be in the billions, so the figure for predatory journals is likely to be a lot less.

The second figure was in an article by Greco (2016), which estimated predatory journals to be earning around $75m a year – a figure that seemed more reasonable. But is there any way to validate this? Well, recently the case was closed by the Federal Trade Commission (FTC) on its judgement against OMICS Group, which it had fined $50.1m in April 2019 (Linacre, Bisaccio & Earle, 2019). After the judgement was passed, there were further checks to ensure the FTC had been fair and equitable in its dealings, and these were all validated. This included the $50m fine and the way it was worked out… which means you could take these calculations and extrapolate them out to all of the journals included in the Cabells Journal Blacklist.

And no, this is not mathematically valid, and nor is it any guarantee of getting near a correct answer – it is just one way of providing an estimate so that we can get a handle on the size of the problem.

So, what the back of my dog-eared envelope shows is that:

  • The judgement against OMICS was for $50,130,811, which represented the revenues it had earned between August 25, 2011 and July 31, 2017 (2,167 days, or 5.94 years)
  • The judgement did not state how many journals OMICS and its subsidiaries operated, but Cabells includes 776 OMICS-related journals in its Journal Blacklist
  • Using this data, each OMICS journal earned revenues of $10,876 per year ($50,130,811 ÷ 776 journals ÷ 5.94 years)
  • If we assume OMICS is a typical predatory publisher (though it is bigger and more professional than most predatory operators) and extrapolate that figure out to the whole Blacklist of 13,138 journals, that’s a value of $142.9m a year

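For anyone who wants to check the envelope, the arithmetic above can be reproduced in a few lines of Python. The figures are those quoted in the bullet points (the FTC judgement amount, its date range, and the Cabells Blacklist counts); the yearly figure is rounded to two decimal places, as in the post.

```python
from datetime import date

# FTC judgement against OMICS: revenues earned between these two dates
fine_usd = 50_130_811
days = (date(2017, 7, 31) - date(2011, 8, 25)).days   # 2,167 days
years = round(days / 365, 2)                          # 5.94 years

omics_journals = 776       # OMICS-related titles in the Cabells Journal Blacklist
blacklist_total = 13_138   # all journals in the Blacklist at the time of writing

# Revenue per OMICS journal per year, then extrapolated to the whole Blacklist
per_journal_year = fine_usd / omics_journals / years
market_estimate = per_journal_year * blacklist_total

print(f"${per_journal_year:,.0f} per journal per year")
print(f"${market_estimate / 1e6:.1f}m estimated annual market")
```

Running this gives roughly $10,876 per journal per year and a $142.9m extrapolated market – matching the bullet points above, with all the caveats about extrapolation that follow.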
I do think this is very much at the top end, as many predatory publishers charge ultra-low APCs to attract authors, while some may have stopped functioning altogether. On the flip side, however, we are adding to the Blacklist all the time, and new journals are being created daily. So I think a reasonable estimate, based on the FTC judgement and Cabells data, is that the predatory journal market is probably worth between $75m and $100m a year. What the actual figure might be is, however, largely irrelevant. What is relevant is that millions of dollars of funders’ grants, charitable donations and state funding have been wasted on these outlets.

References:

Greco, A. N. (2016). The impact of disruptive and sustaining digital technologies on scholarly journals. Journal of Scholarly Publishing, 48(1), 17–39. doi:10.3138/jsp.48.1.17

Linacre, S., Bisaccio, M., & Earle, L. (2019). Publishing in an environment of predation: The many things you really wanted to know, but did not know how to ask. Journal of Business-to-Business Marketing, 26(2), 217–228. doi:10.1080/1051712X.2019.1603423


The future of research evaluation

Following last week’s guest post from Rick Anderson on the risks of predatory journals, we turn our attention this week to legitimate journals and the wider issue of evaluating scholars based on their publications. With this in mind, Simon Linacre recommends a broad-based approach that keeps the goal of such activities permanently front and center.


This post was meant to be ‘coming to you LIVE from London Book Fair’, but as you may know, this event has been canceled, like so many other conferences and other public gatherings in the wake of the coronavirus outbreak. While it is sad to miss the LBF event, meetings will take place virtually or in other places, and it is to be hoped the organizers can bring it back bigger and better than ever in 2021.

Some events are still going ahead, however, in the UK, and it was my pleasure to attend the LIS-Bibliometrics Conference at the University of Leeds last week to hear the latest thinking on journal metrics and performance management for universities. The day-long event was themed ‘The Future of Research Evaluation’, and it included both longer talks from key people in the industry, and shorter ‘lightning talks’ from those implementing evaluation systems or researching their effectiveness in different ways.

There was a good deal of debate, both on the floor and on Twitter (see #LisBib20 to get a flavor), with perhaps the most interest in speaker Dr. Stephen Hill, who is Director of Research at Research England and chair of the steering group for the 2021 Research Excellence Framework (REF) in the UK. Those of us hoping for crumbs from his table, in the shape of a steer for the next REF, were sadly disappointed, as he was giving nothing away. However, what he did say was that he saw four current trends shaping the future of research evaluation:

  • Outputs: increasingly they will be diverse, include things like software code, be more open, more collaborative, more granular and potentially interactive rather than ‘finished’
  • Insight: different ways of understanding may come into play, such as indices measuring interdisciplinarity
  • Culture: the context of research and how it is received in different communities could become explored much more
  • AI: artificial intelligence will become a bigger player both in terms of the research itself and how the research is analyzed, e.g. the Unsilo tools or so-called ‘robot reviewers’ that can remove any reviewer bias.

Rather revealingly, Dr. Hill suggested a fifth trend might be societal impact, despite the fact that such impact has been one of the defining traits of both the current and previous REFs. Perhaps the full picture has yet to be understood regarding impact, and there is some suspicion that many academics have yet to buy in to the idea at all. Indeed, one of the takeaways from the day was that there was little input into the discussion from academics, and one wonders what they might have contributed to the debate about the future of research evaluation; it is, after all, their research being evaluated.

There was also a degree of distrust among the librarians present towards publishers, and one delegate poll should be of particular interest to them as it showed what those present thought were the future threats and challenges to research evaluation. The top three threats were identified as publishers monopolizing the area, commercial ownership of evaluation data, and vendor lock-in – a result which led to a lively debate around innovation and how solutions could be developed if there was no commercial incentive in place.

It could be argued that while the UK has taken the lead on impact and been busy experimenting with the concept, the rest of the higher education world has been catching up, with a number of different takes on how to recognize and reward research that has a demonstrable benefit. All this means that we are yet to see the full ‘impact of impact,’ and certainly we at Cabells are looking carefully at what potential new metrics could aid this transition. Someone at the event said that bibliometrics should be “transparent, robust and reproducible,” and this sounds like a good guiding principle for whatever research is being evaluated.