Guest Post: A look at citation activity of predatory marketing journals

This week we are pleased to feature a guest post from Dr. Salim Moussa, Assistant Professor of Marketing at ISEAH at the University of Gafsa in Tunisia. Dr. Moussa has recently published insightful research on the impact predatory journals have had on the discipline of marketing and, together with Cabells’ Simon Linacre, has some cautionary words for his fellow researchers in that area.

Academic journals are important to marketing scholars for two main reasons: (a) journals are the primary medium through which they transmit/receive scholarly knowledge; and (b) tenure, promotion, and grant decisions depend mostly on the journals in which they have published. Selecting the right journal to which one would like to submit a manuscript is thus a crucial decision. Furthermore, the overabundance of academic marketing journals, and the increasing "Publish or Perish" pressure, makes this decision even more difficult.

The "market" of marketing journals is extremely broad, with Cabells' Journalytics indexing 965 publication venues that are associated with "marketing" in their aims and scope. While monitoring the market of marketing journals for the last ten years, I have noticed that a new type of journal has tapped into it: open access (OA) journals.

The first time I ever heard about OA journals was during a dinner at an international marketing conference held in April 2015 in my country, Tunisia. Many of the colleagues at the dinner table were enthusiastic about having secured publications in a "new" marketing journal published by "IBIMA". Back in my hometown (Gafsa), I took a quick look at IBIMA Publishing's website. The thing that I remember most from that visit is that IBIMA's website looked odd to me. A few years later, while conducting some research on marketing journals, I noticed some puzzling results for a particular journal. Investigating that journal, I came to realize that a scam journal was brandjacking the identity of the flagship journal of the UK-based Academy of Marketing, the Journal of Marketing Management.

Undertaking this research, terms such as "Predatory publishers", "Beall's List", and "Think, Check, Submit" were new discoveries for me. This was also the trigger point of a painful yet insightful research experience that lasted an entire year (from May 2019 to May 2020).

Beall's list was no longer available (it was shut down in January 2017), and I had no access to Cabells' Predatory Reports. Freely available lists were either outdated or too specialized (mainly Science, Technology, and Medicine) to be useful. So, I searched for journals with titles identical or confusingly similar to those of well-known, prestigious, non-predatory marketing journals. Using this procedure, I identified 12 journals and then visited each journal's website to collect information about both the publisher and the journal; that is, whether the journal is OA or not, its article processing charges, whether it has an editor-in-chief, the names of its review board members and their affiliations (if any), its ISSNs, etc. I even emailed an eminent marketing scholar whose name I was stunned to see on the editorial board of a suspicious journal.
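To illustrate how this title-screening step might be automated, here is a minimal sketch in Python using only the standard library. The journal titles and the 0.75 similarity cut-off are illustrative assumptions, not the actual data or method used in the study.

```python
# Minimal sketch: flag journal titles that are confusingly similar to
# well-known legitimate titles. Titles and threshold are illustrative only.
from difflib import SequenceMatcher

# Well-known, legitimate marketing journals (examples, not a complete list)
legitimate = [
    "Journal of Marketing",
    "Journal of Marketing Management",
    "Journal of Marketing Research",
]

# Titles encountered while scanning publisher websites (hypothetical)
candidates = [
    "British Journal of Marketing Studies",
    "Journal of Management and Marketing Research",
]

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two titles, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.75  # arbitrary cut-off; tune against known confusing pairs

for cand in candidates:
    best = max(legitimate, key=lambda t: similarity(cand, t))
    score = similarity(cand, best)
    if score >= THRESHOLD:
        print(f"SUSPICIOUS: {cand!r} resembles {best!r} (ratio {score:.2f})")
```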

With one journal discarded, I had a list of 11 suspicious journals (Journal A to Journal K).

Having identified the 11 publishers of these 11 journals, I then consulted three freely available and up-to-date lists of predatory publishers: the Dolos List, the Kscien List, and the Stop Predatory Journals List. The aim of consulting these lists was to check whether I was right or wrong in qualifying these publishers as predatory. The verdict was unequivocal: each of the 11 publishers was listed in all three of them. These three lists, however, provided no reasons for the inclusion of a particular publisher or journal.

To double-check the list, I used the Directory of Open Access Journals, which is a community-curated online directory that indexes and provides access to high-quality, OA, peer-reviewed journals. None of the 11 journals were indexed in it. To triple-check the list, I used both the 2019 Journal Quality List of the Australian Business Deans Council and the 2018 Academic Journal Guide by the Chartered Association of Business Schools. None of the 11 journals were ranked in these two lists either.
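For readers wanting to replicate this kind of cross-check programmatically, DOAJ also exposes a public search API. The sketch below is a rough illustration: the endpoint and the "total" response field follow DOAJ's API documentation as I understand it, and the ISSNs are placeholders, so verify both against https://doaj.org/api/docs before relying on the result.

```python
# Rough sketch: check whether DOAJ indexes a journal with a given ISSN.
# Endpoint and response shape are assumptions based on DOAJ's public API
# docs (https://doaj.org/api/docs); verify before relying on this.
import json
import urllib.parse
import urllib.request

def in_doaj(issn: str) -> bool:
    """Return True if DOAJ's journal search finds this ISSN."""
    query = urllib.parse.quote(f"issn:{issn}")
    url = f"https://doaj.org/api/search/journals/{query}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data.get("total", 0) > 0

# Placeholder ISSNs standing in for the 11 suspicious journals
for issn in ["1234-5678", "2345-6789"]:
    print(issn, "indexed" if in_doaj(issn) else "NOT indexed", "in DOAJ")
```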

To be brief, that year of endeavor resulted in a paper I submitted to the prestigious academic journal Scientometrics, published by Springer Nature; the paper was accepted and published online in late October 2020. In it, I reported the findings of a study that examined the extent of citations received by articles published in ten predatory marketing journals (one of the 11 journals under scrutiny was an "empty journal"; that is, one with no archives). The results indicated that some of these journals received quite a few citations, with a median of 490 citations per journal and one journal receiving 6,296 citations (see also the Case Study below).

I entitled the article "Citation contagion: A citation analysis of selected predatory marketing journals." Some people may not like the framing in terms of "contagion" and "contamination" (especially in these COVID times), but I wanted the title to be striking enough to attract readership. Those who read the article may see it as a call for marketing researchers to "not submit their (possibly interesting) knowledge products to any journal before checking that the publication outlet they are submitting to is a non-predatory journal." Assuming that the number of citations an article receives signals its quality, the findings of my study indicate that some of the articles published in these predatory journals deserved better publication venues. I believe that most of the authors of these articles were well-intentioned and did not know that the journals they were submitting to were predatory.

A few months earlier, having no access to Cabells' databases, I had read each of the posts on their blog trying to identify marketing journals indexed in Predatory Reports. Together with Cabells, our message to the marketing research community is that 10 of the 11 journals I investigated were already listed in (or under review for inclusion in) Predatory Reports. I believe my study has revealed only the tip of the iceberg. Predatory Reports now indexes 140 journals related to the subject of marketing (roughly 1% of the total number of journals listed in Predatory Reports). Before submitting your papers to an OA marketing journal, you can use Predatory Reports to verify that it is legitimate.

Case Study

The study completed by Dr. Moussa provides an excellent primer on how to research and identify predatory journals (writes Simon Linacre). As such, it is instructive to look at one of the journals highlighted in Dr. Moussa’s article in more detail.

Dr. Moussa rightly suspected that the British Journal of Marketing Studies looked suspicious because of its familiar-sounding title. This is a well-worn strategy used by predatory publishers to deceive authors who are not familiar with the original journal. In this case, the British Journal of Marketing Studies sounds similar to a number of legitimate journals in the discipline.

As Dr. Moussa also points out, a questionable journal's website will often fail to stand up to a critical eye. For example, the picture below shows the "offices" of BJMS: a small terraced house in Southern England, which seems an unlikely location for an international publishing house. This journal's website also contains a number of other tells that, while not individually proof of predatory practices, certainly provide indicators: prominent phone numbers, reference to an 'Impact Factor' (not from Clarivate), fake claims of indexation in databases (e.g., DOAJ), no editor contact details, and/or a fake editor identity.

What is really interesting about Dr. Moussa's piece is his investigation of citation activity. We can see from the data below for 'Journal I' (the British Journal of Marketing Studies) that both the total citations and the most citations received by a single article are significant; they represent what is known as 'citation leakage', where citations flow to and from predatory journals. As articles in these journals are unlikely to have had any peer review, publication ethics checks or proofing, their content is unreliable and skews citation data for reputable research and journals.

  • Predatory journal: Journal I (BJMS)
  • Total number of citations received: 1,331
  • Number of citations received by the most cited article: 99
  • The most cited article was published in: 2014
  • Number of citations received from SSCI-indexed journals: 3
  • Number of citations received from FT50 listed journals: 0
Predatory Reports entry for BJMS

It is a familiar refrain from The Source, but it bears repeating: as an author you should do due diligence on where you publish your work and 'research your research'. Using your skills as a researcher to investigate where you publish, and not just what you publish, will save a huge amount of pain in the future, both in avoiding the bad journals and in choosing the good ones.

Cabells and Inera present free webinar: Flagging Predatory Journals to Fight “Citation Contamination”

Cabells and Inera are excited to co-sponsor the free on-demand webinar “Flagging Predatory Journals to Fight ‘Citation Contamination'” now available to stream via SSP OnDemand. Originally designed as a sponsored session for the 2020 SSP Annual Meeting, this webinar is presented by Kathleen Berryman of Cabells and Liz Blake of Inera, with assistance from Bruce Rosenblum and Sylvia Izzo Hunter, also from Inera.

The webinar outlines an innovative collaborative solution to the problem of “citation contamination”—citations to content published in predatory journals with a variety of bad publication practices, such as invented editorial boards and lack of peer review, hiding in plain sight in authors’ bibliographies.

DON’T MISS IT! On Thursday, November 12, 2020, at 11:00 am Eastern / 8:00 am Pacific, Kathleen and Liz will host a live screening with real-time chat followed by a Q&A!

For relevant background reading on this topic, we recommend these Scholarly Kitchen posts:

How do you know you can trust a journal?

As many readers know, this week is Peer Review Week, the annual opportunity for those involved in scholarly communication and research to celebrate and learn about all aspects of peer review. As part of this conversation, Simon Linacre reflects on this year’s theme of ‘Trust in Peer Review’ in terms of the important role of peer review in the validation of scholarship, and dangers of predatory behaviour in its absence.


I was asked to deliver a webinar recently to a community of scholars in Eastern Europe and, as always with webinars, I was very worried about the Q&A section at the end. When you deliver a talk in person, you can tell by looking at the crowd what is likely to happen at the end of the presentation and can prepare yourself. A quiet group of people means you may have to ask yourself some pretty tough questions, as no one will put their hand up at the end to ask you anything; a rowdy crowd is likely to throw anything and everything at you. With a webinar, there are no cues, and as such, it can be particularly nerve-shredding.

With the webinar in question, I waited a while for a question and was starting to prepare my quiet crowd response, when a single question popped up in the chat box:

How do you know you can trust a journal?

As with all the best questions, this floored me for a while. How do you know? The usual things flashed across my mind: reputation, whether it has published known scholars in its field, whether it is indexed by Cabells or other databases, etc. But suddenly the word trust felt a lot more personal than a simple box-ticking exercise to confirm a journal's standing. Such checks may confirm that a journal is trustworthy, but is that the same as the feeling an individual has when they really trust something or someone?

The issue of trust is often the unsaid part of the global debates that are raging currently, whether it is responses to the coronavirus epidemic, climate change or democracy. Politicians, as always, want the people to trust them; but increasingly their actions seem to be making that trust harder and harder. As I write, the UK put its two top scientists in front of the cameras to give a grave warning about COVID-19 and a second wave of cases. The fact there was no senior politician to join them was highly symbolic.

It is with this background that the choice of the theme Trust in Peer Review is an appropriate one for Peer Review Week (full disclosure: I have recently joined one of the PRW committees to support the initiative). There is a huge groundswell of support from publishers, editors and academics both for the effectiveness of peer review and for the unsung heroes who do the job for little recognition or reward; the absence of either would have profound implications for research and society as a whole.

This brings me to the answer to the question posed above, which is to ask the opposite: how do you know when you cannot trust a journal? This is easier to answer, as you can point to the absence of all those characteristics and behaviours you would want in a journal. We see on a daily basis, through our work on Predatory Reports, how the absence of crucial aspects of a journal's workings can cause huge problems for authors: no listed editor, a fake editorial board, a borrowed ISSN, a hijacked journal identity, a made-up impact factor, and – above all – false promises of a robust peer review process. Trust in peer review may require some research on the part of the author in terms of checking the background of the journal, its publisher and its editors, and it may require you to contact the editor, editorial board members or published authors to get personal advice on publishing in that journal. But doing that work in the first place and receiving personal recommendations will build trust in peer review for any authors who have doubts – and collectively for all members of the academic community.

Special report: Assessing journal quality and legitimacy

Earlier this year Cabells engaged CIBER Research (http://ciber-research.eu/) to support its product and marketing development work. Today, in collaboration with CIBER, Simon Linacre looks at the findings and implications for scholarly communications globally.


In recent months the UK-based publishing research body CIBER has been working with Cabells to better understand the academic publishing environment both specifically in terms of Medical research publications, and more broadly with regard to the continuing problems posed by predatory journals. While the research was commissioned privately by Cabells, it was always with the understanding that much of the findings could be shared openly to enable a better understanding of these two key areas.

The report, Assessing Journal Quality and Legitimacy: An Investigation into the Experience and Views of Researchers and Intermediaries – with special reference to the Health Sector and Predatory Publishing, has been shared today on CIBER's website, and the following briefly summarizes the key findings from six months' worth of research:

  • The team at CIBER Research was asked to investigate how researchers in the health domain went about selecting journals to publish their papers, what tools they used to help them, and what their perceptions of new scholarly communications trends were, especially in regard to predatory journals. Through a mixture of questionnaire surveys and qualitative interviews with over 500 researchers and ‘intermediaries’ (i.e. librarians and research managers), research pointed to a high degree of self-sufficiency among researchers regarding journal selection
  • While researchers tended to use tools such as information databases to aid their decision-making, intermediaries focused on sharing their own experiences and providing education and training solutions to researchers. Overall, it was notable how much of a mismatch there was between what researchers said and what intermediaries did or believed
  • So-called 'whitelists' were common at national and institutional levels, as was the emergence of 'greylists' of journals to be wary of; however, there seemed to be no list of recommended journals in Medical research areas
  • In China, alongside its huge growth in research and publication output, there are concerns that predatory publishing could have an impact, with one participant stating: "More attention is being paid to the potential for predatory publishing and this includes the emergence of Blacklists and Whitelists, which are government-sponsored. However, there is not just one there are many 10 or 20 or 50 different (white)lists in place"
  • In India, the explosion of predatory publishing is perhaps the consequence of educational and research expansion and the absence of infrastructure capacity to deal with it. An additional factor could be the lack of significant impetus at a local level to establish new journals: unlike in countries such as Brazil, universities in India are not legally able to establish new titles themselves. As a result, an immature market has attempted to develop new journals to satisfy scholars' needs, which in turn has led to the rise of predatory publishing in the country
  • Predatory publishing practices seemed to be having an increased impact on mainstream publishing activities globally, with a grave risk of "potentially polluting repositories and citation indexes but there seems to have been little follow through by anyone." National bodies, publishers and funders have failed to follow through on this threat, which may have diverted funds away from legitimate publications to those engaged in illicit activities
  • Overall, predatory publishing is being driven by publish-or-perish scenarios, particularly among early career researchers (ECRs), with authors unaware either of predatory publishers in general or of the identity of a specific journal. However, cynical manipulation of such journals as outlets for publications is also suspected.

 

Survey chart: 'Why do you think researchers publish in predatory journals?'


CIBER Research is an independent group of senior academic researchers from around the world, who specialize in scholarly communications and publish widely on the topic. Their most recent projects have included studies of early career researchers, digital libraries, academic reputation and trustworthiness.

 

Right path, wrong journey

In his latest post, Simon Linacre reviews the book The Business of Scholarly Publishing: Managing in Turbulent Times, by Albert N. Greco, Professor of Marketing at Fordham University's Gabelli School of Business, recently published by Oxford University Press.


Given the current backdrop for all industries, one might say that scholarly communications is in more turmoil than most. With the threat to the commercial model of subscriptions posed by increasing use of Open Access options by authors, as well as the depressed book market and recent closures of university presses, the last thing anyone needs in this particular industry is the increased uncertainty brought about by the coronavirus epidemic.

As such, a book looking back at where the scholarly communications industry has come from and an appraisal of where it is now and how it should pivot to remain relevant in the future would seem like a worthwhile enterprise. Just such a book, The Business of Scholarly Publishing: Managing in Turbulent Times, has recently been written by Albert N. Greco, a U.S. professor of marketing who aims to “turn a critical eye to the product, price, placement, promotion, and costs of scholarly books and journals with a primary emphasis on the trajectory over the last ten years.”

However, in addition to this critical eye, the book needs a more practical look at how the industry has been shaken up in the last 25 years or so. It is difficult to imagine that either an experienced academic librarian or an industry professional advised on the direction of the book, as it has a real blind spot when it comes to some of the major issues impacting the industry today.

The first of these historical misses is a failure to mention Robert Maxwell and his acquisition of Pergamon Press in the early 1950s. Over the next two decades the books and journals publisher saw huge increases in revenues and volumes of titles, establishing a business model of rapid growth using high year-on-year price increases for must-have titles that many argue persists to this day.

The second blind spot is around Open Access (OA). The subject is covered, although not in the detail one would like given its importance to the journal publishing industry in 2020. While one cannot blame the author for missing the still-evolving story around Plan S, Big Deal cancellations and other OA-related developments, one might expect more background on exactly how OA started life, what the first OA journals were, the various declarations around the turn of the millennium, and how technology enabled OA to become the dominant paradigm in some subject areas.

This misstep may be due to the overall slight bias towards books in the text, and indeed the emerging issues around OA books are well covered. There are also extremely comprehensive deep dives into publishing finances and trends since 2000, which mean the book provides a worthy companion to any academic study of publishing from 2000 to 2016.

And this brings us to the third missing element, which is the lack of appreciation of new entrants and new forms in scholarly publishing. For example, there is no mention of F1000 and post-publication peer review, little on the establishment of preprint servers or institutional repositories, and nothing on OA-only publishers such as Frontiers and Hindawi.

As a result, the book is simply a (very) academic study of some publishing trends in the 2000s and 2010s, and like much academic research is both redundant and irrelevant for those practicing in the industry. This is typified in a promising final chapter that seeks to offer "new business strategies in scholarly publishing" by suggesting that short scholarly books, data publishing and library publishing programs should be examined, without acknowledging that all of these already exist.


The Business of Scholarly Publishing: Managing in Turbulent Times, by Albert N. Greco (published April 28, 2020, OUP USA), ISBN: 978-0190626235.

Cabells’ top 7 palpable points about predatory publishing practices

In his latest post, Simon Linacre looks at some new stats collated from the Cabells Predatory Reports database that should help inform and educate researchers, better equipping them to evade the clutches of predatory journals.


In recent weeks Cabells has been delighted to work with both The Economist and Nature Index to highlight some of the major issues for scholarly communication that predatory publishing practices represent. As part of the research for both pieces, a number of facts have been uncovered that not only help us understand the issues inherent in this malpractice much better, but should also point researchers away from some of the sadly typical behaviors we have come to expect.

So, for your perusing pleasure, here are Cabells’ Top 7 Palpable Points about Predatory Publishing Practices:

  1. There are now 13,500 predatory journals listed in the Predatory Reports database, which is currently growing by approximately 2,000 journals a year
  2. Over 4,300 journals claim to publish articles in the medical field (this includes multidisciplinary journals) – that’s a third of the journals in Predatory Reports. By discipline, medical and biological sciences have many more predatory journals than other disciplines
  3. Almost 700 journals in Predatory Reports have titles starting with 'British' (5.2%), while just 50 in the Journalytics database do (0.4%). Predatory journals often call themselves American, British or European to appear well established and legitimate, when in reality relatively few good-quality journals have countries or regions in their titles (see the arithmetic check after this list)
  4. There are over 5,300 journals listed in Predatory Reports with an ISSN (40%), although many of these are copied, faked, or simply made up. Having an ISSN is not a guarantee of legitimacy for journals
  5. Around 41% of Predatory Reports journals are based in the US, purport to be from the US, or are suspected of being from the US, based on information on journal websites and Cabells’ investigations. This is the highest count for any country, but only a fraction will really have their base in North America
  6. The average predatory journal publishes about 50 articles a year according to recent research from Bo-Christer Björk of the Hanken School of Economics in Helsinki, less than half the output of a legitimate title. Furthermore, around 60% of papers in such journals receive no future citations, compared with 10% of those in reliable ones
  7. Finally, it is worth noting that while we are in the throes of the coronavirus pandemic, there are 41 journals listed in Predatory Reports specifically focused on epidemiology (0.3%) and another 35 on virology (0.6% of the database in total). There could be further growth over the next 12 months, so researchers in these areas should be particularly careful now about where they submit their papers.
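For those who like to see the arithmetic behind such percentages, the sketch below reproduces the shares quoted in points 3 and 7 from the approximate database counts given in this post; the totals are rounded figures, so the printed shares are back-of-the-envelope rather than official statistics.

```python
# Back-of-the-envelope check of the shares quoted in points 3 and 7,
# using the approximate database counts given in this post.
PREDATORY_TOTAL = 13_500  # journals listed in Predatory Reports (point 1)

british_titles = 700  # Predatory Reports titles starting with 'British'
epidemiology = 41     # journals focused on epidemiology
virology = 35         # journals focused on virology

print(f"'British' share: {british_titles / PREDATORY_TOTAL:.1%}")   # ~5.2%
print(f"Epidemiology share: {epidemiology / PREDATORY_TOTAL:.1%}")  # ~0.3%
combined = (epidemiology + virology) / PREDATORY_TOTAL
print(f"Epidemiology + virology share: {combined:.1%}")             # ~0.6%
```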

Gray area

While Cabells spends much of its time assessing journals for inclusion in its Verified or Predatory lists, probably an even greater number of titles resides outside the parameters of those two containers. In his latest blog, Simon Linacre opens up a discussion of what might be termed 'gray journals' and what their profiles could look like.


 

The concept of 'gray literature' to describe a variety of information produced outside traditional publishing channels has been around since at least the 1970s, and has been defined as "information produced on all levels of government, academia, business and industry in electronic and print formats not controlled by commercial publishing (i.e. where publishing is not the primary activity of the producing body)"* (1997; 2004). The definition plays an important role in both characterizing and categorizing information outside the usual forms of academic content, and in a way is the opposite of the chaos and murkiness the term 'gray' perhaps suggests.

The same could not be said, however, if we were to apply that term to the journals that inhabit the world outside the two main databases Cabells curates. Its Journal Whitelist indexes over 11,000 journals that satisfy its criteria for assessing whether a journal is a reputable outlet for publication. As such, it is a list of recommended journals for any academic to entrust their research to. The opposite is true of the Journal Blacklist, a list of over 13,000 journals that NO ONE should recommend publication in, given that they have 'met' several of Cabells' criteria.

So, after these two cohorts of journals, what's left over? This has always been an intriguing question, and one explored intelligently by Kyle Siler in a recent piece for the LSE Impact Blog. There is no accurate data on just how many journals there are in existence; like grains of sand, they are created and disappear before they can all be counted. Scopus currently indexes well over 30,000 journals, so a conservative estimate might be that there are over 50,000 journals currently active, with 10,000 titles or more not indexed in any recognized database. Using Cabells' experience of assessing journals for both Whitelist and Blacklist inclusion, here are some profiles that might help researchers spot which option might be best for them:

  • The Not-for-Academics Academic Journal: Practitioner journals often fall foul of indexers because they are not designed to be used and cited in the same way as academic journals, despite the fact that they look like them. As a result, journals of this kind are often overlooked due to a lack of citations or a non-academic style, even though they can include some good quality content
  • The So-Bad-it’s-Bad Journal: Just awful in every way – poor editing, poor language, uninteresting research and research replicated from elsewhere. However, it is honest and peer reviewed, so provides a legitimate outlet of sorts
  • The Niche-of-a-Niche Journal: Probably focusing on a scientific area you have never heard of, this journal drills down into a subject area and keeps on drilling so that only a handful of people in the world have the foggiest what it’s about. But if you are one of the lucky ones, it’s awesome. Just don’t expect citation awards any time soon
  • The Up-and-Coming Journal: Many indexers prefer to wait a year or two before including a journal in their databases, as citations and other metrics can start to be used to assess quality and consistent publication. In the early years, quality can vary widely, but reading the output so far is at least feasible to aid the publishing decision
  • The Worthy Amateur Journal: Often based in a non-research institution or little-known association, these journals have the right idea but publish haphazardly, have small editorial boards and little financial support, producing unattractive-looking journals that may nevertheless hide some worthy articles.

Of course, when you arrive at the publication decision and happen upon a candidate journal that is not indexed, then, as we said last week, simply 'research your research': check the journal against the Blacklist and its criteria to detect any predatory characteristics, research the editor and the journal's advisory board for their publishing records, and seek out the opinions of others before sending your precious article off into the gray ether.


*Third International Conference on Grey Literature in 1997 (ICGL Luxembourg definition, 1997; expanded in New York, 2004)


***LAST CHANCE!***

If you haven’t already completed our survey, there is still time to provide your feedback. Cabells is undertaking a review of the current branding for ‘The Journal Whitelist’ and ‘The Journal Blacklist’. As part of this process, we’d like to gather feedback from the research community to understand how you view these products, and which of the proposed brand names you prefer.

Our short survey should take no more than ten minutes to complete, and can be taken here.

As thanks for your time, you’ll have the option to enter into a draw to win one of four Amazon gift vouchers worth $25 (or your local equivalent). More information is available in the survey.

Many thanks in advance for your valuable feedback!

Doing your homework…and then some

Researchers have always known the value of doing their homework – they are probably the best there is at leaving no stone unturned. But that has to apply to the work itself. Simon Linacre looks at the importance of ‘researching your research’ and using the right sources and resources.


Depending on whether you are a glass-half-full or glass-half-empty kind of person, it is either a great time for promoting the value of scientific research, or science is seeing a crisis in confidence. On the plus side, the value placed on research to lead us out of the COVID-19 pandemic has been substantial, and rarely have scientists been so much to the fore on such an important global issue. On the other hand, there have been uprisings against lockdowns in defiance of science, and numerous cases of fake science related to the coronavirus. Whether it is COVID-19, Brexit, or global warming, we seem to be in an age of wicked problems and polarising opinions on a global scale.

If we assume that our glass is more full than empty in these contrarian times, and try to maintain a sense of optimism, then we should be celebrating researchers and the contribution they make. But that contribution has to be validated in scientific terms, and its publication validated in such a way that users can trust in what it says. For the first part, there has been a good deal of discussion in academic circles and even in the press about the nature of preprints, and how users have to take care to understand that they may not yet have been peer reviewed, so any conclusions should not yet be taken as read.

For the second part, however, there is a concern that researchers in a hurry to publish may run afoul of predatory publishers, or simply publish their articles in the wrong way, in the wrong journal, for the wrong reasons. This was highlighted to me when a Cabells customer alerted us to a new website called Academic Accelerator. I will leave people to make up their own minds as to the value of the site; however, a quick test using academic research on accounting (an area where I managed journals for over a decade, so I know it well) showed that:

  • Attempting to use the ‘Journal Writer’ function for an accounting article suggested published examples from STM journals
  • Trying to use the ‘Journal Matcher’ function for an accounting article again only recommended half a dozen STM journals as a suitable destination for my research
  • Data for individual journals seems to have been crowdsourced from users, and did not match the actual data for many journals in the discipline.

The need for researchers to publish as quickly as possible has perhaps never been greater, and the tools and options for them to do so have arguably never been as open. However, with this comes a gap in the market that many operators may choose to exploit, and at this point, the advice for researchers is the same as ever. Always research your research – know what you are publishing and where you are publishing it, and what the impact will be both in scholarly terms and in real-world terms. In an era where working from home is the norm, there is no excuse for researchers not to do their homework on what they publish.



Simon Linacre

Unintended consequences: how will COVID-19 shape the future of research?

What will happen to global research output during lockdowns as a result of the coronavirus? Simon Linacre looks at how the effect in different countries and disciplines could shape the future of research and scholarly publications.


We all have a cabin fever story now after many countries have entered into varying states of lockdown. Mine is how the little things have lifted what has been quite an oppressive mood – the smell of buns baking in the oven; lying in bed that little bit longer in a morning; noticing the newly born lambs that have suddenly appeared in nearby fields. All of these would be missed during the usual helter-skelter days we experience during the week. But things are very far from usual in these coronavirus-infected days. And any distraction is a welcome one.

On a wider scale, the jury is still very much out as to how researchers are dealing with the situation, let alone how things will be affected in the future. What we do know is that in those developed countries most impacted by the virus, universities have been closed down, students sent home and labs mothballed. In some countries such as Italy there are fears important research work could be lost in the shutdown, while in the US there is concern for the welfare of those people – and animals – who are currently in the middle of clinical trials. Overall, everyone hopes that the specific research into the coronavirus yields some quick results.

On the flip side, however, for those researchers not confined to labs or field research, this period could accelerate their work. For those in social science or humanities freed from the commute, teaching commitments and office politics of daily academic life, the additional time will no doubt be put to good use. More time to set up surveys; more time for reading; more time for writing papers. Increased research output is perhaps inevitable in those areas where academics are not tied to labs or other physical experiments.

These two countervailing factors may cancel each other out, or one may prevail over the other. As such, the scholarly publishing community does not yet know what to expect down the line. In the short term, it has been focused on making related content freely accessible (such as The Lancet's freely available coronavirus resource). However, what we may see is greater pressure for research in potentially globally important areas to be made open access at the source, given how well researchers and networks have seemed to work together during the short time the virus has been at large.

Again, unintended consequences could be one of the key legacies of the crisis once the virus has died down. Organizations concerned about how their people can work from home will no doubt have their fears allayed, while the positive environmental impact of less travelling will be difficult to give up. For publishers and scholars, understanding how their research could have an impact when the world is in crisis may change their research aims forever.

The future of research evaluation

Following last week's guest post from Rick Anderson on the risks of predatory journals, we turn our attention this week to legitimate journals and the wider issue of evaluating scholars based on their publications. With this in mind, Simon Linacre recommends a broad-based approach that keeps the goal of such evaluations permanently front and center.


This post was meant to be ‘coming to you LIVE from London Book Fair’, but as you may know, this event has been canceled, like so many other conferences and other public gatherings in the wake of the coronavirus outbreak. While it is sad to miss the LBF event, meetings will take place virtually or in other places, and it is to be hoped the organizers can bring it back bigger and better than ever in 2021.

Some events are still going ahead, however, in the UK, and it was my pleasure to attend the LIS-Bibliometrics Conference at the University of Leeds last week to hear the latest thinking on journal metrics and performance management for universities. The day-long event was themed ‘The Future of Research Evaluation’, and it included both longer talks from key people in the industry, and shorter ‘lightning talks’ from those implementing evaluation systems or researching their effectiveness in different ways.

There was a good deal of debate, both on the floor and on Twitter (see #LisBib20 to get a flavor), with perhaps the most interest in speaker Dr. Stephen Hill, Director of Research at Research England and chair of the steering group for the 2021 Research Excellence Framework (REF) in the UK. Those of us hoping for crumbs from his table, in the shape of a steer for the next REF, were sadly disappointed, as he was giving nothing away. However, he did say that he saw four current trends shaping the future of research evaluation:

  • Outputs: increasingly they will be diverse, include things like software code, be more open, more collaborative, more granular and potentially interactive rather than ‘finished’
  • Insight: different ways of understanding may come into play, such as indices measuring interdisciplinarity
  • Culture: the context of research and how it is received in different communities could become explored much more
  • AI: artificial intelligence will become a bigger player both in terms of the research itself and how the research is analyzed, e.g. the Unsilo tools or so-called ‘robot reviewers’ that can remove any reviewer bias.

Rather revealingly, Dr. Hill suggested a fifth trend might be societal impact, despite the fact that such impact has been one of the defining traits of both the current and previous REFs. Perhaps the full picture has yet to be understood regarding impact, and there is some suspicion that many academics have yet to buy in to the idea at all. Indeed, one of the takeaways from the day was that there was little input into the discussion from academics themselves, and one wonders what they might have contributed to a discussion about the future of research evaluation; it is their research being evaluated, after all.

There was also a degree of distrust among the librarians present towards publishers, and one delegate poll should be of particular interest to publishers, as it showed what those present thought were the future threats and challenges to research evaluation. The top three threats were identified as publishers monopolizing the area, commercial ownership of evaluation data, and vendor lock-in: a result which led to a lively debate around innovation and how solutions could be developed if no commercial incentive was in place.

It could be argued that while the UK has taken the lead on impact and been busy experimenting with the concept, the rest of the higher education world has been catching up with a number of different takes on how to recognize and reward research that has a demonstrable benefit. All this means that we are yet to see the full 'impact of impact', and certainly we at Cabells are looking carefully at which potential new metrics could aid this transition. Someone at the event said that bibliometrics should be "transparent, robust and reproducible," and this sounds like a good guiding principle for whatever research is being evaluated.