What to know about ISSNs

There are many ways to skin a cat, and many ways to infer a journal could be predatory. In his latest blog post, Simon Linacre looks at the role the International Standard Serial Number, or ISSN, can play in the production of predatory journals. 

For many reasons, the year 2020 will be remembered for the sheer volume of numbers that have invaded our consciousness. Some of these are big numbers – 80 million votes for Joe Biden, four million cases of COVID in the US in 2020 – and some of these will be small, such as the number of countries (1) leaving the EU at the end of the year. Wherever we look, we see numbers of varying degrees of import at seemingly every turn.

While numbers were previously regarded as gospel, data has joined news and UFO sightings (seemingly one of the few phenomena NOT to increase in 2020) as something to be treated with suspicion or suspected of being faked in some way. And one piece of data trusted by many authors in determining the validity or otherwise of a journal is the International Standard Serial Number, or ISSN.

An ISSN can be obtained relatively easily via either a national or international office as long as a journal can be identified as an existing publication. As the ISSN’s own website states, an ISSN is “a digital code without any intrinsic meaning” and does not include any information about the contents of that publication. Perhaps most importantly, an ISSN “does not guarantee the quality or the validity of the contents”. This perhaps goes some way to explain why predatory journals can often include an ISSN on their websites. Indeed, more than 40% of the journals included in Cabells’ Predatory Reports database include an ISSN in their journal information.

But sometimes predatory publishers can’t obtain an ISSN – or at least can’t be bothered to – and will fake the ISSN code. Of the 6,000 or so journals with an ISSN in Predatory Reports, 288 – nearly 5% – have a fake ISSN, and this is included as one of the database’s behavioural indicators to help identify predatory activity. It is instructive to look at these fake ISSNs to see the lengths predatory publishers will go to in order to achieve some semblance of credibility on their websites.

For some journals, it is obvious that the ISSN is fake because it looks wrong. In the example above for the Journal of Advanced Statistics and Probability, the familiar format of two groups of four digits separated by a hyphen is missing, replaced by nine digits and a forward slash, which is incorrect.

For other journals, such as the Global Journal of Nuclear Medicine and Biology below, the format is correct, but a search using the ISSN portal brings up no results, so the ISSN code is simply made up.
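These two failure modes – a malformed code, and a well-formed but unregistered one – suggest a simple first-pass screen that anyone can automate. The sketch below is a minimal Python illustration (the function name is my own, not an official ISSN tool); it checks the format and the ISO 3297 check digit. Note that passing both tests only means an ISSN *could* be real – confirming that it is registered, and registered to the journal displaying it, still requires a search of the ISSN Portal.

```python
import re

# Format: four digits, a hyphen, three digits, then a digit or 'X' (ISO 3297).
ISSN_PATTERN = re.compile(r"^\d{4}-\d{3}[\dX]$")

def is_plausible_issn(issn: str) -> bool:
    """Return True if `issn` is well-formed and its check digit is valid.

    A True result does NOT mean the ISSN is registered or belongs to the
    journal displaying it; that requires a lookup on portal.issn.org.
    """
    if not ISSN_PATTERN.match(issn):
        return False  # e.g. nine digits and a slash fails here
    digits = issn.replace("-", "")
    # Weight the first seven digits 8 down to 2, sum, and reduce mod 11.
    total = sum(int(d) * w for d, w in zip(digits[:7], range(8, 1, -1)))
    expected = (11 - total % 11) % 11
    check = "X" if expected == 10 else str(expected)
    return digits[7] == check
```

A code in the nine-digits-and-a-slash style fails the format test immediately; a made-up but well-formed code passes both checks and can only be exposed by the portal lookup described above.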

More worrying are the few publications that have hijacked existing, legitimate journals and appropriated their identity, including the ISSN. In the example below, the Wulfenia Journal has had its identity hijacked, with the fake journal website pictured below.

If you compare it to the genuine journal shown below (the German homepage can be found here), you can see they list the same ISSN.

One can only imagine the chaos caused for a legitimate journal when its identity is hijacked, and this is just one part of wider concerns about the effects that sharing fake information has on society. As always, arming yourself with the right information – and taking a critical approach to any information directed your way – will help see you through the morass of misinformation we seem to be bombarded with in the online world.

Guest Post: A look at citation activity of predatory marketing journals

This week we are pleased to feature a guest post from Dr. Salim Moussa, Assistant Professor of Marketing at ISEAH at the University of Gafsa in Tunisia. Dr. Moussa has recently published insightful research on the impact predatory journals have had on the discipline of marketing and, together with Cabells’ Simon Linacre, has some cautionary words for his fellow researchers in that area.

Academic journals are important to marketing scholars for two main reasons: (a) journals are the primary medium through which they transmit/receive scholarly knowledge; and (b) tenure, promotion, and grant decisions depend mostly on the journals in which they have published. Selecting the right journal to which one would like to submit a manuscript is thus a crucial decision. Furthermore, the overabundance of academic marketing journals – and the increasing “Publish or Perish” pressure – makes this decision even more difficult.

The “market” of marketing journals is extremely broad, with Cabells’ Journalytics indexing 965 publication venues that are associated with “marketing” in their aims and scope. While monitoring the market of marketing journals for the last ten years, I have noticed that a new type of journal has tapped into it: the open access (OA) journal.

The first time I ever heard about OA journals was during a dinner at an international marketing conference held in April 2015 in my country, Tunisia. Many of the colleagues at the dinner table were enthusiastic about having secured publications in a “new” marketing journal published by “IBIMA”. Back in my hometown (Gafsa), I took a quick look at IBIMA Publishing’s website. The thing that I remember most from that visit is that IBIMA’s website looked odd to me. A few years later, while conducting some research on marketing journals, I noticed some puzzling results for a particular journal. Investigating the case of that journal, I came to realize that a scam journal was brandjacking the identity of the flagship journal of the UK-based Academy of Marketing, the Journal of Marketing Management.

When I undertook this research, terms such as “predatory publishers”, “Beall’s List”, and “Think. Check. Submit.” were new discoveries for me. This was also the trigger point of a painful yet insightful research experience that lasted an entire year (from May 2019 to May 2020).

Beall’s list was no longer available (it was shut down in January 2017), and I had no access to Cabells’ Predatory Reports. Freely available lists were either outdated or too specialized (mainly in science, technology, and medicine) to be useful. So, I searched for journals with titles identical or confusingly similar to those of well-known, prestigious, non-predatory marketing journals. Using this procedure, I identified 12 journals, then visited each of their websites to collect information about both the publisher and the journal: whether the journal was OA or not, its article processing charges, whether it had an editor-in-chief, the names of its review board members and their affiliations (if any), the journal’s ISSNs, etc. I even emailed an eminent marketing scholar because I was stunned to see his name included on the editorial board of a suspicious journal.
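This title-screening step – hunting for names identical or confusingly similar to established journals – can be roughly automated. The sketch below is purely illustrative (the function names and the 0.8 threshold are my own assumptions, not part of the study’s method) and uses Python’s standard difflib to score how alike two titles are:

```python
import difflib

def title_similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two journal titles, case-insensitive."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_lookalikes(candidate: str, legitimate: list[str],
                    threshold: float = 0.8) -> list[str]:
    """Return the legitimate titles the candidate is suspiciously close to."""
    return [t for t in legitimate if title_similarity(candidate, t) >= threshold]

# Example: two similarly named journals mentioned elsewhere in this post
# score well above the threshold, while an unrelated title does not.
hits = flag_lookalikes(
    "International Journal of Scientific and Research Publications",
    ["International Journal of Scientific Research",
     "Journal of Marketing Management"],
)
```

A high score is only a prompt for the manual checks described above (publisher, review board, ISSNs), not proof of predation: some legitimate journals also carry similar names.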

With one journal discarded, I had a list of 11 suspicious journals (Journal A to Journal K).

Having identified the 11 publishers of these 11 journals, I then consulted three freely available and up-to-date lists of predatory publishers: the Dolos List, the Kscien List, and the Stop Predatory Journals List. The aim of consulting these lists was to check whether I was wrong or right in qualifying these publishers as predatory. The verdict was unequivocal: each of the 11 publishers was listed in all three. These three lists, however, provided no reasons for the inclusion of any particular publisher or journal.

To double-check the list, I used the Directory of Open Access Journals, which is a community-curated online directory that indexes and provides access to high-quality, OA, peer-reviewed journals. None of the 11 journals were indexed in it. To triple-check the list, I used both the 2019 Journal Quality List of the Australian Business Deans Council and the 2018 Academic Journal Guide by the Chartered Association of Business Schools. None of the 11 journals were ranked in these two lists either.

In brief, that year of endeavor resulted in a paper I submitted to the prestigious academic journal Scientometrics, published by Springer Nature; the paper was accepted and published online in late October 2020. In it, I reported the findings of a study that examined the extent of citations received by articles published in ten predatory marketing journals (one of the 11 journals under scrutiny was an “empty journal”, that is, one with no archives). The results indicated that some of these journals received a substantial number of citations, with a median of 490 and one journal receiving 6,296 citations (see also the Case Study below).

I entitled the article “Citation contagion: A citation analysis of selected predatory marketing journals.” Some people may or may not like the framing in terms of “contagion” and “contamination” (especially in these COVID times), but I wanted the title to be striking enough to attract more readership. Those who read the article may see it as a call for marketing researchers to “not submit their (possibly interesting) knowledge products to any journal before checking that the publication outlet they are submitting to is a non-predatory journal.” Assuming that the number of citations an article receives signals its quality, the findings in my study indicate that some of the articles published in these predatory journals deserved better publication venues. I believe that most of the authors of these articles were well-intentioned and did not know that the journals they were submitting to were predatory. 

A few months earlier, having no access to Cabells’ databases, I had read each of the posts on their blog, trying to identify marketing journals indexed in Predatory Reports. Together with Cabells, our message to the marketing research community is that 10 of the 11 journals I investigated were already listed (or under review for inclusion) in Predatory Reports. I believe my study has revealed only the tip of the iceberg. Predatory Reports now indexes 140 journals related to the subject of marketing (about 1% of the total number of journals listed in Predatory Reports). Before submitting your papers to an OA marketing journal, you can use Predatory Reports to verify that it is legitimate.

Case Study

The study completed by Dr. Moussa provides an excellent primer on how to research and identify predatory journals (writes Simon Linacre). As such, it is instructive to look at one of the journals highlighted in Dr. Moussa’s article in more detail.

Dr. Moussa rightly suspected that the British Journal of Marketing Studies looked suspicious due to its familiar-sounding title. This is a well-worn strategy of predatory publishers, designed to deceive authors who have not familiarized themselves with the original journal. In this case, the British Journal of Marketing Studies sounds similar to a number of established journals in this subject discipline.

As Dr. Moussa also points out, a questionable journal’s website will often fail to stand up to a critical eye. For example, the picture below shows the “offices” of BJMS – a small terraced house in Southern England, which seems an unlikely location for an international publishing house. The journal’s website also contains a number of other tells that, while not individually conclusive, certainly provide indicators of predatory publishing: prominent phone numbers, reference to an ‘Impact Factor’ (not from Clarivate), fake claims of indexation in databases (e.g. DOAJ), no editor contact details, and/or a fake editor identity.

What is really interesting about Dr. Moussa’s piece is his investigation of citation activity. We can see from the data below for ‘Journal I’ (the British Journal of Marketing Studies) that both the total citations and the most citations received by a single article are significant, and they represent what is known as ‘citation leakage’, where citations flow to and from predatory journals. As articles in these journals are unlikely to have had any peer review, publication ethics checks or proofing, their content is unreliable and skews citation data for reputable research and journals.

  • Predatory journal: Journal I (BJMS)
  • Total number of citations received: 1,331
  • Number of citations received by the most cited article: 99
  • The most cited article was published in: 2014
  • Number of citations received from SSCI-indexed journals: 3
  • Number of citations received from FT50 listed journals: 0
Predatory Reports entry for BJMS

It is a familiar refrain from The Source, but it bears repeating: as an author you should do due diligence on where you publish your work and ‘research your research’. Applying your skills as a researcher to where you publish, and not just to what you publish, will save a huge amount of pain in the future, both in avoiding the bad journals and in choosing the good ones.

Open with purpose

This week is Open Access Week, which you will not have missed due to the slew of Twitter activity, press releases and thought pieces being published – unless you are an author, perhaps. In this week’s blog, Simon Linacre focuses on academic researchers who can often be overlooked by the OA conversation, despite the fact they should be the focus of the discussion.

The other day, I was talking to my 16-year-old son about university, as he has started to think about what he might study and where he might like to go (“Dunno” and “dunno” are currently his favourite subjects and destinations). In order to spark some interest in the thought of higher education, I told him about how great the freedom was to be away, the relaxed lifestyle, and the need to be responsible for your own actions, such as handing in your work on time, even if you had to pull an all-nighter.

“What do you mean ‘hand in your work’?”, he said.

“You know, put my essay in my tutor’s pigeon hole”, I said.

“Why didn’t you just email it? And what do pigeons have to do with it?”, he replied.

Yes, university in the early 90s was a very different beast from today’s, and I decided to leave pigeons out of the ensuing discussion. But it highlighted to me that, while a non-digital university experience is now just a crusty anecdote for students in education today, the transition from the 80s and 90s to the present state of affairs is lived experience for those teaching in today’s universities. In addition, many of the activities and habits that established themselves 20 to 30 years ago and beyond are still in existence, albeit adapted to new technology.

One of these activities that has changed, yet remained the same, is of course academic publishing. In the eyes of many people, publishing now differs markedly from what it was in the pre-internet 80s – physical vs digital, delayed vs instant, subscription vs open. But while remnants of the older forms of publishing survive in the shape of page numbers or journal issues, there are still shadows from the introduction of open access in the early 2000s. This was brought home to me in some recent webinars in Turkey, Ukraine and India (reported here), where the one common question about predatory journals was: “Are all open access journals predatory?”

To those of us who have worked in publishing, or to Western academics, this may seem a naïve question. But it is not. Open Access articles – that is, articles that are both free to read on the internet and free to re-use – are still relatively unknown to many academics around the world. In addition, being asked to pay money to publish is still not the norm – most journals listed by the Directory of Open Access Journals do not charge an Article Processing Charge (APC) – and publisher marketing communications are dominated by spam emails from predatory journals rather than press releases during Open Access Week. As such, while the dial may have moved appreciably in Europe and North America following initiatives such as Plan S and high-profile standoffs such as that between the University of California and Elsevier, the discussion about OA may not have been replicated elsewhere.

So, while there will be many interesting conversations about Open Access (and Delta Think has some fascinating data here), it is also important not to forget that many authors may be hearing about it for the first time, or previously may have only heard negative or misleading information. Thankfully, there are plenty of useful resources out there, such as this introduction from Charlesworth Author Services to help authors identify the right OA outlet for their research. And of course, authors should remember that most Open Access journals are not predatory – but to be on the safe side, they can check our Predatory Reports database or criteria to judge for themselves.

The RAS Commission for Counteracting the Falsification of Scientific Research

Predatory publishing is undoubtedly a global phenomenon, but with unique characteristics in different countries. In this week’s blog, Simon Linacre shares insight from Russia and a group of researchers keen to shine the spotlight on breaches in publication ethics from within their own country.

For many of us, the Summer months are usually filled with holidays, conferences and a less than serious news agenda. The so-called ‘silly season’ could reliably be identified by news stories of the cat-stuck-in-tree variety, signaling to us all that there was nothing going on and we could safely ignore the news and concentrate on more important things, such as what cocktail to order next from the poolside bar.

Not anymore.

It is hard to put a finger on it, but since around 2015 the Summer months seem to have been populated almost exclusively by epoch-making events from which it is impossible to escape. Whether it’s Brexit, COVID or any number of natural or man-made disasters, the news cycle almost seems to go up a gear rather than start to freewheel. And news stories in scholarly communications are no different. This summer saw a number of big stories, including one in Nature Index regarding alleged plagiarism and article publications in predatory journals by Russian university officials. Intrigued, I contacted the research group behind the investigation to learn more.

The group in question is the Commission for Counteracting the Falsification of Scientific Research at the Russian Academy of Sciences, and earlier this year it compiled what it claims is the first evidence of large-scale ‘translation plagiarism’ by Russian authors in English-language journals (“Predatory Journals at Scopus and WoS: Translation Plagiarism from Russian Sources” (2020), Commission for Counteracting the Falsification of Scientific Research, Russian Academy of Sciences, in collaboration with Anna A. Abalkina, Alexei S. Kassian, Larisa G. Melikhova). In total, the Commission said it had detected 94 predatory journals with 259 articles from Russian authors, many of which were plagiarised after being translated from Russian into English.

In addition, the study found that over 1,100 Russian authors had put their names to translated articles published in predatory journals. These included heads of departments at Russian universities, and three authors had over 100 publications each in predatory journals. The report (the original can be found here) was authored by some of the founders of Dissernet, a Russian research project which is developing a database of mainly Russian journals which publish plagiarised articles or violate other criteria of publication ethics. They are concerned that the existence of paper mills in Russia that spam authors and offer easy publication in journals is leading to wide-ranging breaches of publication ethics, supported by inflated metrics that appear to lend them some legitimacy. Cabells hopes to be able to support the work of Dissernet in highlighting the problem in Russia and internationally, so watch this space.

A case study of how bad science spreads

Fake news has been the go-to criticism of the media for some politicians, which in turn has been rejected as propaganda and fear-mongering by journalists. However, as former journalist Simon Linacre argues, the fourth estate needs to have its own house in order first, and ensure it is not tripped up by predatory journals.


I class myself as a ‘runner’, but only in the very loosest sense. Inspired by my wife taking up running a few years ago, I decided I should exercise consistently after the numerous half-hearted, unsuccessful attempts I had made over the years. Three years later I have done a couple of half-marathons, run every other day, and track my performance obsessively on Strava. I have also recently started to read articles on running online, and have subscribed to the magazine Runners World. So yes, I think I may actually be a runner now.

But I’m also an experienced journalist, a huge cynic, and have a bulls*** radar the size of the Grand Canyon, so even while relaxing with my magazine I like to think I can spot fakery a mile off. And so it proved earlier this summer while reading a piece on how hill running can improve your fitness. This was music to my ears as someone who lives half-way up a valley side, but my interest was then piqued when I saw a reference to the study that formed the basis for the piece, which was to an article in the International Journal of Scientific Research. Immediately, I smelt a rat. “There is no way that is the name of a reputable, peer-reviewed journal,” I thought. And I was right.

But that wasn’t even half of the problem.

After checking Cabells’ Predatory Reports database, I found not one but TWO journals are listed on the database with that name, both with long lists of breaches of the Cabells’ criteria that facilitate the identification of predatory journals. I was still curious as to the nature of the research, as it could have been legitimate research in an illegitimate journal, or just bad research, full stop. As it turned out, neither journal had ever published any research on hill running and the benefits of increasing VO2 max. So where was the story from?

After some more digging, an article matching the details in the Runners World piece could be found in a third similarly-named journal, the International Journal of Scientific and Research Publications. The article, far from the recent breakthrough suggested in the August 2020 issue of Runners World, was actually published in August 2017 by two authors from Addis Ababa University in Ethiopia. While the science of the article seems OK, the experiment that produced the results involved just 32 people over 12 weeks, which means it really needs further validation with larger samples to confirm its findings. Furthermore, while the journal itself was not included in Cabells’ Predatory Reports database, a review found significant failings, including unusually quick peer review processes and, more seriously, that the “owner/Editor of the journal or publisher falsely claims academic positions or qualifications”. The journal has subsequently been added to Predatory Reports, and the article itself has never been cited in the three years since publication.

Yet one question remains: how did a relatively obscure article, published in a predatory journal and never cited, find its way into a news story in a leading consumer magazine? Interestingly, similar research was quoted on MSN.com in May 2020, which also cited the International Journal of Scientific Research, while other sites have quoted the same research but attributed it to the International Journal of Scientific and Research Publications. It appears likely that, having been quoted online once, the same story has been doing the rounds for three years like a game of ‘Telephone’, all based on research that was never cited, may not have been peer reviewed in the first place, used a small sample size and was published in a predatory journal.

While no damage has been done here – underlying all this, it does make sense that hill running can aid overall performance – one need only think about the string of recent health news stories around the coronavirus to see how one unverified article could sweep like wildfire through news outlets and online. This is the danger that predatory journals pose.

No time for rest

This week The Economist published an article on predatory publishing following collaboration with Cabells. Simon Linacre looks into the findings and points to how a focus on education can avert a disaster for Covid-19 and other important research.


One of the consequences of the all-consuming global interest in the coronavirus pandemic is that it has catapulted science and scientists right onto the front pages and into the public’s range of vision. For the most part, this should be a good thing, as there quite rightly has to be a respect and focus on what the facts say about one of the most widespread viruses there has ever been. However, there have been some moments where science itself has been undermined by some of the rather complex structures that support it. And like it or not, scholarly communication is one of them.

Let’s take the perspective of, say, a mother who is worried about the safety of her kids when they go back to school. Understandably, she starts to look online and in the media for what the science says, as many governments have sought to quell fears by saying they are ‘following the science’. But once online, she is faced with a tangled forest of articles, journals, jargon, paywalls and small print, with the media seemingly supporting contradictory statements depending on the newspaper or website she reads. For example, this week’s UK newspapers have led on how the reduction of social distancing from 2m to 1m can double the infection rate, or be many times better than having no social distancing – both factually accurate and from the same peer-reviewed study in The Lancet.

Another area that has seen a good deal of coverage has been preprints, and how they can speed up the dissemination of science… or have the capability of disseminating false data and findings due to lack of peer review, again depending on where you cast your eye. The concerns represented by media bias, the complexity of information and lack of peer review all combine into one huge problem that could be coming down the line very soon, and that is the prospect of predatory journals publishing erroneous, untested information as research in one of the thousands of predatory journals currently active.

This week Cabells collaborated with The Economist to focus on some of these issues, highlighting that:

  • Around a third of journals on both the Cabells Journal Whitelist and Blacklist focus on health, while predatory journals in subjects such as maths and physics outnumber legitimate ones
  • Geography plays a significant role, with many more English language predatory journals based in India and Nigeria than reliable ones
  • The average output of a predatory journal is 50 articles a year, although 60% of these will never be cited (compared to 10% for legitimate journals)
  • Despite the lack of peer review or any of the usual publishing checks, an estimated 250,000 articles each year are cited in other journals
  • The most common severe behaviors (which are liable to lead to inclusion in the Blacklist) are articles missing from issues or archives, the lack of an editor or editorial board on the website, and journals claiming misleading metrics or inclusion in well-known indexes.

Understandably, The Economist makes the link between so much fake or unchecked science being published and the current coronavirus threat, concluding: “Cabells’ guidelines will only start to catch dodgy studies on COVID-19 once they appear in predatory journals. But the fact that so many “scholars” use such outlets means that working papers on the disease should face extra-thorough scrutiny.” We have been warned.

Doing your homework…and then some

Researchers have always known the value of doing their homework – they are probably the best there is at leaving no stone unturned. But that has to apply to the work itself. Simon Linacre looks at the importance of ‘researching your research’ and using the right sources and resources.


Depending on whether you are a glass half full or glass half empty kind of person, it is either a great time for promoting the value of scientific research, or science is undergoing a crisis of confidence. On the plus side, the value placed on research to lead us out of the COVID-19 pandemic has been substantial, and rarely have scientists been so much to the fore on such an important global issue. On the other hand, there have been uprisings against lockdowns in defiance of science, and numerous cases of fake science related to the coronavirus. Whether it is COVID-19, Brexit, or global warming, we seem to be in an age of wicked problems and polarising opinions on a global scale.

If we assume that our glass is more full than empty in these contrarian times, and try to maintain a sense of optimism, then we should be celebrating researchers and the contribution they make. But that contribution has to be validated in scientific terms, and its publication validated in such a way that users can trust in what it says. For the first part, there has been a good deal of discussion in academic circles and even in the press about the nature of preprints, and how users have to take care to understand that they may not yet have been peer reviewed, so any conclusions should not yet be taken as read.

For the second part, however, there is a concern that researchers in a hurry to publish their research may run afoul of predatory publishers, or simply publish their articles in the wrong way, in the wrong journal for the wrong reasons. This was highlighted to me when a Cabells customer alerted us to a new website called Academic Accelerator. I will leave people to make their own minds up as to the value of the site, however, a quick test using academic research on accounting (where I managed journals for over a decade, so know the area) showed that:

  • Attempting to use the ‘Journal Writer’ function for an accounting article suggested published examples from STM journals
  • Trying to use the ‘Journal Matcher’ function for an accounting article again only recommended half a dozen STM journals as a suitable destination for my research
  • Data for individual journals seems to have been crowdsourced from users, and didn’t match the actual data for many journals in the discipline.

The need for researchers to publish as quickly as possible has perhaps never been greater, and the tools and options for them to do so have arguably never been as open. However, with this comes a gap in the market that many operators may choose to exploit, and at this point, the advice for researchers is the same as ever. Always research your research – know what you are publishing and where you are publishing it, and what the impact will be both in scholarly terms and in real-world terms. In an era where working from home is the norm, there is no excuse for researchers not to do their homework on what they publish.


***REMINDER***

If you haven’t already completed our survey, there is still time to provide your opinion. Cabells is undertaking a review of the current branding for ‘The Journal Whitelist’ and ‘The Journal Blacklist’. As part of this process, we’d like to gather feedback from the research community to understand how you view these products, and which of the proposed brand names you prefer.

Our short survey should take no more than ten minutes to complete, and can be taken here.

As thanks for your time, you’ll have the option to enter into a draw to win one of four Amazon gift vouchers worth $25 (or your local equivalent). More information is available in the survey.

Many thanks in advance for your valuable feedback!

Simon Linacre

Unintended consequences: how will COVID-19 shape the future of research?

What will happen to global research output during coronavirus lockdowns? Simon Linacre looks at how the effects in different countries and disciplines could shape the future of research and scholarly publishing.


We all have a cabin fever story now after many countries have entered into varying states of lockdown. Mine is how the little things have lifted what has been quite an oppressive mood – the smell of buns baking in the oven; lying in bed that little bit longer in a morning; noticing the newly born lambs that have suddenly appeared in nearby fields. All of these would be missed during the usual helter-skelter days we experience during the week. But things are very far from usual in these coronavirus-infected days. And any distraction is a welcome one.

On a wider scale, the jury is still very much out on how researchers are dealing with the situation, let alone how things will be affected in the future. What we do know is that in the developed countries most affected by the virus, universities have been closed down, students sent home and labs mothballed. In some countries, such as Italy, there are fears that important research work could be lost in the shutdown, while in the US there is concern for the welfare of the people (and animals) currently in the middle of clinical trials. Overall, everyone hopes that the research into the coronavirus itself yields some quick results.

On the flip side, however, for those researchers not confined to labs or field research, this period could accelerate their work. For those in social science or humanities freed from the commute, teaching commitments and office politics of daily academic life, the additional time will no doubt be put to good use. More time to set up surveys; more time for reading; more time for writing papers. Increased research output is perhaps inevitable in those areas where academics are not tied to labs or other physical experiments.

These two countervailing factors may cancel each other out, or one may prevail over the other. As such, the scholarly publishing community does not yet know what to expect down the line. In the short term, it has focused on making related content freely accessible, such as the dedicated site from The Lancet. However, we may also see greater pressure for research in globally important areas to be made open access at the source, given how well researchers and networks appear to have worked together in the short time the virus has been at large.

Again, unintended consequences could be one of the key legacies of the crisis once the virus has died down. Organizations concerned about how their people can work from home will no doubt have their fears allayed, while the positive environmental impact of less travelling will be difficult to give up. For publishers and scholars, understanding how their research could have an impact when the world is in crisis may change their research aims forever.

The future of research evaluation

Following last week’s guest post from Rick Anderson on the risks of predatory journals, we turn our attention this week to legitimate journals and the wider issue of evaluating scholars based on their publications. With this in mind, Simon Linacre recommends a broad-based approach that keeps the goal of such evaluations permanently front and center.


This post was meant to be ‘coming to you LIVE from London Book Fair’, but as you may know, this event has been canceled, like so many other conferences and public gatherings in the wake of the coronavirus outbreak. While it is sad to miss the LBF event, meetings will take place virtually or elsewhere, and it is to be hoped the organizers can bring it back bigger and better than ever in 2021.

Some events are still going ahead, however, in the UK, and it was my pleasure to attend the LIS-Bibliometrics Conference at the University of Leeds last week to hear the latest thinking on journal metrics and performance management for universities. The day-long event was themed ‘The Future of Research Evaluation’, and it included both longer talks from key people in the industry, and shorter ‘lightning talks’ from those implementing evaluation systems or researching their effectiveness in different ways.

There was a good deal of debate, both on the floor and on Twitter (see #LisBib20 to get a flavor), with perhaps the most interest in speaker Dr. Stephen Hill, who is Director of Research at Research England, and chair of the steering group for the 2021 Research Excellence Framework (REF) in the UK. For those of us wishing to see crumbs from his table in the shape of a steer for the next REF, we were sadly disappointed as he was giving nothing away. However, what he did say was that he saw four current trends shaping the future of research evaluation:

  • Outputs: increasingly they will be diverse, include things like software code, be more open, more collaborative, more granular and potentially interactive rather than ‘finished’
  • Insight: different ways of understanding may come into play, such as indices measuring interdisciplinarity
  • Culture: the context of research and how it is received in different communities could become explored much more
  • AI: artificial intelligence will become a bigger player both in terms of the research itself and how the research is analyzed, e.g. the Unsilo tools or so-called ‘robot reviewers’ that can remove any reviewer bias.

Rather revealingly, Dr. Hill suggested that a fifth trend might be societal impact, despite the fact that such impact has been one of the defining traits of both the current and previous REFs. Perhaps the full picture regarding impact has yet to be understood, and there is some suspicion that many academics have yet to buy in to the idea at all. Indeed, one of the takeaways from the day was that there was little input from academics themselves, and one wonders what they might have contributed to the debate about the future of research evaluation; it is their research being evaluated, after all.

There was also a degree of distrust among the librarians present towards publishers, and one delegate poll should be of particular interest to them as it showed what those present thought were the future threats and challenges to research evaluation. The top three threats were identified as publishers monopolizing the area, commercial ownership of evaluation data, and vendor lock-in – a result which led to a lively debate around innovation and how solutions could be developed if there was no commercial incentive in place.

It could be argued that while the UK has taken the lead on impact and been busy experimenting with the concept, the rest of the higher education world has been catching up with a number of different takes on how to recognize and reward research that has a demonstrable benefit. All this means that we are yet to see the full ‘impact of impact’, and certainly we at Cabells are looking carefully at what potential new metrics could aid this transition. Someone at the event said that bibliometrics should be “transparent, robust and reproducible,” and this sounds like a good guiding principle for whatever research is being evaluated.

Will academia lead the way?

Universities are usually expected to have all the answers – they are full of clever people, after all. But sometimes they need help to figure out specific problems. Simon Linacre attended a conference recently where higher education was being asked to solve nothing less than climate change, poverty, access to clean water and more than a dozen similar issues. How can academic institutions respond?


Most people will be aware of the United Nations and the Sustainable Development Goals (SDGs), which the UN adopted to address 17 of the world’s biggest problems by 2030. Solving the climate change crisis by that date has perhaps attracted the most attention, but all of the goals present significant challenges to global society.

Universities are very much at the heart of this debate, and there seems to be an expectation that, because of their position in facilitating research, they hold the key to solving these major problems. So far they appear to have taken up the challenge with some gusto, with new research centers and funding opportunities appearing all the time for academics aiming to contribute to these global targets in some way. The trouble is, many academics don’t seem to have received the memo on what they should be researching.
 
Following several conversations at conferences and with senior management at a number of universities, two themes recur when it comes to existing research programs: there are problems with both ‘culture’ and ‘capabilities’. On culture, university hierarchies report that their faculty members are as curious and keen to do research as ever, but are far less interested when told to focus their energies on certain topics. And when they do, they lack the motivation or incentives to ensure the outcomes of their research deliver real-world impact. For the academic, impact still means a smallish number with three decimal places: the Impact Factor.

In addition, when it comes to the capability to undertake the kind of research likely to move the SDGs forward, academics have had no training, guidance, or support in what to do. In the UK, for example, where understanding and exhibiting impact is further advanced than anywhere else in the world thanks to the Research Excellence Framework (REF), there still seem to be major issues with academics focusing on research that will get published rather than research that will change things. In one conversation, while I was referring to research outcomes as real-world benefits, an academic was talking about the quality of the journals in which research would be published. Both are legitimate research outcomes, but publication remains way ahead in terms of cultural expectations, and internal incentives lag far behind the overarching aims stated by governments and research organizations.

Perhaps we are being too optimistic to expect the grinding gears of academia to shift smoothly towards a major culture change, and perhaps the small gains being made, together with the work done in the public space by the likes of Greta Thunberg, will ultimately be enough to enable real change. But when the stakes are so high and the benefits so great, maybe our expectations should weigh heavily on academia; its members are probably the people best placed to solve the world’s problems, after all.