What will happen to global research output during coronavirus lockdowns? Simon Linacre looks at how the effects in different countries and disciplines could shape the future of research and scholarly publishing.
We all have a cabin fever story now that many countries have entered varying states of lockdown. Mine is how the little things have lifted what has been quite an oppressive mood: the smell of buns baking in the oven; lying in bed that little bit longer in the morning; noticing the newborn lambs that have suddenly appeared in nearby fields. All of these would be missed during the usual helter-skelter days of the working week. But things are very far from usual in these coronavirus-infected days, and any distraction is a welcome one.
On a wider scale, the jury is still very much out on how researchers are dealing with the situation, let alone how their work will be affected in the future. What we do know is that in the developed countries hit hardest by the virus, universities have been closed, students sent home, and labs mothballed. In some countries, such as Italy, there are fears that important research could be lost in the shutdown, while in the US there is concern for the welfare of the people – and animals – currently in the middle of clinical trials. Above all, everyone hopes that research into the coronavirus itself yields quick results.
On the flip side, however, for those researchers not tied to labs or fieldwork, this period could accelerate their work. For those in the social sciences or humanities, freed from the commute, teaching commitments, and office politics of daily academic life, the additional time will no doubt be put to good use: more time to set up surveys, more time for reading, more time for writing papers. Increased research output is perhaps inevitable in areas where academics are not bound to labs or other physical experiments.
These two countervailing factors may cancel each other out, or one may prevail over the other. As such, the scholarly publishing community does not yet know what to expect down the line. In the short term, it has focused on making related content freely accessible (such as a dedicated resource site from The Lancet). However, we may see greater pressure for research in areas of global importance to be made open access at the source, given how well researchers and their networks appear to have worked together in the short time the virus has been at large.
Unintended consequences could be one of the key legacies of the crisis once the virus has died down. Organizations worried about whether their people can work from home will no doubt have their fears allayed, while the positive environmental impact of reduced travel will be difficult to give up. For publishers and scholars, understanding the impact their research can have when the world is in crisis may change their research aims forever.
Following last week’s guest post from Rick Anderson on the risks of predatory journals, we turn our attention this week to legitimate journals and the wider issue of evaluating scholars based on their publications. With this in mind, Simon Linacre recommends a broad-based approach that keeps the goal of such evaluations permanently front and center.
This post was meant to be ‘coming to you LIVE from London Book Fair’, but as you may know, this event has been canceled, like so many other conferences and other public gatherings in the wake of the coronavirus outbreak. While it is sad to miss the LBF event, meetings will take place virtually or in other places, and it is to be hoped the organizers can bring it back bigger and better than ever in 2021.
Some events in the UK are still going ahead, however, and it was my pleasure to attend the LIS-Bibliometrics Conference at the University of Leeds last week to hear the latest thinking on journal metrics and performance management for universities. The day-long event was themed ‘The Future of Research Evaluation’ and included both longer talks from key people in the industry and shorter ‘lightning talks’ from those implementing evaluation systems or researching their effectiveness in different ways.
There was a good deal of debate, both on the floor and on Twitter (see #LisBib20 to get a flavor), with perhaps the most interest in speaker Dr. Stephen Hill, Director of Research at Research England and chair of the steering group for the 2021 Research Excellence Framework (REF) in the UK. Those of us hoping for crumbs from his table in the shape of a steer on the next REF were sadly disappointed, as he gave nothing away. He did, however, identify four current trends shaping the future of research evaluation:
Outputs: these will increasingly be diverse (including things like software code), more open, more collaborative, more granular, and potentially interactive rather than ‘finished’
Insight: different ways of understanding may come into play, such as indices measuring interdisciplinarity
Culture: the context of research and how it is received in different communities could be explored much more
AI: artificial intelligence will become a bigger player, both in the research itself and in how research is analyzed, e.g. the Unsilo tools or so-called ‘robot reviewers’ that aim to remove reviewer bias.
Rather revealingly, Dr. Hill suggested a fifth trend might be societal impact, despite the fact that such impact has been one of the defining traits of both the current and previous REFs. Perhaps the full picture regarding impact has yet to be understood, and there is some suspicion that many academics have yet to buy in to the idea at all. Indeed, one of the takeaways from the day was that there was little input from academics themselves, and one wonders what they might have contributed to a discussion about the future of research evaluation; it is their research being evaluated, after all.
There was also a degree of distrust towards publishers among the librarians present, and one delegate poll should be of particular interest to publishers: it showed what those present thought were the future threats and challenges to research evaluation. The top three threats identified were publishers monopolizing the area, commercial ownership of evaluation data, and vendor lock-in – a result which led to a lively debate about innovation and how solutions could be developed if no commercial incentive were in place.
It could be argued that while the UK has taken the lead on impact and been busy experimenting with the concept, the rest of the higher education world has been catching up, with a number of different takes on how to recognize and reward research that has a demonstrable benefit. All this means we have yet to see the full ‘impact of impact,’ and certainly we at Cabells are looking carefully at which potential new metrics could aid this transition. Someone at the event said that bibliometrics should be “transparent, robust and reproducible,” and that sounds like a good guiding principle for whatever research is being evaluated.
Editor’s Note: This post is by Rick Anderson, Associate Dean for Collections & Scholarly Communication in the J. Willard Marriott Library at the University of Utah. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro, and as Director of Resource Acquisition at the University of Nevada, Reno. Rick serves on numerous editorial and advisory boards and is a regular contributor to the Scholarly Kitchen. He has served as president of the North American Serials Interest Group (NASIG), and is a recipient of the HARRASSOWITZ Leadership in Library Acquisitions Award. In 2015 he was elected President of the Society for Scholarly Publishing. He serves as an unpaid advisor on the library boards of numerous publishers and organizations including bioRxiv, Elsevier, JSTOR, and Oxford University Press.
This morning I had an experience that is now familiar, and in fact a several-times-daily occurrence—not only for me, but for virtually every one of my professional colleagues: I was invited to submit an article to a predatory journal.
How do I know it was a predatory journal? Well, there were a few indicators, some strong and some merely suggestive. For one thing, the solicitation addressed me as “Dr. Rick Anderson,” a relatively weak indicator given that I’m referred to that way on a regular basis by people who assume that anyone with the title “Associate Dean” must have a doctoral degree.
However, there were other elements of this solicitation that indicated much more strongly that this journal cares not at all about the qualifications of its authors or the quality of its content. The strongest of these was the opening sentence of the message, which hailed my expertise “on Heart.”
This gave me some pause, since I have no expertise whatsoever “on Heart,” and have never published anything on any topic even tangentially related to medicine. Obviously, no legitimate journal would consider me a viable target for a solicitation like this.
Another giveaway: the address given for this journal is 1805 N Carson St., Suite S, Carson City, NV. As luck would have it, I lived in northern Nevada for seven years and am quite familiar with Carson City. The northern end of Carson Street—a rather gritty stretch of discount stores, coffee shops, and motels with names designed to signal affordability—didn’t strike me as an obvious location for any kind of multi-suite office building, let alone a scientific publishing office, but I checked on Google Maps just to see. I found that 1805 North Carson Street is a non-existent address; 1803 North Carson Street is occupied by the A to Zen Thrift Shop, and Carson Coffee is at 1825. There is no building between them.
Having thus had my suspicion stoked, I decided to give this journal a real test. I created a nonsense paper consisting of paragraphs taken at random from articles originally published in a legitimate journal of cardiothoracic medicine, and gave it a title consisting of syntactically coherent but otherwise randomly-chosen terms taken from the discipline. I invented several fictional coauthors, created an email account under the assumed name of the lead author, submitted the manuscript via the journal’s online system and settled down to wait for a decision (which was promised within “14 days,” following the journal’s usual “double blind peer review process”).
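For readers curious about the mechanics, here is a minimal sketch of the kind of cut-and-paste assembly described above. It assumes a hypothetical local folder of plain-text source articles; the folder name, term list, and function are my own illustrative inventions, not the actual script used for this test:

```python
import random
from pathlib import Path

def assemble_nonsense_paper(article_dir: str, n_paragraphs: int = 10) -> str:
    """Pull paragraphs at random from different source articles and join them."""
    paragraphs = []
    for path in Path(article_dir).glob("*.txt"):
        # Split each source article into paragraphs on blank lines.
        chunks = [p.strip() for p in path.read_text().split("\n\n") if p.strip()]
        paragraphs.extend(chunks)
    # Sample without replacement so no paragraph repeats.
    return "\n\n".join(random.sample(paragraphs, min(n_paragraphs, len(paragraphs))))

# A title built from syntactically coherent but randomly combined field terms
# (illustrative placeholders only).
TERMS = ["Transcatheter", "Mitral-Valve", "Ablation", "Perfusion", "Bypass"]
title = f"{random.choice(TERMS)} {random.choice(TERMS)} Outcomes: A Retrospective Review"
print(title)
print(assemble_nonsense_paper("cardiothoracic_articles/"))
```

The point of such an assembly is that any reader who compared two adjacent paragraphs would see immediately that they do not belong together.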
***
While we wait for word from this journal’s presumably distinguished team of expert peer reviewers, let’s talk a little bit about the elephant in the room: the fact that the journal we’re testing purports to publish peer-reviewed research on the topic of heart surgery.
The problem of deceptive or “predatory” publishing is not new; it has been discussed and debated at length, and it might seem as if there’s not much new to be said about it: as just about everyone in the world of scholarly publishing now knows, a large and apparently growing number of scam artists have created thousands upon thousands of journals that purport to publish rigorously peer-reviewed science but will, in fact, publish whatever is submitted (good or bad) as long as it’s accompanied by an article processing charge. Some of these operations go to great expense to appear legitimate and realize significant revenues from their efforts; OMICS (which was subject to a $50 million judgment after being sued by the Federal Trade Commission for deceptive practices) is probably the biggest and most famous predatory publisher. But most such outfits are relatively small; many seem to be minimally staffed fly-by-night operations that have invested in little more than a website and an online payment system. The fact that so many of these “journals” exist and publish so many articles is a testament to either the startling credulity or the distressing dishonesty of scholars and scientists the world over (or, perhaps, both).
But while the issue of predatory publishing, and its troubling implications for the integrity of science and scholarship, is discussed regularly in broad terms within the scholarly-communication community, I want to focus here on one especially concerning aspect of the phenomenon: predatory journals that falsely claim to publish rigorously peer-reviewed science in fields that have a direct bearing on human health and safety.
To get a general sense of the scope of this issue, I did some searching within Cabells’ Predatory Reports to see how many journals from such disciplines are listed in that database. My findings were troubling. For example, consider the number of predatory journals found in Predatory Reports that publish in the following disciplines (based on searches conducted on 25 and 26 November 2019):
Disciplinary Keyword     # of Titles
Medicine                 3,818
Clinical                 300
Cancer                   126
Pediatrics               64
Nutrition                88
Surgery                  159
Neurology                39
Climate                  25
Brain                    24
Neonatal                 16
Cardiovascular           51
Dentistry                44
Gynecology               44
Alzheimer’s              10
Structural Engineering   10
Anesthesiology           21
Oncology                 74
Diabetes                 51
Obviously, it’s concerning when scholarship or science of any kind is falsely represented as having been rigorously reviewed, vetted, and edited. But it’s equally obvious that not all scholarship or science has the same impact on human health and safety. A fraudulent study in the field of sociology certainly has the capacity to do significant damage—but perhaps not the same kind or amount of damage as a fraudulent study in the field of pediatric anesthesiology, or diagnostic oncology. The fact that Cabells’ Predatory Reports has identified nearly 4,000 predatory journals in the general field of medicine is certainly cause for very serious concern.
At the risk of offending my hosts, I’ll just add here that this fact leads me to really, really wish that Predatory Reports were available to the general public at no charge. Recognizing, of course, that a product like this can’t realistically be maintained at zero cost (or anything close to it), this raises an important question: what would it take to make this resource available to all?
I can think of one possible solution. Two very large private funding agencies, the Bill & Melinda Gates Foundation and the Wellcome Trust, have demonstrated their willingness to put their money where their mouths are when it comes to supporting open access to science; both organizations require funded authors to make the published results of their research freely available to all, and allow them to use grant funds to pay the attendant article-processing charges. For a tiny, tiny fraction of their annual spend on research and on open-access article processing charges, either one of these grantmakers could underwrite the cost of making Predatory Reports freely available. How tiny? I don’t know what Cabells’ costs are, but let’s say, for the sake of argument, that it costs $10 million per year to maintain the Predatory Reports product, with a modest amount of profit built in. That would represent roughly two-tenths of a percent (0.2%) of the Gates Foundation’s annual grantmaking, or about 0.23% of Wellcome’s.
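To make the arithmetic concrete, here is a quick back-of-the-envelope check. The annual spending figures below are not taken from the post itself; they are round numbers backed out of its own percentages (about $5 billion for Gates, about $4.35 billion for Wellcome) and should be treated as illustrative assumptions:

```python
# Back-of-the-envelope check of the percentages above.
# Budget figures are illustrative assumptions implied by the post's math,
# not audited numbers.
PREDATORY_REPORTS_COST = 10_000_000          # the post's hypothetical annual cost

budgets = {
    "Gates Foundation": 5_000_000_000,       # assumed annual grantmaking (~$5B)
    "Wellcome Trust":   4_350_000_000,       # assumed annual spend (~$4.35B)
}

for funder, budget in budgets.items():
    share = PREDATORY_REPORTS_COST / budget * 100
    print(f"{funder}: {share:.2f}% of annual spend")

# Output:
# Gates Foundation: 0.20% of annual spend
# Wellcome Trust: 0.23% of annual spend
```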
This, of course, is money that they would then not be able to use to directly subsidize research. But since both funders already commit a much, much larger percentage of their annual grantmaking to APCs, this seems like a redirection of funds that would yield tremendous value per dollar.
Of course, underwriting a service like Cabells’ Predatory Reports would entail acknowledging that predatory publishing is real, and a problem. Oddly enough, this is not universally acknowledged, even among those who (one might think) ought to be most concerned about the integrity of the scholcomm ecosystem and about the reputation of open access publishing. Unfortunately, among many members of that ecosystem, APC-funded OA publishing is largely—and unfairly—conflated with predatory publishing.
***
Well, it took much longer than promised (or expected), but after receiving, over a period of two months, occasional messages telling me that my paper was in the “final peer review process,” I finally received the long-awaited response in late January: “our” paper had been accepted for publication!
[Image: Predatory Reports listing for the Journal of Cardiothoracic Surgery and Therapeutics]
Over the course of several subsequent weeks I received a galley proof for my review—along with an invoice for an article-processing charge of $1,100. In my guise as lead author, I expressed shock and surprise at this charge; no one had said anything to me about an APC when my work was solicited for publication. I received a conciliatory note from the editor, explaining that the lack of notice was due to a staff error, and further explaining that the Journal of Cardiothoracic Surgery and Therapeutics is an open-access journal and uses APCs to offset its considerable costs. He said that by paying this fee and allowing publication to go forward I would be ensuring that the article “will be available freely which allows the scientific community to view, download, distribution of an article in any medium (provided that the original work is properly cited) thereby increasing the views of article.” He also promised that our article would be indexed “in Crossref and many other scientific databases.” I responded that I understood the model but had no funds available to pay the fee, and would therefore have to withdraw the paper. “You may consider our submission withdrawn,” I concluded.
Then something interesting happened. My final communication bounced back. I was informed by a system-generated message that my email had been “waitlisted” by a service called Boxbe, and that I would have to add myself to the addressee’s “guest list” in order for it to be delivered. Apparently, the editor no longer wanted to hear from me.
Also interesting: despite my nonpayment of the APC, the article has now been published and can be seen here. It will be interesting to see how long it remains in the journal.
We need to be very clear about one thing here: the problem with my article is not that it represents low-quality science. The problem with my article is that it is utterly incoherent nonsense. Not only is its content entirely plagiarized, it’s so randomly assembled from such disparate sources that it could not possibly be mistaken for an actual study by any informed reader who took the time to read any two of its paragraphs. Furthermore, it was “written” by authors who do not exist, whose names were taken from famous figures in history and literature, and whose institutional affiliations are entirely fictional. (There is no “Brockton State University,” nor is there a “Massapequa University,” nor is there an organization called the “National Clinics of Health.”)
What all of this means is that the fundamental failing of this journal—as it is of all predatory journals—is not its low standards, or the laxness of its peer review and editing. Its fundamental failing is that despite its claims, and despite charging authors for these services, it has no standards at all, performs no peer review, and does no editing. If it did have any standards whatsoever, and if it performed even the most perfunctory peer review and editorial oversight, it would have detected the radical incoherence of my paper immediately.
One might reasonably ask, though: if my paper is such transparently incoherent nonsense, why does its publication pose any danger? No surgeon in the real world will be led by this paper to do anything in an actual surgical situation, so surely there’s no risk of it affecting a patient’s actual treatment in the real world.
This is true of my paper, no doubt. But what the acceptance and publication of my paper demonstrates is not only that the Journal of Cardiothoracic Surgery and Therapeutics will publish transparent nonsense, but also—more importantly and disturbingly—that it will publish anything. Dangerously, this includes papers that may not consist of actual nonsense, but that were flawed enough to be rejected by legitimate journals, or that were written by the employees of device makers or drug companies that have manipulated their data so as to promote their own products, or that were written by dishonest surgeons who have generally legitimate credentials but are pushing crackpot techniques or therapies. The danger illustrated by my paper is not so much that predatory journals will publish literal nonsense; the more serious danger is that they will uncritically publish seriously flawed science while presenting it as carefully-vetted science.
In other words, the defining characteristic of a predatory journal is not that it’s a “low-quality” journal. The defining characteristic of a predatory journal is that it falsely claims to provide quality control while in fact providing none, precisely because doing so would restrict its revenue flow. This isn’t to say that no legitimate science ever gets published in predatory journals; I’m sure quite a bit does, since there’s no reason why a predatory journal would reject it, any more than it would reject the kind of utter garbage this particular journal has now published under the purported authorship of Jackson X. Pollock. But the appearance of some legitimate science does nothing to resolve the fundamental issue here, which is one of scholarly and scientific fraud.
Such fraud is distressing wherever it occurs. In the context of cardiothoracic surgery—along with all of the other health-related disciplines in which predatory journals currently publish—it’s terrifying.