Following last week’s guest post from Rick Anderson on the risks of predatory journals, we turn our attention this week to legitimate journals and the wider issue of evaluating scholars based on their publications. With this in mind, Simon Linacre recommends a broad-based approach that keeps the goal of such evaluation permanently front and center.
This post was meant to be ‘coming to you LIVE from London Book Fair’, but as you may know, the event has been canceled, like so many other conferences and public gatherings in the wake of the coronavirus outbreak. While it is sad to miss LBF, meetings will take place virtually or elsewhere, and it is to be hoped the organizers can bring the fair back bigger and better than ever in 2021.
Some events are still going ahead, however, in the UK, and it was my pleasure to attend the LIS-Bibliometrics Conference at the University of Leeds last week to hear the latest thinking on journal metrics and performance management for universities. The day-long event was themed ‘The Future of Research Evaluation’, and it included both longer talks from key people in the industry, and shorter ‘lightning talks’ from those implementing evaluation systems or researching their effectiveness in different ways.
There was a good deal of debate, both on the floor and on Twitter (see #LisBib20 to get a flavor), with perhaps the most interest in speaker Dr. Stephen Hill, who is Director of Research at Research England, and chair of the steering group for the 2021 Research Excellence Framework (REF) in the UK. For those of us wishing to see crumbs from his table in the shape of a steer for the next REF, we were sadly disappointed as he was giving nothing away. However, what he did say was that he saw four current trends shaping the future of research evaluation:
Outputs: increasingly they will be diverse, include things like software code, be more open, more collaborative, more granular and potentially interactive rather than ‘finished’
Insight: different ways of understanding may come into play, such as indices measuring interdisciplinarity
Culture: the context of research and how it is received in different communities could be explored much more
AI: artificial intelligence will become a bigger player both in terms of the research itself and how the research is analyzed, e.g. the Unsilo tools or so-called ‘robot reviewers’ that can remove any reviewer bias.
Rather revealingly, Dr. Hill suggested that a fifth trend might be societal impact, despite the fact that such impact has been one of the defining traits of both the current and previous REFs. Perhaps the full picture has yet to be understood regarding impact, and there is some suspicion that many academics have yet to buy into the idea at all. Indeed, one of the takeaways from the day was that there was little input into the discussion from academics themselves, and one wonders what they might have contributed to a debate about the future of research evaluation; it is their research being evaluated, after all.
There was also a degree of distrust among the librarians present towards publishers, and one delegate poll should be of particular interest to them as it showed what those present thought were the future threats and challenges to research evaluation. The top three threats were identified as publishers monopolizing the area, commercial ownership of evaluation data, and vendor lock-in – a result which led to a lively debate around innovation and how solutions could be developed if there was no commercial incentive in place.
It could be argued that while the UK has taken the lead on impact and been busy experimenting with the concept, the rest of the higher education world has been catching up, with a number of different takes on how to recognize and reward research that has a demonstrable benefit. All this means we are yet to see the full ‘impact of impact,’ and certainly we at Cabells are looking carefully at what potential new metrics could aid this transition. Someone at the event said that bibliometrics should be “transparent, robust and reproducible,” and that sounds like a good guiding principle for whatever research is being evaluated.
Editor’s Note: This post is by Rick Anderson, Associate Dean for Collections & Scholarly Communication in the J. Willard Marriott Library at the University of Utah. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro and as Director of Resource Acquisition at the University of Nevada, Reno. Rick serves on numerous editorial and advisory boards and is a regular contributor to the Scholarly Kitchen. He has served as president of the North American Serials Interest Group (NASIG), and is a recipient of the HARRASSOWITZ Leadership in Library Acquisitions Award. In 2015 he was elected President of the Society for Scholarly Publishing. He serves as an unpaid advisor on the library boards of numerous publishers and organizations including bioRxiv, Elsevier, JSTOR, and Oxford University Press.
This morning I had an experience that is now familiar, and in fact a several-times-daily occurrence—not only for me, but for virtually every one of my professional colleagues: I was invited to submit an article to a predatory journal.
How do I know it was a predatory journal? Well, there were a few indicators, some strong and some merely suggestive. For one thing, the solicitation addressed me as “Dr. Rick Anderson,” a relatively weak indicator given that I’m referred to that way on a regular basis by people who assume that anyone with the title “Associate Dean” must have a doctoral degree.
However, there were other elements of this solicitation that indicated much more strongly that this journal cares not at all about the qualifications of its authors or the quality of its content. The strongest of these was the opening sentence of the message:
This gave me some pause, since I have no expertise whatsoever “on Heart,” and have never published anything on any topic even tangentially related to medicine. Obviously, no legitimate journal would consider me a viable target for a solicitation like this.
Another giveaway: the address given for this journal is 1805 N Carson St., Suite S, Carson City, NV. As luck would have it, I lived in northern Nevada for seven years and am quite familiar with Carson City. The northern end of Carson Street—a rather gritty stretch of discount stores, coffee shops, and motels with names designed to signal affordability—didn’t strike me as an obvious location for any kind of multi-suite office building, let alone a scientific publishing office, but I checked on Google Maps just to see. I found that 1805 North Carson Street is a non-existent address; 1803 North Carson Street is occupied by the A to Zen Thrift Shop, and Carson Coffee is at 1825. There is no building between them.
Having thus had my suspicion stoked, I decided to give this journal a real test. I created a nonsense paper consisting of paragraphs taken at random from articles originally published in a legitimate journal of cardiothoracic medicine, and gave it a title consisting of syntactically coherent but otherwise randomly-chosen terms taken from the discipline. I invented several fictional coauthors, created an email account under the assumed name of the lead author, submitted the manuscript via the journal’s online system and settled down to wait for a decision (which was promised within “14 days,” following the journal’s usual “double blind peer review process”).
While we wait for word from this journal’s presumably distinguished team of expert peer reviewers, let’s talk a little bit about the elephant in the room: the fact that the journal we’re testing purports to publish peer-reviewed research on the topic of heart surgery.
The problem of deceptive or “predatory” publishing is not new; it has been discussed and debated at length, and it might seem as if there’s not much new to be said about it: as just about everyone in the world of scholarly publishing now knows, a large and apparently growing number of scam artists have created thousands upon thousands of journals that purport to publish rigorously peer-reviewed science, but will, in fact, publish whatever is submitted (good or bad) as long as it’s accompanied by an article processing charge. Some of these outfits go to great expense to appear legitimate and realize significant revenues from their efforts; OMICS (which was subject to a $50 million judgment after being sued by the Federal Trade Commission for deceptive practices) is probably the biggest and most famous of predatory publishing outfits. But most of these outfits are relatively small; many seem to be minimally staffed fly-by-night operations that have invested in little more than the creation of a website and an online payment system. The fact that so many of these “journals” exist and publish so many articles is a testament to either the startling credulity or the distressing dishonesty of scholars and scientists the world over—or, perhaps, both.
But while the issue of predatory publishing, and its troubling implications for the integrity of science and scholarship, is discussed regularly in broad terms within the scholarly-communication community, I want to focus here on one especially concerning aspect of the phenomenon: predatory journals that falsely claim to publish rigorously peer-reviewed science in fields that have a direct bearing on human health and safety.
In order to try to get a general idea of the scope of this issue, I did some searching within Cabell’s Journal Blacklist to see how many journals from such disciplines are listed in that database. My findings were troubling. For example, consider the number of predatory journals found in Cabell’s Blacklist that publish in the following disciplines (based on searches conducted on 25 and 26 November 2019):
[Table: number of Blacklist titles found per discipline]
Obviously, it’s concerning when scholarship or science of any kind is falsely represented as having been rigorously reviewed, vetted, and edited. But it’s equally obvious that not all scholarship or science has the same impact on human health and safety. A fraudulent study in the field of sociology certainly has the capacity to do significant damage—but perhaps not the same kind or amount of damage as a fraudulent study in the field of pediatric anesthesiology, or diagnostic oncology. The fact that Cabell’s Blacklist has identified nearly 4,000 predatory journals in the general field of medicine is certainly cause for very serious concern.
At the risk of offending my hosts, I’ll just add here that this fact leads me to really, really wish that Cabell’s Blacklist were available to the general public at no charge. Recognizing, of course, that a product like this can’t realistically be maintained at zero cost—or anything close to zero cost—this raises an important question: what would it take to make this resource available to all?
I can think of one possible solution. Two very large private funding agencies, the Bill & Melinda Gates Foundation and the Wellcome Trust, have demonstrated their willingness to put their money where their mouths are when it comes to supporting open access to science; both organizations require funded authors to make the published results of their research freely available to all, and allow them to use grant funds to pay the attendant article-processing charges. For a tiny, tiny fraction of their annual spend on research and on open-access article processing charges, either one of these grantmakers could underwrite the cost of making Cabell’s Blacklist freely available. How tiny? I don’t know what Cabell’s costs are, but let’s say, for the sake of argument, that it costs $10 million per year to maintain the Blacklist product, with a modest amount of profit built in. That would represent two tenths of a percent of the Gates Foundation’s annual grantmaking, or 2.3 tenths of a percent of Wellcome’s.
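The percentages above are easy to sanity-check. Here is a minimal sketch of the arithmetic; the $10 million figure is the post's own hypothetical, and the annual grantmaking budgets (roughly $5bn for Gates, $4.3bn for Wellcome) are illustrative assumptions chosen to match the stated percentages, not audited figures.

```python
# Back-of-envelope check: what share of annual grantmaking would a
# hypothetical $10M/year Blacklist subsidy represent?
# Budget figures below are illustrative assumptions, not audited numbers.
BLACKLIST_COST = 10_000_000  # hypothetical annual cost, per the post

assumed_budgets = {
    "Gates Foundation": 5_000_000_000,  # assumed ~$5bn in grants per year
    "Wellcome Trust": 4_300_000_000,    # assumed ~$4.3bn per year
}

for funder, budget in assumed_budgets.items():
    share = BLACKLIST_COST / budget
    print(f"{funder}: {share:.2%} of annual grantmaking")
    # Gates Foundation: 0.20% ... Wellcome Trust: 0.23%
```

Under those assumed budgets, the shares come out to the “two tenths of a percent” and “2.3 tenths of a percent” quoted above.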
This, of course, is money they would then not be able to use to subsidize research directly. But since both funders already commit a much, much larger percentage of their annual grantmaking to APCs, this seems like a redirection of funds that would yield tremendous value for money.
Of course, underwriting a service like Cabell’s Blacklist would entail acknowledging that predatory publishing is real, and a problem. Oddly enough, this is not universally acknowledged, even among those who (one might think) ought to be most concerned about the integrity of the scholcomm ecosystem and about the reputation of open access publishing. Unfortunately, among many members of that ecosystem, APC-funded OA publishing is largely—and unfairly—conflated with predatory publishing.
Well, it took much longer than promised (or expected), but after receiving, over a period of two months, occasional messages telling me that my paper was in the “final peer review process,” I finally received the long-awaited response in late January: “our” paper had been accepted for publication!
Journal Blacklist entry for Journal of Cardiothoracic Surgery and Therapeutics
Over the course of several subsequent weeks I received a galley proof for my review—along with an invoice for an article-processing charge in the amount of $1,100. In my guise as lead author, I expressed shock and surprise at this charge; no one had said anything to me about an APC when my work was solicited for publication. I received a conciliatory note from the editor, explaining that the lack of notice was due to a staff error, and further explaining that the Journal of Cardiothoracic Surgery and Therapeutics is an open-access journal and uses APCs to offset its considerable costs. He said that by paying this fee and allowing publication to go forward I would be ensuring that the article “will be available freely which allows the scientific community to view, download, distribution of an article in any medium (provided that the original work is properly cited) thereby increasing the views of article.” He also promised that our article will be indexed “in Crossref and many other scientific databases.” I responded that I understood the model but had no funds available to pay the fee, and would therefore have to withdraw the paper. “You may consider our submission withdrawn,” I concluded.
Then something interesting happened. My final communication bounced back. I was informed by a system-generated message that my email had been “waitlisted” by a service called Boxbe, and that I would have to add myself to the addressee’s “guest list” in order for it to be delivered. Apparently, the editor no longer wanted to hear from me.
Also interesting: despite my nonpayment of the APC, the article has now been published and can be seen here. It will be interesting to see how long it remains in the journal.
We need to be very clear about one thing here: the problem with my article is not that it represents low-quality science. The problem with my article is that it is utterly incoherent nonsense. Not only is its content entirely plagiarized, it’s so randomly assembled from such disparate sources that it could not possibly be mistaken for an actual study by any informed reader who took the time to read any two of its paragraphs. Furthermore, it was “written” by authors who do not exist, whose names were taken from famous figures in history and literature, and whose institutional affiliations are entirely fictional. (There is no “Brockton State University,” nor is there a “Massapequa University,” nor is there an organization called the “National Clinics of Health.”)
What all of this means is that the fundamental failing of this journal—as it is of all predatory journals—is not its low standards, or the laxness of its peer review and editing. Its fundamental failing is that despite its claims, and despite charging authors for these services, it has no standards at all, performs no peer review, and does no editing. If it did have any standards whatsoever, and if it performed even the most perfunctory peer review and editorial oversight, it would have detected the radical incoherence of my paper immediately.
One might reasonably ask, though: if my paper is such transparently incoherent nonsense, why does its publication pose any danger? No surgeon in the real world will be led by this paper to do anything in an actual surgical situation, so surely there’s no risk of it affecting a patient’s actual treatment in the real world.
This is true of my paper, no doubt. But what the acceptance and publication of my paper demonstrates is not only that the Journal of Cardiothoracic Surgery and Therapeutics will publish transparent nonsense, but also—more importantly and disturbingly—that it will publish anything. Dangerously, this includes papers that may not consist of actual nonsense, but that were flawed enough to be rejected by legitimate journals, or that were written by the employees of device makers or drug companies that have manipulated their data so as to promote their own products, or that were written by dishonest surgeons who have generally legitimate credentials but are pushing crackpot techniques or therapies. The danger illustrated by my paper is not so much that predatory journals will publish literal nonsense; the more serious danger is that they will uncritically publish seriously flawed science while presenting it as carefully-vetted science.
In other words, the defining characteristic of a predatory journal is not that it’s a “low-quality” journal. The defining characteristic of a predatory journal is that it falsely claims to provide quality control while in fact providing none, precisely because real quality control would restrict its revenue flow. This isn’t to say that no legitimate science ever gets published in predatory journals; I’m sure quite a bit does, since there’s no reason why a predatory journal would reject it, any more than it would reject the kind of utter garbage this particular journal has now published under the purported authorship of Jackson X. Pollock. But the appearance of some legitimate science does nothing to resolve the fundamental issue here, which is one of scholarly and scientific fraud.
Such fraud is distressing wherever it occurs. In the context of cardiothoracic surgery—along with all of the other health-related disciplines in which predatory journals currently publish—it’s terrifying.
Recent studies have shown the huge impact that spam emails from predatory journals have on academics’ workflows. Simon Linacre argues that, far from being harmless, they contribute to a wider malaise in academic life.
If I said I have a New Year’s resolution that could save everyone who reads this blog hundreds of dollars in time and effort, as well as enrich everyone’s lives, would you be interested in joining me? There is no catch, no trick, but there is a small degree of effort involved. And it is quite simple: just open up every unsolicited email you receive and either block it or unsubscribe. Your life will improve as a result, guaranteed.

But will such a straightforward, if humdrum, task really make such savings? Well, two recent studies show that the total cost to academia of spam emails is vast. Firstly, this week’s Times Higher Education (THE) reports on a new study estimating that the time wasted opening and deleting spam emails, typically ones from predatory journals, is equal to $1.1bn, rising to over $2bn when all spam email is included. The authors arrive at this figure using the following methodology: take an average figure for the number of targeted spam emails academics receive each day, drawn from a number of prior studies (around five); estimate that each academic spends five seconds dealing with every email; assume the average academic earns $50 an hour; and multiply by the number of academics in the world according to the United Nations. It may sound a bit like a back-of-a-napkin calculation, but to many academics, both the number of emails and the time taken to sift through them may seem significantly undercooked.

Another study, published in the British Medical Journal (BMJ), looked more specifically at the impact of emails received from predatory journal publishers on career development grant awardees. It found that academic spam emails (ASEs) were a significant distraction for academics and that there is an urgent need to mitigate the problem.
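The THE study's napkin math is easy to reproduce. The per-email figures below come from the post itself; the global headcount of academics is my own assumption (roughly 8.4 million, in line with commonly cited UN/UNESCO researcher counts), since the post does not state the number.

```python
# Rough reproduction of the THE estimate of money lost to targeted spam.
EMAILS_PER_DAY = 5       # targeted spam emails per academic per day (from the post)
SECONDS_PER_EMAIL = 5    # time spent dealing with each one (from the post)
HOURLY_WAGE = 50         # average academic wage in USD (from the post)
ACADEMICS = 8_400_000    # assumed global headcount; not stated in the post
DAYS_PER_YEAR = 365

# Hours each academic loses per year, then the global wage-equivalent cost.
hours_per_academic = EMAILS_PER_DAY * SECONDS_PER_EMAIL * DAYS_PER_YEAR / 3600
annual_cost = hours_per_academic * HOURLY_WAGE * ACADEMICS
print(f"~${annual_cost / 1e9:.1f}bn per year")  # → ~$1.1bn per year
```

Under these assumptions the total lands at roughly $1.1bn, matching the headline figure; double the daily email count, as many academics plausibly would, and it passes $2bn.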
The results from a survey of grant awardees showed that almost 90% had a spam filter turned on, yet around half said they received up to 10 spam emails a day, with fully 30% estimating they received between 11 and 20. Some unsolicited emails may of course be legitimate, while others are the result of individuals at some stage signing up to receive emails, usually to gain access to something or when making a purchase. Emails from law-abiding sources such as these can be stopped – it just takes a little time. The same goes for those purchases from Amazon, Gap or eBay where we have used our work email, only to suffer a permanent slew of special offers (don’t worry, we’ve all been there). In these cases, our New Year’s resolution can indeed help cut down the time spent on email and make more time for more meaningful pursuits. However, as the THE piece points out, there is currently very little academics can do to stem the tide of spam from predatory journals. All we can do is become more savvy at identifying them quickly, choose not to open them, and delete them straight away. And in the meantime, hope that someone invents a spam filter that genuinely screens out ASEs without sending important emails from your Dean to your ‘junk’ folder.
This week, Cabells is fortunate enough to connect with colleagues and friends, new and old, across the globe in Lisbon, Portugal at the GBSN 2019 Annual Conference, and in Charleston, South Carolina at the annual Charleston Conference. We greatly value these opportunities to share our experiences and learn from others, and both conference agendas feature industry leaders hosting impactful sessions covering a vast array of thought-provoking topics.
At the GBSN conference in Lisbon, Simon Linacre, Cabells Director of International Marketing and Development, is co-leading the workshop, “Research Impact for the Developing World” which explores ideas to make research more impactful and relevant in local contexts. At the heart of the matter is the notion that unless the global business community is more thoughtful and proactive about the development of research models, opportunities for positively impacting business and management in the growth markets of the future will be lost. We know all in attendance will benefit from Simon’s insight and leadership in working through this important topic.
At the Charleston Conference, a lively and eventful day at the vendor showcase on Tuesday was enjoyed by all and our team was reminded once again how wonderful it is to be a part of the scholarly community. We never take for granted how fortunate we are to have the opportunity to share, learn, and laugh with fellow attendees.
We are always excited to pass along news on the projects we are working on, learn about what we might expect down the road, and consider areas we should focus on going forward. Hearing what is on the collective mind of academia and how we can help move the community forward is what keeps us going. And things are just getting started! With so many important and interesting sessions on the agenda in Charleston, our only regret is that we can’t attend them all!
Cabells is excited to announce the renewal of its partnership with CLOCKSS, the decentralized preservation archive that ensures the long-term survival of scholarly content in digital format. Cabells is pleased to provide complimentary access to the Journal Whitelist and Journal Blacklist databases for an additional two years to CLOCKSS, to further the organizations’ shared goals of supporting and preserving scholarly publications for the benefit of the global research community.
The goal of Cabells is to provide academics with the intelligent data needed for comprehensive journal evaluations to safeguard scholarly communication and advance the dissemination of high-value research. Assisting CLOCKSS in its mission to provide secure and sustainable archives for the preservation of academic publications in their original format is a logical and rewarding collaboration.
“We are proud to renew our partnership with CLOCKSS. Our mission to protect the integrity of scholarly communication goes hand in hand with their work to ensure the secure and lasting preservation of scholarly research,” said Kathleen Berryman, Director of Business Relations with Cabells.
In helping to protect and preserve academic research, Cabells and CLOCKSS are fortunate to play vital roles in the continued prosperity of the scholarly community.
About: Cabells – Since its founding over 40 years ago, Cabells’ services have grown to include the Journal Whitelist and the Journal Blacklist, manuscript preparation tools, and a suite of powerful metrics designed to help users find the right journals, no matter the stage of their career. The searchable Journal Whitelist database covers 18 academic disciplines and more than 11,000 international scholarly publications. The Journal Blacklist is the only searchable database of predatory journals, complete with detailed violation reports. Through continued partnerships with major academic publishers, journal editors, scholarly societies, accreditation agencies, and other independent databases, Cabells provides accurate, up-to-date information about academic journals to more than 750 universities worldwide. To learn more, visit www.cabells.com.
About: CLOCKSS is a not-for-profit joint venture between the world’s leading academic publishers and research libraries whose mission is to build a sustainable, international, and geographically distributed dark archive with which to ensure the long-term survival of Web-based scholarly publications for the benefit of the greater global research community.