The Role of Generative Artificial Intelligence in Peer Review

This year, as the research community’s trust in the peer review system’s efficacy and efficiency has wavered, we’ve seen a sharp rise in proposed and implemented changes to the standard peer review process. As such, it’s not surprising that the community-selected theme for the 2023 Peer Review Week is “Peer Review and The Future of Publishing.” When considered alongside the runner-up topics, “Peer Review and Technology” and “Ethical Issues in Peer Review,” it’s clear that the medical community is uncertain about many of these changes, especially changes that involve new and unproven technology. In this article, we’ll narrow our focus to a specific topic that embodies much of the potential (both positive and negative) of these changes: the role of generative artificial intelligence (AI) in peer review.

Artificial Intelligence in Peer Review

Generative AI’s potential role in peer review is complex, with the capacity for great time-saving efficiency as well as for severe ethical violations and misinformation. In theory, generative AI platforms could be used throughout the peer review process, from the initial drafting to the finalization of a decision letter or a reviewer’s critiques. An editor or reviewer could input a manuscript (either in whole or in individual sections) into a generative AI platform and then prompt the tool for either an overall review of the paper or a specific analysis, such as evaluating the reproducibility of the article’s methods or the clarity of its language. This approach, however, brings with it a range of potential benefits and drawbacks.
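
To make that workflow concrete, below is a minimal, hypothetical sketch of how a reviewer might prompt a general-purpose large language model to critique a single manuscript section. The model name, prompt wording, and client library are illustrative assumptions, not an endorsed procedure; as discussed later in this article, pasting unpublished manuscripts into a third-party tool raises exactly the confidentiality concerns that have led some funders to prohibit this practice.

```python
# Hypothetical sketch only: prompting a general-purpose LLM to critique one
# section of a manuscript. The model name and prompt text are assumptions for
# illustration, and sending unpublished work to an external service raises the
# confidentiality issues described elsewhere in this article.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

methods_section = """(excerpt of the manuscript's Methods section would go here)"""

prompt = (
    "You are assisting a peer reviewer. Evaluate the following Methods section "
    "for reproducibility and clarity of language. List specific concerns and "
    "suggested clarifications.\n\n" + methods_section
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any capable LLM could stand in
    messages=[{"role": "user", "content": prompt}],
)

# The output is only a draft critique; a human reviewer would still need to
# verify every claim before using any of it.
print(response.choices[0].message.content)
```

Even in this best-case sketch, the tool produces a draft for a human to vet rather than a finished review, which is why the arguments below focus on AI as a supportive tool rather than a replacement.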

Arguments in support of generative AI in peer review include:

  • Automation of time-intensive tasks, thereby reducing the extensive turnaround windows for manuscript evaluation
  • The rich potential of AI as a supportive tool, not as a total replacement for editors or reviewers
  • Use of AI to draft and refine decision letters and reviewer comments

Conversely, arguments in opposition to generative AI in peer review include:

  • Potential for unreliable, factually incorrect output
  • Discrimination resulting from large language models’ tendency toward biases
  • Non-confidentiality of valuable research data and proprietary information
  • Murky status of autogenerated content as plagiarism

Current State of Generative AI in Peer Review

The debate over whether generative AI should be permissible in peer review has raged for most of 2023, and in recent months, key funders have announced their stances. Foremost among them is the National Institutes of Health (NIH), the largest funder of medical research in the world. In June 2023, the NIH banned the use of generative AI during peer review, citing confidentiality and security as primary concerns; a Security, Confidentiality and Nondisclosure Agreement stipulating that AI tools are prohibited was then sent to all NIH peer reviewers. The Australian Research Council quickly followed with a similar ban. Other funding bodies, such as the United States’ National Science Foundation and the European Research Council, currently have working groups developing position statements on generative AI use in peer review.

Publishers, however, are in a unique position. Some journals have proposed adopting generative AI tools to augment the current peer review process and to automate some tasks currently completed by editors or reviewers, which could meaningfully shorten the time required to complete a thorough peer review. So far, few publishers have posted public position statements on the use of generative AI during peer review; an exception is Elsevier, which has stated that reviewers of books and commissioned content are not permitted to use generative AI due to confidentiality concerns. The future of generative AI integration into journals’ manuscript evaluation workflows remains unclear.

Understanding the 2023 Theme Beyond Generative AI

Beyond the proposed role of generative AI and natural language processing in peer review, the 2023 theme of “Peer Review and The Future of Publishing” encompasses a wide range of current and anticipated shifts in the publishing process. These changes can have a domino effect, swaying the community’s opinion on generative AI and potentially moving the needle on its use during peer review. Other related considerations include:

Each of these trends will affect peer review in crucial but unclear ways, which has led to a heightened sense of uncertainty regarding peer review throughout the medical research community. The 2023 theme for Peer Review Week aims to hold space for these concerns and allow stakeholders to collaboratively discuss the most effective routes forward to ensure that peer review is an effective and efficient process.

Update: A Journal Hijacking

Editor’s Note: This is an updated version of an article originally posted in August 2021.


As members of our journal evaluation team work their way around the universe of academic and medical publications, one of the more brazen and egregious predatory publishing scams they encounter is the hijacked, or cloned, journal.  One recent case of this scheme uncovered by our team, while frustrating in its flagrance, also offered some levity by way of its ineptitude. But make no mistake, hijacked journals are one of the more nefarious and injurious operations carried out by predatory publishers. They cause extensive damage not just to the legitimate journal that has had its name and brand stolen, but to medical and academic research at large, their respective communities of researchers and funders, and, ultimately, society.

There are a few different variations on the hijacked journal, but all involve a counterfeit operation stealing the title, branding, ISSN, and/or domain name of a legitimate journal to create a duplicate, fraudulent version of it. They do this to lure unsuspecting (or not) researchers into submitting their manuscripts (on any topic, not just those covered by the original, legitimate publication) with promises of rapid publication in exchange for a fee.

A recent case of journal hijacking investigated by our team involved the legitimate journal Tierärztliche Praxis, a veterinary journal out of Germany published in two series, one for small and one for large animal practitioners:

The legitimate website for Tierärztliche Praxis

The journal was recently hijacked by a counterfeit operation using the same name:

The website for the hijacked version of Tierärztliche Praxis

One of the more immediate problems caused by cloned journals is how difficult they make it for scholars to discover and engage with the legitimate journal, as shown in the image below of Google search results for “Tierärztliche Praxis.” The first several search results refer to the fake journal, including the top result which links to the fake journal homepage:

“Tierärztliche Praxis” translates to “veterinary practice” in English, and the legitimate journal is of course aimed at veterinary practitioners. Not so for the fake Tierärztliche Praxis “journal” (whose “publishers” didn’t bother, or don’t care, to find out what “tierärztliche” means), which claims to be a multidisciplinary journal covering all subjects and accepts articles on anything from anyone willing to pay to be published:

Aside from a few of the more obvious signs of deception found with the cloned journal (a poorly made website with duplicate text and poor grammar, an overly simple submission process, and no defined scope of topics covered, to name a few), this journal’s “archive” of (stolen) articles takes things to a new level:

Above: the original article, stolen from Tuexenia, vs. the hijacked version

A few things to note:

  • The stolen article shown in the pictures above is not even from the original journal that is being hijacked, but from a completely different journal, Tuexenia.
  • The white rectangle near the top left of the page covering the original journal’s title; the hijacked journal’s title and ISSN poorly superimposed in the header of each page; and the hijacked volume information and page number added to the footer (without bothering to redact the original article’s page numbers).
  • The FINGER at the bottom left of just about every other page of this stolen article.

Predatory Reports listing for the hijacked version of Tierärztliche Praxis

Sadly, not all hijacked or otherwise predatory journals are this easy to spot. Medical and academic researchers must be hyper-vigilant when it comes to selecting a publication to which they submit their work. Refer to Cabells Predatory Reports criteria to become familiar with the tactics used by predatory publishers. Look at journal websites with a critical eye and be mindful of some of the more obvious red flags such as promises of fast publication, no information on the peer review process, dead links or poor grammar on the website, or pictures (with or without fingers) of obviously altered articles in the journal archives.

Bad medicine

Recent studies have shown that academics can have a hard time identifying some predatory journals, especially academics from high-income countries or medical faculties. Simon Linacre argues that this is not surprising, given that these researchers are often the primary targets of predatory publishers, but a forthcoming product from Cabells could help them.


A quick search of PubMed for predatory journals will throw up hundreds of results – over the last year I would estimate there are on average one or two papers published each week on the site (and you can sign up for email alerts on this and other scholarly communication issues at the estimable Biomed News site). The papers tend to fall into two categories – editorial or thought pieces on the blight of predatory journals in a given scientific discipline, or original research on the phenomenon. While the former are necessary to raise the profile of the problem among researchers, they do little to advance the understanding of such journals.

The latter, however, can provide illuminating details about how predatory journals have developed, and in doing so offer lessons in how to combat them. Two such articles were published last week in the field of medicine. In the first paper, ‘Awareness of predatory publishing’, Panjikaran and Mathew surveyed over 100 authors who had published articles in predatory journals. While a majority of authors (58%) were unaware of such journals, among those who said they could recognize them, nearly half of the respondents from high-income countries (HICs) failed a recognition test, as did nearly a quarter of those from low- and middle-income countries (LMICs). The result, therefore, was a worrying lack of understanding of predatory journals among authors who had already published in them.

The second article, entitled ‘Faculty knowledge and attitudes regarding predatory open access journals: a needs assessment study’, was authored by Swanberg, Thielen and Bulgarelli. In it, they surveyed both regular and medical faculty members of a university to ascertain whether they understood what is meant by predatory publishing. Almost a quarter (23%) said they had not heard of the term before, but of those who had, 87% said they were confident in their ability to assess journal quality. However, when tested with journals from their own fields, only 60% could do so, with scores even lower for medical faculty.

Both papers call for greater education and awareness programs to support academics in dealing with predatory journals, and it is here that Cabells can offer some good news. Later this year, Cabells intends to launch a new medical journal product that identifies good-quality journals in the vast medical field. Alongside our current products covering most other subject disciplines, the new medical product will enable academics, university administrators, librarians, tenure committees and research managers to validate research sources and the publication outputs of faculty members. They will also still be backed up, of course, by the Cabells Journal Blacklist, which now numbers over 13,200 predatory, deceptive or illegitimate journals. Indeed, in the paper by Swanberg et al., the researchers asked faculty members themselves what support they would like to see from their institution, and the number one answer was a “checklist to help assess journal quality.” This is exactly the kind of feedback Cabells has received over the years that has driven us to develop the new product for medical journals, and hopefully it will help support good publishing decisions in the future alongside our other products.


PS: A kind request – Cabells is undertaking a review of the current branding for ‘The Journal Whitelist’ and ‘The Journal Blacklist’. As part of this process, we’d like to gather feedback from the research community to understand how you view these products, and which of the proposed brand names you prefer.

Our short survey should take no more than ten minutes to complete, and can be taken here.

As thanks for your time, you’ll have the option to enter into a draw to win one of four Amazon gift vouchers worth $25 (or your local equivalent). More information is available in the survey.

Many thanks in advance for your valuable feedback!

Simon Linacre