This year, as the research community's trust in the efficacy and efficiency of the peer review system has wavered, we've seen a sharp rise in proposed and implemented changes to the standard peer review process. As such, it's not surprising that the community-selected theme for the 2023 Peer Review Week is "Peer Review and The Future of Publishing." When taken in context with the runner-up topics, "Peer Review and Technology" and "Ethical Issues in Peer Review," it's clear that the medical community is uncertain about many of these changes, especially those that involve new and unproven technology. In this article, we'll narrow our focus to a specific topic that embodies much of the potential, both positive and negative, of these changes: the role of generative artificial intelligence (AI) in peer review.
Artificial Intelligence in Peer Review
Generative AI's potential role in peer review is complex, with the capacity for great time-saving efficiency as well as for severe ethical violations and misinformation. In theory, generative AI platforms could be used throughout the peer review process, from the initial drafting to the finalization of a decision letter or a reviewer's critiques. An editor or reviewer could input a manuscript (either in whole or as individual sections) into a generative AI platform and then prompt the tool for either an overall review of the paper or for a specific analysis, such as evaluating the reproducibility of the article's methods or the clarity of its language. However, this approach carries a myriad of potential benefits and drawbacks. Arguments in favor of generative AI in peer review include:
- Automation of time-intensive tasks, thereby shortening the long turnaround times for manuscript evaluation
- The rich potential of AI as a supportive tool, not as a total replacement for editors or reviewers
- Use of AI to draft and refine decision letters and reviewer comments
Conversely, arguments in opposition to generative AI in peer review include:
- Potential for unreliable, factually incorrect output
- Discrimination resulting from large language models’ tendency toward biases
- Non-confidentiality of valuable research data and proprietary information
- Murky status of autogenerated content as plagiarism
Current State of Generative AI in Peer Review
The debate on whether generative AI should be permissible for peer review has raged for most of 2023, and in recent months, key funders have announced their stances. Foremost among them is the National Institutes of Health (NIH), the largest funder of medical research in the world. In June of 2023, the NIH banned the use of generative AI during peer review, citing confidentiality and security as primary concerns; a Security, Confidentiality and Nondisclosure Agreement stipulating that AI tools are prohibited was then sent to all NIH peer reviewers. The Australian Research Council followed quickly afterwards with a similar ban. Other funding bodies, such as the United States' National Science Foundation and the European Research Council, currently have working groups developing position statements regarding generative AI use for peer review.
Publishers, however, occupy a unique position. Some journals have proposed adopting generative AI tools to augment the current peer review process and to automate some tasks that are currently completed by editors or reviewers, which could meaningfully shorten the time required to complete a thorough peer review. Currently, few publishers have posted public position statements regarding the use of generative AI during peer review; an exception is Elsevier, which has stated that book and commissioned content reviewers are not permitted to use generative AI due to confidentiality concerns. The future of generative AI integration into journals' manuscript evaluation workflows remains unclear.
Understanding the 2023 Theme Beyond Generative AI
Beyond the proposed role of generative AI and natural language processing in peer review, the 2023 theme of "Peer Review and The Future of Publishing" encompasses a wide range of current and anticipated shifts in the publishing process. These changes can have a domino effect, swaying the community's opinion on generative AI and potentially moving the needle regarding its use during peer review. Other related considerations include:
- The White House Office of Science and Technology Policy's Nelson Memo, which was published on August 25, 2022, and requires all research funded by taxpayer dollars to be published open access
- The end of Plan S’s financial support for transformative arrangements in 2024, essentially enacting the core tenet of Plan S and encouraging publishers toward open access publications
- Industry shifts in peer review processes, including, somewhat contradictorily, an increase in both double-blind peer review (i.e., neither authors nor reviewers knowing one another’s identities) and open peer review (i.e., both authors and reviewers knowing one another’s identities, sometimes with the reviewers’ names being publicly listed with the published article)
Each of these trends will affect peer review in crucial but unclear ways, which has led to a heightened sense of uncertainty regarding peer review throughout the medical research community. The 2023 theme for Peer Review Week aims to hold space for these concerns and allow stakeholders to collaboratively discuss the most effective routes forward to ensure that peer review is an effective and efficient process.