Now a fixture in the scholarly communications calendar, Peer Review Week has become one of those pivots around which, for many people, the year revolves. Like children going back to school after the summer break, or the inexorable slowdown ahead of Christmas, this week in mid-September serves as a point to reflect on what has gone before and what should happen next.
Putting aside the near-continuous tumult in world news, for those in university libraries and academic publishing, the year so far has seen two hot topics swim in and out of our collective focus: AI and research integrity. While AI has rarely been out of the headlines, research integrity scandals have come and gone, with major retractions or big news stories grabbing our attention only to fade away again.
Dual focus
However, with the arrival of Peer Review Week, the spotlight falls firmly on both of these slippery topics through the prism of peer review. While peer review was once a byword for the integrity and trustworthiness of research, the clear increase in the use of AI to dupe reviewers and editors into accepting papers has undermined this trust – and provided many a hot take for Peer Review Week. And yet, many would have us believe that AI can solve both these problems and the overloading of reviewers, by providing checks that support the peer review process.
In a session at the ALPSP Conference in Manchester last week, Chris Leonard from Cactus Communications did a great job of presenting the options for publishers when it comes to the use of AI in peer review. Essentially, as a publisher, you have three choices:
- Human-only: pay for ‘professional’ peer reviews to manage the process
- AI + human: use AI detection and ‘prompt libraries’ in-house, with professionals still overseeing the process
- AI-only: generate overviews of the strengths and weaknesses of papers, publish prompts alongside articles, and build in feedback loops
It is fair to say that, compared to, say, two years ago, these suggestions were not dismissed out of hand as they might once have been. However, people still seem unwilling to give AI much more than a walk-on part in peer review, with other proposals published this week favoring tweaks to the system rather than a radical overhaul.
Stick or twist
There was evident frustration at the ALPSP event with what was perceived as reluctance, or even resistance, to embrace AI among publishers; however, the tide may be turning. According to a recent survey of researchers by IOP Publishing, significant numbers now seem more open to the use of AI in peer review. However, the survey also underlines the clear fault lines between those in favor of adopting the new technology to solve the problems of peer review and research integrity more broadly, and those who remain circumspect about its use in peer review or any other academic pursuit.
Perhaps the most significant piece of research published during Peer Review Week was not about peer review at all. A story in Nature highlighted a new detection tool called Pangram, which reportedly produced almost no false positives in testing. This does not mean that all previously undetectable AI-generated text can now be caught, but it may offer publishers, peer reviewers, and faculty members some hope that AI use can at least be policed effectively in the future.
