If you are anything like me, you will have been experiencing serious FOMO over the last week as colleagues and acquaintances descended on Frankfurt for the annual publishing social, interrupted intermittently by the Book Fair. If most LinkedIn posts are to be believed, a good time was had by all, though some of that reporting may be a little OTT.

The acronym that has dominated the Frankfurt Book Fair for years, at least for academic publishers, has been OA. While Open Access is still much discussed – especially now, in the middle of International Open Access Week – by all accounts it has been completely overshadowed by the acronym that now looms large over all our lives: AI.

10% club

Artificial intelligence is now all-pervasive in its influence on our lives, filling news headlines, our leisure time, and our working days. It is estimated that one in ten people worldwide uses AI on a weekly basis, while as much as half of all online content may now be AI-generated, up from just a few percent five years ago. Another acronym comes to mind when confronted with the scale of growth in AI adoption: WTF?

For the scholarly communications industry in particular, this rapid growth poses serious existential problems. Concerns about authorship, quality, research integrity, and bias have dogged the many undoubted opportunities that AI presents for publishers. While we heard at the recent ALPSP conference that there has been a positive shift towards using AI in peer review, some recent stories have further muddied the waters on AI use as an integral part of the publishing process.

Turing point

Firstly, at a recent meeting at the Royal Society in London, there were calls for a new benchmark for assessing AI capabilities. The Turing Test – a slightly vague but understandable comparison of AI with human cognitive ability – is simply outdated, with the most advanced AI products now easily able to fool people into thinking they are communicating with another human. That obviously applies to communication through published academic articles as well.

More sinister, however, was the news this week from China, where it is estimated that as many as 40,000 research articles have been commissioned from a single paper mill, all written using AI. The scale of potential fraud is simply mind-boggling: China-based authors published well over a million articles in 2024, and their rate of publication is growing while other countries fall behind. Just as worrying is how the fraud appears to be perpetrated, with paper mills claiming to use university teachers while actually employing unskilled labor, and disguising themselves as AI-detection services offering tools that supposedly prevent unethical use of AI.

Open ended

The upshot of these developments is still unclear, but the nervousness reportedly emanating from publishers in Frankfurt is understandable. For Open Access, significant problems with research and publishing integrity could have a substantial impact. According to a major survey conducted by Springer Nature in 2024, not only was China the single biggest source of published OA articles, but its growth rate was much higher than that of other major producers of academic content, such as the US.

This is not just China's problem, of course, but one the whole world needs to grapple with. If the world's biggest driver of both research publications and OA succumbs to AI-related integrity issues, however, the pollution of the scholarly record we have already seen is going to get a lot worse very quickly.
