A Happy New Year to everyone, and if you can't quite believe it is January already, then recent news on the impact of AI on scholarly communications will not help with that feeling that things are not quite real.
Towards the end of 2025, reports started to emerge of 'imaginary journals.' We have become used to hearing that a major problem with generative AI is that it can hallucinate things and present them as factual, but creating entire fantasy journals and citations surprised many. First reported in Scientific American, following an announcement by the International Committee of the Red Cross, journals such as the 'Journal of International Relief' were identified as being cited by authors despite not being actual publications. We are used to fake or predatory journals, of course, but those do at least exist on the internet. These new titles, by contrast, appear nowhere other than in lists of references produced using AI.
Curiouser and curiouser
Darker still, reports emerged earlier in the year of authors being invented: articles resembling existing published work were submitted to journals, yet no trace of their supposed authors could be found. In an article in Times Higher Education, an editor suggested that these articles could be part of a larger scheme, probing peer review and plagiarism-checking systems, or harvesting data for a wider, even more sinister purpose.
While this may be just wild speculation, the author rightly links these articles to a disruption of research and publishing norms caused by the widespread availability of powerful generative AI tools. There have been many reports that AI has been a major factor in a spike in submissions over the last year or two, and in some cases preprint servers have had to limit postings because of the increased rate and fears over 'AI slop.' Indeed, such is the level of concern in some quarters that the issue has even been reported in Rolling Stone magazine, which linked the problem to that of fake case law being cited in court by lawyers who had used AI to build their briefs.
Through the looking glass
The real impact of these issues will come when wholly fabricated research appears online and causes harm because readers act on it. There have already been cases where made-up research has been cited dozens of times, with the named authors having no idea their names had been attached to work they had nothing to do with.
So how do we deal with all of this fakery, and is there a better way to understand what is happening? It is clear that verified sources of research publications will only become more important as AI is ever more widely used, with huge ramifications for publishers, authors, institutions, and funders as the lines blur between human-based research, hybrid research, and slop that is wholly AI-generated. One take is that this is simply the logical conclusion of the 'junkification of research,' the title of a recent paper which argues that as soon as research became commoditized and incentivized, it was inevitable that it would be diminished.
Happy New Year?

Sharing the underlying data that supports empirical findings; replicating results in the same and in other settings; editorial reviews; two-level blind reviews in which even the reviewers do not know who the other reviewers are; random audits of methodology; severe punishments for transgressive behavior: all of these may alleviate the problem, but none will eliminate it, because human ingenuity always finds escape routes for survival and advancement.