What really counts for rankings?

University and business school rankings have provoked hate and ridicule in equal measure since they were first developed, and yet we are told they enjoy huge popularity with students. Simon Linacre looks at how the status quo could change, thanks in part to some rankings’ own shortcomings.


In a story earlier this month in Times Higher Education (THE), it was reported that the status of a university vis-à-vis sustainability was now the primary consideration for international students, ahead of academic reputation, location, job prospects and even accessibility for Uber Eats deliveries – OK, maybe not the last one. But for those who think students should place such considerations at the top of their lists, this was indeed one of those rare things in higher ed in recent times: a good news story.

But how do students choose such a university? Amazingly, THE produced a ranking just a week later providing students with, you guessed it, a ranking of universities based on their sustainability credentials. Aligned with the UN’s now-ubiquitous Sustainable Development Goals (SDGs), the ranking is now well established and this year proclaimed the University of Manchester in the UK as having the highest impact across all 17 SDGs. Manchester was something of an outlier for the UK, however, with four of the top ten universities based in Australia.

Cynics may point out that such rankings have become an essential part of the marketing mix for outfits such as THE, the Financial Times and QS. Indeed the latter has faced allegations this week over possible conflicts of interest between its consulting arm and its rankings with regard to universities in Russia – a charge which QS denies. However, perhaps most concerning is the imbalance that has always existed between the importance placed on rankings by institutions and the transparency and/or relevance of the rankings themselves. A perpetual case of the tail wagging the dog.

Take, for instance, the list of 50 journals used by the Financial Times as the basis for one of its numerous criteria for assessing business schools in its annual rankings. The list is currently under review after remaining unchanged since 2016, and even then it only added five journals to the 45 it had used previously, itself an increase from the 40 used in the 2000s. In other words, despite the massive changes seen in business and business education – from Enron to the global financial crisis to globalisation to the COVID pandemic – there has been barely any change in the journals used to judge whether publications from business schools are high quality.

The FT’s Global Education Editor Andrew Jack was questioned about the relevance of the FT50 and the rankings in general at Davos in 2020, and answered that changing the criteria would endanger the comparability of the rankings. This intransigence by the FT and other actors in higher education and scholarly communications was in part the motivation behind Cabells’ pilot study with the Haub School of Business at St Joseph’s University in the US to create a new rating based on journals’ output intensity in terms of the SDGs. Maintaining the status quo also reinforces existing paradigms and restricts diversity, marginalizing those in vulnerable and alternative research environments.

If students and authors want information on SDGs and sustainability to inform their education choices, it is incumbent on the industry to try to supply it in as many ways as possible – and not to worry about how well the numbers stack up against a world we left behind long ago, a world that some agencies seem to want to cling to despite its evident shortcomings.

No more grist to the mill

Numerous recent reports have highlighted the problems caused by published articles that originated from paper mills. Simon Linacre asks what these 21st-century mills do and what other dangers could lurk in the future.


For those of us who remember life before the internet, and have witnessed its all-encompassing influence rise over the years, there is a certain irony in the idea of recommending a Wikipedia page as a trusted source of information on academic research. In the early days of Jimmy Wales’ huge project, whilst it was praised for its utility and breadth, there was always a knowing nod when referring someone there as if to say ‘obviously, don’t believe everything you read on there.’ Stories about fake deaths and hijacked pages cemented its reputation as a useful, but flawed source of information.

However, in recent years those knowing winks seem to have subsided, and in a way, it has become rather boring and reliable. We no longer hear about Dave Grohl dying prematurely and for most of us, such is our level of scepticism we can probably suss out if anything on the site fails to pass the smell test. As a result, it has become perhaps what it always wanted to be – the first port of call for quick information.

That said, one would hesitate to recommend it to one’s children as a sole source of information, and any researcher would think twice before citing it in their work. Hence the irony in recommending the following Wikipedia page as a first step towards understanding the dangers posed by paper mills: https://en.wikipedia.org/wiki/Research_paper_mill. It is the perfect primer, describing briefly what paper mills are and citing up-to-date sources from Nature and COPE on the impact they have had on scholarly publishing and how to deal with them.

For the uninitiated, paper mills are third-party organisations set up to create articles that individuals can submit to journals to gain a publication without having to do much – or even any – of the original research. Linked to their cousin the essay mill, which serves undergraduates, paper mills may have generated thousands of articles that have subsequently been published in legitimate research journals.

What the reports and guidance from Nature and COPE seem to suggest is that while many paper mills have sprung up in China and are used by Chinese authors, recent changes in Chinese government policy – moving away from strict publication-counting as a performance measure – could mitigate the problem. In addition, high-profile cases shared by publishers such as Wiley and Sage point to some success in identifying transgressions, leading to multiple retractions (albeit rather slowly). The problem such articles present is clear: they inject junk or fake science that could cause real harm if taken at face value by other researchers or the general public. What’s more, there is the worrying possibility of paper mills increasing their sophistication to evade detection, ultimately eroding the faith people have always had in peer-reviewed academic research. If Wikipedia can turn around its reputation so effectively, then perhaps it’s not too late for the scholarly publishing industry to act in concert to head off a similar problem.

Rewriting the scholarly* record books

Are predatory journals to academic publishing what PEDs are to Major League Baseball?


The 2021 Major League Baseball season is underway and for fans everywhere, the crack of the bat and pop of the mitt have come not a moment too soon. America’s ‘National Pastime’ is back and for at least a few weeks, players and fans for all 30 teams have reason to be optimistic (even if your team’s slugging first baseman is already out indefinitely with a partial meniscus tear…).

In baseball, what is known as the “Steroid Era” is thought to have run from the late ‘80s through the early 2000s. During this period, many players (some proven, some suspected) used performance-enhancing drugs (PEDs), resulting in an offensive explosion across baseball. As a result, home run records revered by generations of fans were smashed and rendered meaningless.

It wasn’t just star players looking to become superstars who were using PEDs; it was also the fringe players, the ones struggling to win or keep jobs as big-league ball players. They saw other players around them playing better, more often, and with fewer injuries. This resulted in promotions – from the minor leagues to the major leagues, or from bench player to starter – and job security, in the form of multi-year contracts.

So there now existed a professional ecosystem in baseball where those willing to skirt the rules could take a relatively quick and easy route to the level of production necessary to succeed and advance in their industry: shortcuts that would enhance their track record, improve their chances of winning and keeping jobs, and help build their professional profiles to ‘superstar’ levels, greatly increasing their compensation as a result.

Is this much different than the situation for researchers in today’s academic publishing ecosystem?

Some authors – called “parasite authors” by Dr. Serihy Kozmenko in a guest post for The Source – deliberately “seek symbiosis with predatory journals” in order to boost their publication records, essentially amassing publication statistics on steroids. Other authors, those not willing to use predatory journals as an easy path to publication, must operate in the same system but under a different set of rules, which makes it more difficult to generate the same level of production. In this situation, how many authors who would normally avoid predatory journals will be drawn to them, just to keep up with those who use them to publish easily and frequently?

Is it time for asterisks on CVs?

At academic conferences, on message boards, and in other forums for discussing issues in scholarly communication, a familiar refrain is that predatory journals are easy to identify and avoid, so predatory publishing in general is not a big problem for academic publishing. While it is true that many, though not all, predatory journals are relatively easy to spot and steer clear of, this idea ignores the existence of parasite authors. These researchers are unconcerned about the quality of the journal; they are simply attempting to publish enough papers for promotion or tenure purposes.

Parasite authors are also likely to be undeterred by the fact that although many predatory journals are indexed in platforms such as Google Scholar, articles published in these journals have low visibility due to the algorithms used to rank research results in these engines. Research published in predatory journals is not easily discovered, not widely read, and not heavily cited, if at all. The work is marginalized and ultimately, the reputation of the researcher is damaged.

There are myriad reasons why an author might consider publishing in a predatory journal, some born out of desperation. The ‘publish or perish’ system places pressure on researchers in all career stages – how much blame for this should be placed on universities? In addition, researchers from the Global South are fighting an uphill battle when dealing with Western publishing institutions. Lacking the same resources, training, language skills, and overall opportunities as their Western counterparts, researchers from the developing world often see no other choice but to use predatory journals (the majority located in their part of the world) to keep pace with their fellow academics’ publishing activity.

To a large degree, Major League Baseball has been able to remove PEDs from the game, mostly due to increased random testing and more severe penalties for those testing positive. Stemming the flow of predatory publishing activity in academia will not be so straightforward. At the very least, the scholarly community must increase monitoring and screening for predatory publishing activity (with the help of resources like Cabells’ Predatory Reports) and institute penalties for those found to have used predatory journals as publishing outlets. As in baseball, there will always be those looking to take shortcuts to success; having a system in place to protect those who do want to play by the rules should be of paramount importance.

Opening up the SDGs

While the United Nations Sustainable Development Goals (SDGs) offer a framework for global communities to tackle the world’s biggest challenges, there are still huge barriers to overcome in ensuring research follows the desired path. This week, Simon Linacre reflects on the ‘push’ and ‘pull’ effects in publishing and one organization trying to refine a fragmented infrastructure.

Recently, Cabells has been able to further its commitment to the UN SDGs by signing up to the SDG Publishers Compact and sharing details of its pilot journal rating system, developed with the Haub School of Business at Saint Joseph’s University, which assesses journals in terms of their relevance to the SDGs. Part of the reason Cabells is working with the SDGs – aside from a simple belief that they are a force for good – is that they represent an opportunity to offer reward and recognition for researchers who are using their talents to make the world, in some small way, a better place.

The scholarly communications industry, like many others, relies on push and pull dynamics to maintain its growth trajectory. The push elements include the availability of citations and other metrics to judge performance, recognition for publishing in certain journals, and various community rewards for well-received research. On the flip side, pull elements include opportunities shared by academic publishers, a facility to record research achievements, and an opportunity to share findings globally. This is how the publishing world turns round.

This dynamic also helps to explain why potentially disruptive developments – such as Open Access or non-peer-reviewed journals and platforms – may fail to gain initial traction, and why they may require additional support in order to become embedded with academics and their mode of operations. Going back to the SDGs, we can see how their emergence could similarly be stymied by the existing power play in scholarly publishing – where are the push and pull factors guiding researchers to focus on SDG-related subjects?

I recently spoke to Stephanie Dawson, CEO at ScienceOpen, which is a discovery platform that seeks to enable academics to enhance their research in an open access environment and offer publishers ‘context building services’ to improve the impact of their outputs. ScienceOpen is very much involved with the UN SDGs, recently creating a number of content containers for SDG-related articles. By offering curative opportunities, post-publication enhancements, and article-level data services, ScienceOpen is most definitely doing its part to support a pull strategy in the industry.

Stephanie says, “We began this project working with the University College London (UCL) Library to showcase their outputs around the UN SDGs. Because we believe there needs to be broad community buy-in, we also wanted to encourage researchers globally to highlight their contributions to the Sustainable Development Goals by adding keywords and author summaries on ScienceOpen, regardless of the journal they published in and demanding publisher engagement for new works.”

And this is what Cabells is also trying to achieve – by offering new metrics that can be used to guide authors to the optimal publishing option (push) and highlighting traditionally overlooked journals with low citations as destination publications (pull), we hope we can change the conversation from ‘Is this a good journal?’ to ‘Does this research matter?’. And we think reframing the context as ScienceOpen is doing is an important first step.

Spotlight on Turkey

Turkey has been making great strides in recent years as a force to be reckoned with on the international research stage. However, it seems to have encountered more problems with predatory journals than other countries have. Simon Linacre looks at the problems facing the country and highlights some resources available to help Turkish scholars.

A simple Google search of “predatory journals Turkey” provides quick insight into the concerns academic researchers there have regarding these deceptive publications. Numerous articles fill the first pages of results highlighting the particular issue Turkey seems to share with a few other countries such as India and Nigeria. Alongside, however, are anonymous websites offering unsupported claims about predatory publications. Validated information appears to be thin on the ground.

Luckily, the Turkish government understands there is a problem and in the Spring of 2019 it decided to take action. According to Professor Zafer Koçak in his article ‘Predatory Publishing and Turkey’, the Turkish Council of Higher Education decreed that “scientific papers published in predatory journals would not be taken into account in academic promotion and assignment. Thus, Turkey has taken the step of becoming one of the first countries to implement this in the world”.

According to its website, the Turkish Council of Higher Education believed the phenomenon was increasing, and was doing so internationally. A number of articles have been published recently that back this up – for example here and here – and there is the potential for Turkish authors to get caught up in this global swell due to their increasing publication output.

To support Turkish authors and institutions, Cabells has translated its information video on its Journalytics and Predatory Reports products, as well as translating this page, into Turkish. Hopefully, the availability of independently verified information on predatory journals and greater dialogue will improve the conditions for Turkey and its scholars to continue to grow their influence in global research.



Turkey has been making great strides in recent years as a force to be reckoned with on the international research stage. However, it seems to encounter more problems with predatory journals than other countries do. Simon Linacre looks at the problems facing the country and highlights the resources available to help Turkish scholars.

A simple Google search for “predatory journals Turkey” quickly shows the concerns academic researchers have about these deceptive publications. The first pages of results are filled with articles indicating that Turkey shares this problem with a few other countries, such as India and Nigeria. Some of the results, however, are anonymous websites offering unsupported claims about predatory publications. Verified, trustworthy information is rare.

Fortunately, the Turkish government is aware there is a problem and decided in the spring of 2019 to take action. According to Professor Zafer Koçak’s article ‘Predatory Publishing and Turkey’, the Council of Higher Education ruled that “scientific articles published in predatory journals will not be taken into account in academic promotion. Turkey has thus taken the step of becoming one of the first countries in the world to put this into effect”.

According to its website, the Council of Higher Education believes predatory publishing is increasing in both national and international settings. Many articles supporting this have been published recently – examples can be seen here and here – and Turkish authors risk being caught up in this rapidly growing global predation as their publication output rises. To support Turkish authors and institutions, Cabells has translated the informational video for its Journalytics and Predatory Reports products, as well as this page, into Turkish. We hope that the availability of independently verified information about predatory journals, together with stronger communication, will improve conditions for Turkey and its academics to continue growing their influence in global research.

Cabells launches new SDG Impact Intensity™ journal rating system in partnership with Saint Joseph’s University’s Haub School of Business

Following hot on the heels of Cabells’ inclusion in the United Nations SDG Publishers Compact, we are also announcing an exclusive partnership with Saint Joseph’s University (SJU) for a new metric assessing journals and their engagement with the UN’s Sustainable Development Goals (SDGs). Simon Linacre explains the origins of the collaboration and how the new metric could help researchers, funders, and universities alike.

If you can remember way back to the halcyon days when we went to academic conferences, you will recall that one of the many benefits we enjoyed was meeting a kindred spirit – someone who shared your thoughts and ideas, and whom you looked forward to seeing again at another event. These international friendships also had the benefit of enabling you to develop something meaningful with your work, and went some way to justifying the time and expense the trips often entailed.

I was lucky enough to have one such encounter at the GBSN annual conference in Lisbon, Portugal, at the end of 2019, when I met Professor David Steingard from Saint Joseph’s University in the US. He was at the event to present some of the work he had been doing at SJU on its SDG Dashboard – an interactive visualization and data analytics tool demonstrating how university programmes align with the 17 SDGs. At the gala dinner I sought Dr. Steingard out and asked him something that had been buzzing inside my head ever since I heard him speak:

What if we applied your SDG reporting methodology to journals?

An animated conversation followed, which continued on the bus back to the hotel, at the conference the next day, and ultimately in the lobby of a swanky hotel in Davos (there are no other kinds of hotels there, to be honest) a year ago. From then on, small teams at SJU and Cabells have been working on a methodology for analysing and assessing the extent to which a journal has engaged with the UN’s SDGs through the articles it has published over time. This has resulted in the new metric we are releasing shortly – SDG Impact Intensity™ – the first academic journal rating system for evaluating how journals contribute to positively impacting the SDGs.

Using data collated from Cabells’ Journalytics database and running it through SJU’s AI-based methodology for identifying SDG relevance, SDG Impact Intensity™ provides a rating of up to three ‘SDG rings’ to summarise the SDG relevance of articles published in each journal over a five-year period (2016–2020). For the first pilot phase of development, we chose 50 of the most storied business and management journals used for the Financial Times Global MBA ranking, as well as 50 of the most dynamic journals from Cabells’ Journalytics database focused on sustainability, ethics, public policy and environmental management.
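The analysis itself is SJU’s AI-based methodology and is not public, but the final aggregation step – turning article-level SDG relevance into a journal-level ring rating – can be imagined along the following lines. This is a purely illustrative sketch: the function name, the 0.5 relevance cut-off, and the ring thresholds are invented for the example and are not Cabells’ or SJU’s actual parameters.

```python
# Illustrative only: maps hypothetical article-level SDG relevance
# scores (0.0-1.0) for a journal's articles over a window such as
# 2016-2020 to a 0-3 'ring' rating. All thresholds are invented.

def sdg_ring_rating(article_scores: list[float]) -> int:
    """Summarise per-article SDG relevance as 0-3 rings."""
    if not article_scores:
        return 0
    # Share of articles judged SDG-relevant above a cut-off.
    relevant = sum(1 for s in article_scores if s >= 0.5)
    intensity = relevant / len(article_scores)
    if intensity >= 0.6:
        return 3
    if intensity >= 0.3:
        return 2
    if intensity > 0.0:
        return 1
    return 0

# Example: a journal where 7 of 10 sampled articles look SDG-relevant.
print(sdg_ring_rating([0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.52, 0.1, 0.0, 0.2]))  # → 3
```

The point of the toy model is simply that the rating is driven by the proportion of a journal’s output engaging the SDGs, not by citations – which is why it can diverge so sharply from citation-based rankings.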

It may come as no surprise to learn that the so-called top journals lagged far behind their counterparts when it came to their level of SDG focus. For example, none of the top 26 journals in the pilot phase are from the FT50, and only four of the top ten are from the world’s five biggest academic publishers. In contrast, the journals traditionally ranked at the very top of management journal rankings over the past 50 years – in disciplines such as marketing, accounting, finance and management – languish at the bottom of the pilot phase ratings. While these results are hardly surprising, they suggest that although governments, funders and society as a whole have started to embrace the SDGs, this has yet to filter through to what is published in journals traditionally regarded as high impact. There has long been criticism that such titles are favoured by business school management structures over more innovative, real-world-relevant journals, and this very much seems to be borne out by the results of Cabells’ research with SJU. The very notion of what academic journal ‘quality’ means is fundamentally challenged when one considers how journals can make an ‘impact’ by engaging with the SDGs.

Cabells and SJU are hoping to further their partnership and broaden their coverage of journals to enable more researchers and other interested parties to understand the type of research their target journals are publishing. With more information and greater understanding of the SDGs at hand, it is to be hoped we see a move away from a narrow, singular focus on traditional quality metrics towards broader encouragement of research and publication that generates a positive impact – bettering the human condition and environmentally sustaining the Earth, as detailed in the SDGs. In turn, we should see academia and scholarly communications play their part in ensuring the UN’s 2030 Agenda for Sustainable Development moves forward that much quicker.

Beware the known unknowns

Following a recent study showing an alarming lack of knowledge and understanding of predatory journals in China, Simon Linacre looks at the potential impact of the world’s biggest producer of research succumbing to the threat of deceptive publications.

China’s continued growth in research publications is surely one of the most important developments in modern research and scholarly communications. It passed the US in 2018, and all indications suggest it has increased its lead since then, propelled by huge investment in research by the Chinese government.

Cabells sought to reflect on this success when it published the list of top Chinese-language management journals in December 2020 following a collaboration with AMBA. However, research on that project also highlighted the significant risk for Chinese scholars of publishing in the wrong journals. Until last year, academics tended to be pushed towards – and recognised for – publishing in Impact Factor journals. This policy has now changed, however, with more focus on Chinese-language journals as well as other international titles. The concern then arises that some scholars may be lured into publishing in predatory journals following the shift in policy.

This concern has been reinforced by the publication of the article ‘Chinese PhD Students’ Perceptions of Predatory Journals’ (2021) by Jiayun Wang, Jie Xu and Dianyou Chen in the Journal of Scholarly Publishing. Their study looks at the attitudes of over 300 Chinese doctoral students towards predatory journals, making three key findings:

  1. In STEM subjects, students regularly confused predatory journals with Open Access (OA) journals
  2. In Humanities and Social Science subjects, students tended to only identify predatory journals in the Chinese language, but not in English
  3. While the majority of respondents said they had no intention of submitting to predatory journals (mainly due to the potential harm it could do to their reputation), the few that would do so cited quick publication times and easy acceptance as motivating factors.

While there are limitations to the Wang et al article due to its relatively small sample and restricted scope, it is clear there is at least the potential for widespread use and abuse of the predatory publishing model in China, in parallel to what has been observed to a greater or lesser degree around the rest of the world. In conclusion, the authors state:

“PhD candidates in China generally have insufficient knowledge about predatory journals, and also generally disapprove of publishing in them.” (2021, p. 102)

This lack of knowledge is referred to time and time again in articles about predatory publishing, of which there is now a small library to choose from. While there is considerable debate on how to define predatory journals, how to identify them and even how to score them, there is a gap in understanding how publication in them can be prevented, particularly among PhD students and early career researchers (ECRs). Some studies on this aspect of predatory publishing would be very welcome indeed.

Cabells becomes a member of United Nations SDG Publishers Compact

Cabells is proud to announce its acceptance as a full member of the United Nations SDG Publishers Compact, becoming one of the first U.S. organizations and non-primary publishers globally to be awarded membership. Cabells joined the initiative as part of its ongoing commitment to support research and publications focused on sustainable solutions.

The SDG Publishers Compact was launched at the end of 2020 as a way to stimulate action in the scholarly communications community. It was launched in collaboration with the International Publishers Association (IPA) with the aim of speeding up progress towards the UN’s 17 Sustainable Development Goals (SDGs) by 2030.

As a signatory of the Publishers Compact, Cabells commits to developing sustainable practices and playing a key role in its networks and communities as a champion of the SDGs during what is becoming known as the ‘Decade of Action’ from 2020 to 2030. As such, Cabells is developing a number of solutions designed to help identify SDG-relevant journals and research for authors, librarians, funders, and other research-focused organizations.

Cabells’ Director of International Marketing & Development, Simon Linacre, said: “The UN SDGs have already done a remarkable job in directing funding and research to the most important questions facing our planet at this time. Becoming part of the UN SDG Publishers Compact will inspire Cabells into further playing our part in meeting these grand challenges.”

For more information, visit www.cabells.com or read the UN’s original press release.

Predatory journals vs. preprints: What’s the difference?

Working towards publication in a legitimate journal, however circuitous the route, is of course a much better path than publishing in an illegitimate one – but Simon Linacre examines why the question is still a useful one to consider.

A blog post this week in The Geyser pointed out the problems surrounding version control of the same article on multiple preprint servers and on the F1000 platform.

TL;DR? It isn’t pretty.

The article used as an example is unquestionably a legitimate study relating to the coronavirus pandemic, and as such is a small but important piece in the jigsaw being built around science’s pandemic response. That this article has yet to be validated – and thus confirmed as a piece that fits the COVID-19 jigsaw – is something that will presumably happen once it is published in a recognized peer-reviewed journal.

However, this does raise the following rather thorny question: how is the article any better served fragmented on different preprint servers and publishing platforms than it would be having been published as a single entity in a predatory journal?

I am being facetious here – working towards a legitimate publication, however circuitous the route, is far better than publishing in an illegitimate journal. However, comparing the two options is not as strange as one might think, and perhaps offers some guidance for authors uncertain about where to publish their research in the first place.

Firstly, early career researchers (ECRs), while often offered very little direction when it comes to publication ethics and decision-making, are understandably worried about sharing their data and findings on preprint servers for fear of being ‘scooped’ by other researchers who copy their results and get published first. This is a legitimate fear, and is one explanation why a researcher, although unfamiliar with a journal, might submit their research for a low fee and quick turnaround.

Secondly, ECRs or more experienced researchers may be incentivised by their institutions to simply achieve a publication without any checks on the type of journal they publish in. As such, they need a journal to validate their publication – even if the journal itself has not been validated – which is something preprints or non-journal platforms are unable to provide.

Finally, while recent research has shown that just over half of articles published in predatory journals receive no citations, just under half did receive them, and authors may prefer one sole accessible source for their research to multiple versions across different preprint servers. This is not to say that preprints can’t receive citations – indeed, Google Scholar reveals 22 citations to the article above from its original posting on arXiv – but the perception may be that only journals can deliver citations, and they will therefore be the aim for some authors.

Of course, authors should know the very real difference between a predatory journal and a preprint, but the evidence of the 14,000+ journals in Cabells’ Predatory Reports database and the millions of spam emails sent daily by illegitimate journals points to at least some researchers falling for the same tricks and continuing to line the pockets of predatory publishers. While research publishing options remain as varied and as complex as they are – and while higher education institutions and funders simply assume every researcher has an effective publishing strategy – as many will fall into the predatory trap as always have.

Book review – Gaming the Metrics: Misconduct and Manipulation in Academic Research

The issues of gaming metrics and predatory publishing undoubtedly go hand-in-hand, outputs of the same system that requires academic researchers the world over to sing for their supper in some form or other. However, the two practices are often treated separately, almost as if there were no link at all, so editors Biagioli and Lippman are to be congratulated for bringing them together under the same roof in their book Gaming the Metrics: Misconduct and Manipulation in Academic Research (MIT Press, 2020).

The book is a collection of chapters covering the whole gamut of wrongheaded – or just plain wrong – publication decisions made by authors the world over on where to publish the fruits of their research. This ‘submission decision’ is unenviable, as it inevitably shapes academic careers to a greater or lesser degree. The main reason why authors make poor decisions is laid firmly at the door of the various ‘publish or perish’ systems which seek to quantify authors’ outputs with a view to… well, the reason why outputs are quantified is never really explained. However, the reason why such quantification should be a non-starter is well argued by Michael Power in Chapter 3, and by Barbara M. Kehm (Ch. 6) in terms of the ever-popular university rankings. Even peer review comes under attack from Paul Wouters (Ch. 4), but as with the other areas any solutions are either absent or, in the case of Wouters, proffered with minimal detail or real-world context.

Once into the book, any author would quickly realize that their decision to publish is fraught with difficulty, with predatory publishers lurking on the internet ready to entice articles and APCs from them. As such, any would-be author would be well advised to heed the call ‘caveat scriptor’ and read this book before sending off their manuscript to any journal.

That said, there is also a case for advising ‘caveat lector’ before would-be authors read the book, as there are areas where additional context would greatly help in addressing the problems of gaming metrics and academic misconduct. When it comes to predatory journals, there is a good deal of useful information in several of the later chapters, especially the case studies in Chapters 7 and 15, which detail a suspiciously prolific Czech author and a sting operation, respectively.

Indeed, these cases provide the kind of context whose absence is perhaps the single biggest failing of the book: its narrow academic lens doesn’t quite capture the wider picture of why gaming metrics – and the scholarly communications system that enables it – is ethically wrong, both for those who perpetrate it and arguably for the architects of the systems themselves. As with many academic texts that seek to tackle societal problems, the unwillingness to get dirt under the fingernails in pursuit of understanding what’s really going on simply distances the reader from the problem at hand.

As a result, after reading Gaming the Metrics, one is likely simply to shrug one’s shoulders in apathy at the plight of authors and their institutions, whereas a great deal more impact might have been achieved with a less academic approach and more case studies and insights into the damage done by predatory publishing practices. After all, the problem with gaming the system is that, for those who suffer, it is anything but a game.

Gaming the Metrics: Misconduct and Manipulation in Academic Research, edited by Mario Biagioli and Alexandra Lippman (MIT Press, February 21, 2020), ISBN: 978-0262537933.