What really counts for rankings?

University and business school rankings have attracted hate and ridicule in equal measure since they were first developed, and yet, we are told, they enjoy huge popularity with students. Simon Linacre looks at how the status quo could change, thanks in part to some rankings’ own shortcomings.


In a story earlier this month in Times Higher Education (THE), it was reported that the status of a university vis-à-vis sustainability was now the primary consideration for international students, ahead of academic reputation, location, job prospects and even accessibility for Uber Eats deliveries – OK, maybe not the last one. But for those who think students should place such considerations at the top of their lists, this was indeed one of those rare things in higher ed in recent times: a good news story.

But how do students choose such a university? Amazingly, THE produced a ranking just a week later providing students with, you guessed it, a ranking of universities based on their sustainability credentials. Aligned with the UN’s now-ubiquitous Sustainable Development Goals (SDGs), the ranking is now well established, and this year proclaimed the University of Manchester in the UK as the university with the highest impact across all 17 SDGs. Manchester was something of an outlier for the UK, however, with four of the top ten universities based in Australia.

Cynics may point out that such rankings have become an essential part of the marketing mix for outfits such as THE, the Financial Times and QS. Indeed, the latter has faced allegations this week over possible conflicts of interest between its consulting arm and its rankings with regard to universities in Russia – a charge which QS denies. However, perhaps most concerning is the imbalance that has always existed between the importance institutions place on rankings and the transparency and/or relevance of the rankings themselves – a perpetual case of the tail wagging the dog.

Take, for instance, the list of 50 journals used by the Financial Times as the basis for one of its numerous criteria for assessing business schools in its annual rankings. The list is currently under review, having remained unchanged since 2016, when it added just five journals to the 45 used before that date, itself an increase from the 40 used in the 2000s. In other words, despite the massive changes seen in business and business education – from Enron to the global financial crisis to globalisation to the COVID pandemic – there has been barely any change in the journals used to judge whether publications from business schools are high quality.

The FT’s Global Education Editor Andrew Jack was questioned about the relevance of the FT50 and the rankings in general in Davos in 2020, and answered that to change the criteria would endanger the comparability of the rankings. This intransigence by the FT and other actors in higher education and scholarly communications was in part the motivation behind Cabells’ pilot study with the Haub School of Business at St Joseph’s University in the US to create a new rating based on journals’ output intensity in terms of the SDGs. Maintaining the status quo also reinforces paradigms and restricts diversity, marginalizing those in vulnerable and alternative environments.

If students and authors want information on SDGs and sustainability to inform their education choices, it is incumbent on the industry to supply it in as many ways as possible – and not to worry about how well the numbers stack up against a world we left behind long ago, a world that some agencies seem determined to cling to despite its evident shortcomings.

No more grist to the mill

Numerous recent reports have highlighted the problems caused by published articles that originated from paper mills. Simon Linacre asks what these 21st-century mills do and what other dangers could lurk in the future.


For those of us who remember life before the internet, and have witnessed its all-encompassing influence rise over the years, there is a certain irony in the idea of recommending a Wikipedia page as a trusted source of information on academic research. In the early days of Jimmy Wales’ huge project, whilst it was praised for its utility and breadth, there was always a knowing nod when referring someone there as if to say ‘obviously, don’t believe everything you read on there.’ Stories about fake deaths and hijacked pages cemented its reputation as a useful, but flawed source of information.

However, in recent years those knowing winks seem to have subsided, and in a way, the site has become rather boring and reliable. We no longer hear about Dave Grohl dying prematurely, and for most of us, such is our level of scepticism that we can probably suss out whether anything on the site fails the smell test. As a result, it has become perhaps what it always wanted to be – the first port of call for quick information.

That said, one would hesitate to recommend it to one’s children as a sole source of information, and any researcher would think twice before citing it in their work. Hence the irony in recommending the following Wikipedia page as a first step towards understanding the dangers posed by paper mills: https://en.wikipedia.org/wiki/Research_paper_mill. It is the perfect primer, briefly describing what paper mills are and citing up-to-date sources from Nature and COPE on the impact they have had on scholarly publishing and how to deal with them.

For the uninitiated, paper mills are third-party organisations set up to create articles that individuals can submit to journals to gain a publication without having to do much – or even any – of the original research. Related to their cousin the essay mill, which serves undergraduates, paper mills may have generated thousands of articles that have subsequently been published in legitimate research journals.

What the reports and guidance from Nature and COPE suggest is that while many paper mills have sprung up in China and are used by Chinese authors, recent changes in Chinese government policy, moving away from strict publication-counting as a performance measure, could mitigate the problem. In addition, high-profile cases shared by publishers such as Wiley and Sage point to some success in identifying transgressions, leading to multiple retractions (albeit rather slowly). The problem such articles pose is clear: they peddle junk or fake science that could cause real harm if taken at face value by other researchers or the general public. What’s more, there is the worrying possibility of paper mills increasing their sophistication to evade detection, ultimately eroding the faith people have always had in peer-reviewed academic research. If Wikipedia can turn around its reputation so effectively, then perhaps it’s not too late for the scholarly publishing industry to act in concert to head off a similar problem.

Rewriting the scholarly* record books

Are predatory journals to academic publishing what PEDs are to Major League Baseball?


The 2021 Major League Baseball season is underway and for fans everywhere, the crack of the bat and pop of the mitt have come not a moment too soon. America’s ‘National Pastime’ is back and for at least a few weeks, players and fans for all 30 teams have reason to be optimistic (even if your team’s slugging first baseman is already out indefinitely with a partial meniscus tear…).

In baseball, what is known as the “Steroid Era” is thought to have run from the late ‘80s through the early 2000s. During this period, many players (some for certain, some suspected) used performance-enhancing drugs (PEDs), which resulted in an offensive explosion across baseball. As a result, home run records revered by generations of fans were smashed and rendered meaningless.

It wasn’t just star players looking to become superstars who were using PEDs; it was also the fringe players, the ones struggling to win or keep jobs as big-league ball players. They saw other players around them playing better, more often, and with fewer injuries. This resulted in promotions, from the minor leagues to the major leagues or from bench player to starter, and job security, in the form of multi-year contracts.

So there existed a professional ecosystem in baseball where those willing to skirt the rules could take a relatively quick and easy route to the level of production necessary to succeed and advance in their industry – shortcuts that would enhance their track record, improve their chances of winning and keeping jobs, and help build their professional profiles to ‘superstar’ levels, greatly increasing their compensation as a result.

Is this much different than the situation for researchers in today’s academic publishing ecosystem?

Some authors – called “parasite authors” by Dr. Serihy Kozmenko in a guest post for The Source – deliberately “seek symbiosis with predatory journals” in order to boost their publication records, essentially amassing publication statistics on steroids. Other authors, those unwilling to use predatory journals as a simple path to publication, must operate in the same system but under a different set of rules, which makes it harder to generate the same levels of production. In this situation, how many authors who would normally avoid predatory journals will be drawn to them, just to keep up with those who use them to publish easily and frequently?

Is it time for asterisks on CVs?

At academic conferences, on message boards, and in other forums for discussing issues in scholarly communication, a familiar refrain is that predatory journals are easy to identify and avoid, so predatory publishing, in general, is not a big problem for academic publishing. While there is some truth to the claim that many, though not all, predatory journals are relatively easy to spot and steer clear of, this idea ignores the existence of parasite authors. These researchers are unconcerned with the quality of the journal; they are simply attempting to publish enough papers for promotion or tenure purposes.

Parasite authors are also likely to be undeterred by the fact that although many predatory journals are indexed in platforms such as Google Scholar, articles published in them have low visibility because of the algorithms those engines use to rank results. Research published in predatory journals is not easily discovered, not widely read, and not heavily cited, if at all. The work is marginalized and ultimately, the reputation of the researcher is damaged.

There are many reasons why an author might consider publishing in a predatory journal. The ‘publish or perish’ system places pressure on researchers at all career stages – how much of the blame for this should be placed on universities? In addition, researchers from the Global South fight an uphill battle when dealing with Western publishing institutions. Lacking the same resources, training, language skills, and overall opportunities as their Western counterparts, researchers from the developing world often see no choice but to use predatory journals (the majority of which are located in their part of the world) to keep pace with their fellow academics’ publishing activity.

To a large degree, Major League Baseball has been able to remove PEDs from the game, mostly due to increased random testing and more severe penalties for those testing positive. Stemming the flow of predatory publishing activity in academia will not be so straightforward. At the very least, the scholarly community must increase monitoring and screening for predatory publishing activity (with the help of resources like Cabells’ Predatory Reports) and institute penalties for those found to have used predatory journals as publishing outlets. As in baseball, there will always be those looking to take shortcuts to success; having a system in place to protect those who do want to play by the rules should be of paramount importance.