No laughing matter

The latest meme to sweep Twitter this week has been a satirical look at typical journal articles. Simon Linacre introduces Cabells’ own take on the theme and reflects on the impact memes can have on our shared conscience.


We all like memes, right? Those social media nuggets that we can all relate to and laugh at – a form of in-joke that doesn’t require being in a room with other people, which under current circumstances has meant a kind of gold rush for this form of humor. Whether it is the boyfriend looking over his shoulder at another woman or the excerpt from the film Downfall with Hitler going berserk, the number of situations and news items that lend themselves to this form of parody is seemingly endless.

So, when the meme spotlight fell on our own corner of the scholarly publishing world, we couldn’t help but join in and adapt the scientific paper meme to predatory journals (see image). To be honest, it wasn’t too difficult to think of 12 journal titles that highlighted the problems predatory publishing causes, and a whole series of memes could easily be created to underscore the joke that is the predatory journal phenomenon.

It’s easy to spot the themes we chose to lampoon, but however familiar we become with predatory journal tropes, new publications and journals keep emerging – as testified by the total number of journals listed in Cabells’ Predatory Reports hitting 14,500 this week. Among the issues we put under the spotlight in the graphic are the unethical and the unaware authors publishing in predatory titles, the ease with which poor research or plagiarized content can be published, and some of the poor excuses offered by those who end up publishing in dodgy journals.

However, underneath the tomfoolery there is a serious point to be made. A recent op-ed in The Atlantic took the opportunity to highlight not just the shared joy and geekiness of the scientific paper meme, but also the existential dread it spotlights. As the article expertly points out, while academics recognize the hamster-in-a-wheel absurdity the meme represents, they cannot help but see themselves in the wheel, unable to stop running. Some will just shrug their shoulders and move on to the next piece of clickbait; for others, there is little consolation in the humor and plenty of angst to keep in check if they are to preserve their sanity.

When it comes to predatory journals, from a pure eyeballs perspective we can see that articles and social media posts about the often bizarre world of predatory publishing get the most traction – such as the fact that one predatory journal lists Yosemite Sam on its editorial board. And yet there is always a serious point behind these fun stories: predatory journals can make an unholy mess of scientific research, causing millions of funding dollars to be wasted and allowing junk or rank bad science to contaminate legitimate published research. This is the real punchline, and it rings pretty hollow sometimes.

What really counts for rankings?

University and business school rankings have induced hate and ridicule in equal measure since they were first developed, and yet we are told they enjoy huge popularity with students. Simon Linacre looks at how the status quo could change, thanks in part to some rankings’ own shortcomings.


In a story earlier this month in Times Higher Education (THE), it was reported that the status of a university vis-à-vis sustainability was now the primary consideration for international students, ahead of academic reputation, location, job prospects and even accessibility for Uber Eats deliveries – OK, maybe not the last one. But for those who think students should place such considerations at the top of their lists, this was indeed one of those rare things in higher ed in recent times: a good news story.

But how do students choose such a university? Amazingly, just a week later THE provided students with, you guessed it, a ranking of universities based on their sustainability credentials. Aligned with the UN’s now-ubiquitous Sustainable Development Goals (SDGs), the ranking is now well established, and this year proclaimed the University of Manchester in the UK as the university with the highest impact across all 17 SDGs – although Manchester was somewhat of an outlier for the UK, with four of the top ten universities based in Australia.

Cynics may point out that such rankings have become an essential part of the marketing mix for outfits such as THE, the Financial Times and QS. Indeed, the latter has faced allegations this week over possible conflicts of interest between its consulting arm and its rankings with regard to universities in Russia – a charge which QS denies. Perhaps most concerning, however, is the imbalance that has always existed between the importance institutions place on rankings and the transparency and relevance of the rankings themselves: a perpetual case of the tail wagging the dog.

Take, for instance, the list of 50 journals used by the Financial Times as the basis for one of its numerous criteria for assessing business schools in its annual rankings. The list is currently under review, having not changed since 2016 – and even then it only added five journals to the 45 used previously, itself an expansion from the 40 used in the 2000s. In other words, despite the massive changes seen in business and business education – from Enron to the global financial crisis to globalisation to the COVID pandemic – there has been barely any change in the journals used to judge whether business schools’ publications are high quality.

The FT’s Global Education Editor Andrew Jack was questioned about the relevance of the FT50 and the rankings in general at Davos in 2020, and answered that changing the criteria would endanger the comparability of the rankings. This intransigence by the FT and other actors in higher education and scholarly communications was part of the motivation behind Cabells’ pilot study with the Haub School of Business at Saint Joseph’s University in the US to create a new rating based on journals’ output intensity in terms of the SDGs. Maintaining the status quo also reinforces existing paradigms and restricts diversity, marginalizing those in vulnerable and alternative environments.

If students and authors want information on SDGs and sustainability to inform their education choices, it is incumbent on the industry to try to supply it in as many ways as possible – and not to worry about how well the numbers stack up against a world we left behind long ago, a world that some agencies seem to want to cling to despite its evident shortcomings.

No more grist to the mill

Numerous recent reports have highlighted the problems caused by published articles that originated from paper mills. Simon Linacre asks what these 21st-century mills do and what other dangers could lurk in the future.


For those of us who remember life before the internet, and who have witnessed its all-encompassing influence rise over the years, there is a certain irony in the idea of recommending a Wikipedia page as a trusted source of information on academic research. In the early days of Jimmy Wales’ huge project, whilst it was praised for its utility and breadth, there was always a knowing nod when referring someone there, as if to say ‘obviously, don’t believe everything you read on there.’ Stories about fake deaths and hijacked pages cemented its reputation as a useful but flawed source of information.

However, in recent years those knowing winks seem to have subsided and, in a way, the site has become rather boring and reliable. We no longer hear about Dave Grohl dying prematurely, and most of us are sceptical enough to suss out anything on the site that fails the smell test. As a result, it has become perhaps what it always wanted to be – the first port of call for quick information.

That said, one would hesitate to recommend it to one’s children as a sole source of information, and any researcher would think twice before citing it in their work. Hence the irony in recommending the following Wikipedia page as a first step towards understanding the dangers posed by paper mills: https://en.wikipedia.org/wiki/Research_paper_mill. It is the perfect primer, briefly describing what paper mills are and citing up-to-date sources from Nature and COPE on the impact they have had on scholarly publishing and how to deal with them.

For the uninitiated, paper mills are third-party organisations set up to create articles that individuals can submit to journals to gain a publication without having to do much – or even any – of the original research. Linked to their cousin the essay mill, which serves undergraduates, paper mills may have generated thousands of articles that have subsequently been published in legitimate research journals.

What the reports and guidance from Nature and COPE seem to suggest is that while many paper mills have sprung up in China and are used by Chinese authors, recent changes in Chinese government policy, moving away from strict publication-counting as a performance measure, could mitigate the problem. In addition, high-profile cases shared by publishers such as Wiley and Sage point to some success in identifying transgressions, leading to multiple retractions (albeit rather slowly). The problem such articles present is clear – they push junk or fake science that could lead to numerous harms if taken at face value by other researchers or the general public. What’s more, there is the worrying possibility of paper mills increasing their sophistication to evade detection, ultimately eroding the faith people have always had in peer-reviewed academic research. If Wikipedia can turn around its reputation so effectively, then perhaps it’s not too late for the scholarly publishing industry to act in concert and head off a similar problem.

Rewriting the scholarly* record books

Are predatory journals to academic publishing what PEDs are to Major League Baseball?


The 2021 Major League Baseball season is underway and for fans everywhere, the crack of the bat and pop of the mitt have come not a moment too soon. America’s ‘National Pastime’ is back and for at least a few weeks, players and fans for all 30 teams have reason to be optimistic (even if your team’s slugging first baseman is already out indefinitely with a partial meniscus tear…).

In baseball, what is known as the “Steroid Era” is thought to have run from the late ’80s through the early 2000s. During this period, many players (some for certain, some suspected) used performance-enhancing drugs (PEDs), which resulted in an offensive explosion across baseball. As a result, home run records revered by generations of fans were smashed and rendered meaningless.

It wasn’t just star players looking to become superstars who were using PEDs; it was also the fringe players, the ones struggling to win or keep jobs as big-league ball players. They saw other players around them playing better, more often, and with fewer injuries. This resulted in promotions, from the minor leagues to the major leagues or from bench player to starter, and job security, in the form of multi-year contracts.

So, there now existed a professional ecosystem in baseball where those willing to skirt the rules could take a relatively quick and easy route to the level of production necessary to succeed and advance in their industry – shortcuts that would enhance their track records, improve their chances of winning and keeping jobs, and help build their professional profiles to ‘superstar’ levels, greatly increasing compensation as a result.

Is this much different than the situation for researchers in today’s academic publishing ecosystem?

Some authors – called “parasite authors” by Dr. Serihy Kozmenko in a guest post for The Source – deliberately “seek symbiosis with predatory journals” in order to boost their publication records, essentially amassing publication statistics on steroids. Other authors, those not willing to use predatory journals as a simple path to publication, must operate in the same system, but under a different set of rules that makes it more difficult to generate the same level of production. In this situation, how many authors who would normally avoid predatory journals will be drawn to them, just to keep up with those who use them to publish easily and frequently?

Is it time for asterisks on CVs?

At academic conferences, on message boards, and in other forums for discussing issues in scholarly communication, a familiar refrain is that predatory journals are easy to identify and avoid, so predatory publishing, in general, is not a big problem for academic publishing. While it is true that many, though not all, predatory journals are relatively easy to spot and steer clear of, this argument ignores the existence of parasite authors – researchers unconcerned with the quality of the journal because they are simply attempting to publish enough papers for promotion or tenure purposes.

Parasite authors are also likely to be undeterred by the fact that although many predatory journals are indexed in platforms such as Google Scholar, articles published in these journals have low visibility due to the algorithms used to rank results in these engines. Research published in predatory journals is not easily discovered, not widely read, and not heavily cited, if cited at all. The work is marginalized and, ultimately, the reputation of the researcher is damaged.

There are many reasons why an author might consider publishing in a predatory journal. The ‘publish or perish’ system places pressure on researchers at all career stages – how much blame for this should be placed on universities? In addition, researchers from the Global South are fighting an uphill battle when dealing with Western publishing institutions. Lacking the same resources, training, language skills, and overall opportunities as their Western counterparts, researchers from the developing world often see no choice but to use predatory journals (the majority located in their part of the world) to keep pace with their fellow academics’ publishing activity.

To a large degree, Major League Baseball has been able to remove PEDs from the game, mostly due to increased random testing and more severe penalties for those testing positive. Stemming the flow of predatory publishing activity in academia will not be so straightforward. At the very least, the scholarly community must increase monitoring and screening for predatory publishing activity (with the help of resources like Cabells’ Predatory Reports) and institute penalties for those found to have used predatory journals as publishing outlets. As in baseball, there will always be those looking to take shortcuts to success; having a system in place to protect those who do want to play by the rules should be of paramount importance.

Opening up the SDGs

While the United Nations Sustainable Development Goals (SDGs) offer a framework for global communities to tackle the world’s biggest challenges, there are still huge barriers to overcome in ensuring research follows the desired path. This week, Simon Linacre reflects on the ‘push’ and ‘pull’ effects in publishing and one organization trying to refine a fragmented infrastructure.

Recently, Cabells has been able to further its commitment to the UN SDGs by signing up to the SDG Publishers Compact and sharing details of its pilot journal rating system, developed with the Haub School of Business at Saint Joseph’s University, which assesses journals in terms of their relevance to the SDGs. Part of the reason Cabells is working with the SDGs – aside from a simple belief that they are a force for good – is that they represent an opportunity to offer reward and recognition to researchers who are using their talents to make the world, in some small way, a better place.

The scholarly communications industry, like many others, relies on push and pull dynamics to maintain its growth trajectory. The push elements include the availability of citations and other metrics to judge performance, recognition for publishing in certain journals, and various community rewards for well-received research. On the flip side, pull elements include opportunities shared by academic publishers, a facility to record research achievements, and an opportunity to share findings globally. This is how the publishing world turns round.

This dynamic also helps to explain why potentially disruptive developments – such as Open Access or non-peer-reviewed journals and platforms – may fail to gain initial traction, and why they may require additional support in order to become embedded with academics and their mode of operations. Going back to the SDGs, we can see how their emergence could similarly be stymied by the existing power play in scholarly publishing – where are the push and pull factors guiding researchers to focus on SDG-related subjects?

I recently spoke to Stephanie Dawson, CEO at ScienceOpen, which is a discovery platform that seeks to enable academics to enhance their research in an open access environment and offer publishers ‘context building services’ to improve the impact of their outputs. ScienceOpen is very much involved with the UN SDGs, recently creating a number of content containers for SDG-related articles. By offering curative opportunities, post-publication enhancements, and article-level data services, ScienceOpen is most definitely doing its part to support a pull strategy in the industry.

Stephanie says, “We began this project working with the University College London (UCL) Library to showcase their outputs around the UN SDGs. Because we believe there needs to be broad community buy-in, we also wanted to encourage researchers globally to highlight their contributions to the Sustainable Development Goals by adding keywords and author summaries on ScienceOpen, regardless of the journal they published in and demanding publisher engagement for new works.”

And this is what Cabells is also trying to achieve – by offering new metrics that can be used to guide authors to the optimal publishing option (push) and highlighting traditionally overlooked journals with low citations as destination publications (pull), we hope we can change the conversation from ‘Is this a good journal?’ to ‘Does this research matter?’. And we think reframing the context like ScienceOpen is doing is an important first step.

Spotlight on Turkey

Turkey has been making great strides in recent years as a force to be reckoned with on the international research stage. However, it seems to have encountered more problems than other countries with regard to predatory journals. Simon Linacre looks at the problems facing the country and highlights some resources available to help Turkish scholars.

A simple Google search of “predatory journals Turkey” provides quick insight into the concerns academic researchers there have regarding these deceptive publications. Numerous articles fill the first pages of results highlighting the particular issue Turkey seems to share with a few other countries such as India and Nigeria. Alongside, however, are anonymous websites offering unsupported claims about predatory publications. Validated information appears to be thin on the ground.

Luckily, the Turkish government understands there is a problem and in the Spring of 2019 it decided to take action. According to Professor Zafer Koçak in his article ‘Predatory Publishing and Turkey’, the Turkish Council of Higher Education decreed that “scientific papers published in predatory journals would not be taken into account in academic promotion and assignment. Thus, Turkey has taken the step of becoming one of the first countries to implement this in the world”.

According to its website, the Turkish Council of Higher Education believed the phenomenon was increasing, and was doing so internationally. A number of articles have been published recently that back this up – for example here and here – and there is the potential for Turkish authors to get caught up in this global swell due to their increasing publication output.

To support Turkish authors and institutions, Cabells has translated its information video on its Journalytics and Predatory Reports products, as well as this page, into Turkish. Hopefully, the availability of independently verified information on predatory journals and greater dialogue will improve the conditions for Turkey and its scholars to continue to grow their influence in global research.

Cabells launches new SDG Impact Intensity™ journal rating system in partnership with Saint Joseph’s University’s Haub School of Business

Following hot on the heels of Cabells’ inclusion in the United Nations SDG Publishers Compact, we are also announcing an exclusive partnership with Saint Joseph’s University (SJU) for a new metric assessing journals and their engagement with the UN’s Sustainable Development Goals (SDGs). Simon Linacre explains the origins of the collaboration and how the new metric could help researchers, funders, and universities alike.

If you can remember back to the halcyon days when we went to academic conferences, you will remember that one of the many benefits we enjoyed was meeting a kindred spirit – someone who shared your thoughts and ideas, and whom you looked forward to seeing again at another event. These international friendships also had the benefit of enabling you to develop something meaningful in your work, and went some way to justifying the time and expense the trips often entailed.

I was lucky enough to have one such encounter at the GBSN annual conference in Lisbon, Portugal at the end of 2019, when I met Professor David Steingard from Saint Joseph’s University in the US. He was at the event to present some of the work he had been doing at SJU on its SDG Dashboard – an interactive visualization and data analytics tool demonstrating how university programmes align with the 17 SDGs. At the gala dinner I sought Dr. Steingard out and asked him something that had been buzzing around my head ever since I heard him speak:

What if we applied your SDG reporting methodology to journals?

An animated conversation followed, continuing on the bus back to the hotel, at the conference the next day, and ultimately in the lobby of a swanky hotel in Davos (there are no other kinds of hotel there, to be honest) a year ago. Since then, small teams at SJU and Cabells have been working on a methodology for analysing and assessing the extent to which a journal has engaged with the UN’s SDGs through the articles it has published over time. The result is the new metric we are releasing shortly – SDG Impact Intensity™ – the first academic journal rating system for evaluating how journals contribute to positively impacting the SDGs.

Using data collated from Cabells’ Journalytics database and running it through SJU’s AI-based methodology for identifying SDG relevance, SDG Impact Intensity™ provides a rating of up to three ‘SDG rings’ to summarise the SDG relevance of articles published in a journal over a five-year period (2016-2020). For the first pilot phase of development, we chose 50 of the most storied business and management journals used for the Financial Times Global MBA ranking, as well as 50 of the most dynamic journals from Cabells’ Journalytics database focused on sustainability, ethics, public policy and environmental management.
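
To make the mechanics a little more concrete, here is a minimal sketch of how a rating of this kind might be computed, assuming each article carries a 0–1 SDG-relevance score of the sort an AI classifier could output. The class, function, thresholds and sample figures are all illustrative assumptions, not the actual SDG Impact Intensity™ methodology.

```python
# Illustrative sketch only: the names, thresholds and data layout are
# assumptions, not Cabells' actual SDG Impact Intensity methodology.
from dataclasses import dataclass


@dataclass
class Article:
    year: int
    sdg_relevance: float  # 0.0-1.0, e.g. the output of an AI classifier


def sdg_rings(articles: list[Article], start: int = 2016, end: int = 2020) -> int:
    """Map a journal's average article-level SDG relevance to 0-3 'rings'."""
    window = [a.sdg_relevance for a in articles if start <= a.year <= end]
    if not window:
        return 0
    intensity = sum(window) / len(window)
    thresholds = [0.15, 0.40, 0.70]  # hypothetical cut-offs for 1, 2, 3 rings
    return sum(intensity >= t for t in thresholds)


journal = [Article(2018, 0.9), Article(2019, 0.2), Article(2020, 0.4)]
print(sdg_rings(journal))  # -> 2 with these invented figures
```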

It may come as no surprise to learn that the so-called top journals lagged way behind their counterparts when it came to their level of SDG focus. For example, none of the top 26 journals in the pilot phase are from the FT50, and only four of the top ten are from the world’s five biggest academic publishers. In contrast, the journals that have traditionally sat at the very top of management journal rankings over the past 50 years – in disciplines such as marketing, accounting, finance and management – languish at the bottom of the pilot phase ratings. While these results are hardly surprising, they suggest that although governments, funders and society as a whole have started to embrace the SDGs, this has yet to filter through to what is published in journals traditionally regarded as high impact. There has long been criticism that such titles are favoured by business school management structures over more innovative, real-world-relevant journals, and this very much seems to be borne out by the results of Cabells’ research with SJU. The very notion of what academic journal “quality” means is fundamentally challenged once we consider how journals can make an “impact” by engaging with the SDGs.

Cabells and SJU hope to deepen their partnership and broaden their coverage of journals, enabling more researchers and other interested parties to understand the type of research their target journals are publishing. With more information and a greater understanding of the SDGs at hand, we hope to see a move away from a narrow focus on traditional quality metrics towards a broader encouragement of research and publication that has a positive impact – bettering the human condition and environmentally sustaining the Earth, as detailed in the SDGs. In turn, we should see academia and scholarly communications play their part in ensuring the UN’s 2030 Agenda for Sustainable Development moves forward that much quicker.

Predatory journals vs. preprints: What’s the difference?

While working towards publication in a legitimate journal, however circuitous the route, is of course a much better path than publishing in an illegitimate journal, Simon Linacre examines why this is a useful question to consider.

A blog post this week in The Geyser pointed out the problems surrounding version control of the same article on multiple preprint servers and on the F1000 platform.

TL;DR? It isn’t pretty.

The article used as an example is unquestionably a legitimate study relating to the coronavirus pandemic, and as such is a small but important piece in the jigsaw being built around science’s pandemic response. Validation – and with it, a settled place in the COVID-19 jigsaw – will presumably come once the article is published in a recognized peer-reviewed journal.

However, this does raise the following rather thorny question: how is the article any better served fragmented on different preprint servers and publishing platforms than it would be having been published as a single entity in a predatory journal?

I am being facetious here – working towards a legitimate publication, however circuitous the route, is far better than publishing in an illegitimate journal. However, comparing the two options is not as strange as one might think, and perhaps offers some guidance for authors uncertain about where to publish their research in the first place.

Firstly, early career researchers (ECRs), while often offered very little direction when it comes to publication ethics and decision-making, are understandably worried about sharing their data and findings on preprint servers for fear of being ‘scooped’ by other researchers who copy their results and get published first. This is a legitimate fear, and it is one explanation of why a researcher might submit their work to an unfamiliar journal offering a low fee and quick turnaround.

Secondly, ECRs or more experienced researchers may be incentivised by their institutions to simply achieve a publication without any checks on the type of journal they publish in. As such, they need a journal to validate their publication – even if the journal itself has not been validated – which is something preprints or non-journal platforms are unable to provide.

Finally, while recent research has shown that just over half of the articles published in predatory journals receive no citations, that means just under half do, and authors may prefer one sole, accessible source for their research to multiple versions across different preprint servers. This is not to say that preprints can’t receive citations – indeed, Google Scholar reveals 22 citations to the article above from its original posting on arXiv – but the perception may be that only journals can deliver citations, and journals will therefore be the aim for some authors.

Of course, authors should know the very real difference between a predatory journal and a preprint, but the evidence of 14,000+ journals in Cabells’ Predatory Reports database and the millions of spam emails sent daily by illegitimate journals points to at least some researchers falling for the same tricks and continuing to line the pockets of predatory publishers. While research publishing options remain as varied and as complex as they are – and while higher education institutions and funders simply assume every researcher has an effective publishing strategy – as many will fall into the predatory trap as always have.

Know before you go

Earlier this week, the Guardian in the UK released its updated university rankings, just the latest of myriad national and international exercises in defining the “best” university. At a time when deciding to go to university is fraught with unknowns, Simon Linacre argues that critical thinking skills are more important than ever.


I’ll admit it: I love it when my own university tops any kind of ranking. The fact that I was there 25 years ago, and that the subjects taught and the academics teaching them are now unrecognisable, is of no consequence. That the department I graduated from is the BEST in the country is something I have to tell my wife and colleagues about, despite the arched eyebrows and disdain I can feel from several thousand miles away.

What does this mean? Well, aside from the fact that I must be a walking target for alumni fundraisers, it shows that my critical faculties, if not entirely absent, are certainly being overridden by shameless pride in something I have no right to be proud about. But like your favourite football team, you can’t help pulling for them through thick and thin – when they suck you say “we’re awful!”, and when they do well you say “we’re top!”.

Who’s “we”?

However, deciding which university to go to in a year or two is not the same as choosing a football team to support. You should use every critical faculty you have, and hone it until it is razor-sharp, before you even think of filling in a form or visiting a campus. That means learning to read a university ranking the way you would read a balance sheet before investing in a company, or review a journal before submitting an article. I do not believe there is anything inherently wrong with any ranking, as each can provide extremely useful data on which to base a decision. But you need to know where the data came from and how it relates to the investment you are making in your future life and career. This is why we always recommend that users of Cabells’ Journalytics database draw on other data points relevant to their individual circumstances.

This week, the Guardian published its UK university rankings for 2021, with Oxford, St Andrews and Cambridge leading the way overall (full disclosure: I attended St Andrews). Each broad subject is then broken down into separate rankings with the promise that, “unlike other league tables, the Guardian rankings are designed with students in mind.” What, other university rankings do NOT have students in mind? Straight away, the amount of spin adopted here should tell you that (a) you should be wary of other hyperbole, and (b) you should look at other league tables to see why the Guardian would say this.

And there are plenty of tables to choose from – both nationally and internationally there are dozens of such rankings, all seeking to advise students on which university to choose. Why? Because, of the 48 pages of the university guide, six are adverts. Organisations publish rankings guides to sell advertising and to cement their reputation as education experts, further enhancing their opportunities to sell education-related advertising in the future. Knowing why there is spin, and why these guides exist in the first place, should help students understand the information in front of them and make better decisions.

But students should also dig deep into the data. In the “Business, management and marketing” subject ranking, readers are told that “most universities will boast of having good links with business,” that “group work is a key part of many courses” and that “there will also be a practical element to assessment.” Yet none of these points is addressed in the rankings, which include data on criteria such as course and teaching satisfaction, spend per student and “career after 15 months.” All of this information is relevant, but only some of it has data to back it up. One practical response is to re-weight the published criteria to match your own priorities, as the sketch below shows.
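
The sketch rescores three universities under two different sets of priorities. The university names, scores and weights are all invented, not taken from the Guardian’s tables; the point is simply that the order can change once you choose the weights yourself.

```python
# Invented scores (0-100) for the three criteria named above; the figures
# and weights are hypothetical, not taken from the Guardian's tables.
universities = {
    "Uni A": {"satisfaction": 90, "spend": 60, "career": 75},
    "Uni B": {"satisfaction": 70, "spend": 95, "career": 80},
    "Uni C": {"satisfaction": 85, "spend": 70, "career": 90},
}


def rescore(weights: dict[str, float]) -> list[tuple[str, float]]:
    """Rank universities by a weighted average of the chosen criteria."""
    total = sum(weights.values())
    scores = {
        name: sum(crit[k] * w for k, w in weights.items()) / total
        for name, crit in universities.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


# A student who prizes employability gets a different order from one
# who prizes teaching satisfaction.
print(rescore({"satisfaction": 1, "spend": 1, "career": 3}))  # C, B, A
print(rescore({"satisfaction": 3, "spend": 1, "career": 1}))  # C, A, B
```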

Sitting with my 12-year-old at breakfast, he looked at the page on architecture (which he has wanted to do since the age of about seven), and decided he should go to Cambridge, UCL or Bath as the top three for that subject. None of those would be a bad choice, but neither would they be an informed one.