Will academia lead the way?

Universities are usually expected to have all the answers – they are full of clever people, after all. But sometimes they need help to figure out specific problems. Simon Linacre attended a conference recently where the questions being asked of higher education amounted to nothing less than solving climate change, poverty, the supply of clean water, and more than a dozen similar issues. How can academic institutions respond?


Most people will be aware of the United Nations' Sustainable Development Goals (SDGs), adopted to solve 17 of the world's biggest problems by 2030. Solving the climate change crisis by that date has perhaps attracted the most attention, but all of the goals present significant challenges to global society.

Universities are very much at the heart of this debate, and there seems to be an expectation that, because of their position in facilitating research, they hold the key to solving these major problems. So far they have taken up the challenge with some gusto, with new research centers and funding opportunities appearing all the time for academics aiming to contribute to these global targets in some way. Many academics, however, don't appear to have received the memo on what they should be researching.
 
Following several conversations at conferences and with senior management at a number of universities, the two themes that recur when it comes to existing research programs are problems with 'culture' and 'capabilities'. On culture, university hierarchies report that their faculty members are as curious and keen to do research as ever, but far less interested when told to focus their energies on certain topics. And when they do, they lack the motivation or incentives to ensure the outcomes of their research lie in real-world impact. For the academic, impact still means a smallish number with three decimal places – i.e., the Impact Factor.

In addition, when it comes to the capability of undertaking the kind of research likely to move the SDGs forward, academics have not had the training, guidance, or support to know what to do. In the UK, for example, where understanding and exhibiting impact is further advanced than anywhere else in the world thanks to the Research Excellence Framework (REF), there still seem to be major issues with academics focusing on research that will get published rather than research that will change things. In one conversation, while I was referring to research outcomes as real-world benefits, an academic was talking about the quality of the journals in which research would be published. Both are legitimate research outcomes, but publication is still way ahead in terms of cultural expectations, and internal incentives in reality lag far behind the overarching aims stated by governments and research organizations.

Perhaps we are being too optimistic to expect the grinding gears of academia to move more smoothly towards a major culture change, and perhaps the small gains that are being made and the work done in the public space by the likes of Greta Thunberg will ultimately be enough to enable real change. But when the stakes are so high and the benefits are so great, maybe our expectations should weigh heavily on academia, as they are probably the people best placed to solve the world’s problems after all.

GBSN: Measuring the Impact of Business Schools

Business schools and the MBAs they teach have been reinvented on a regular basis almost since they began life early in the 20th century. However, Simon Linacre suggests that, as the Global Business School Network meets for its annual conference in Lisbon this week, calls for a new approach might just be followed through.


Another day, another business school conference. As a veteran of at least a dozen such events, I find it hard not to be a little cynical when reading the conference theme set out on the website. Business schools need to change? Check. New programs being promoted? Check. Social running club at 7am on the first morning? Oh, that's actually quite different.

The Global Business School Network (GBSN) is quite different. With a mission to “partner with business schools, industry, foundations, and aid agencies to improve access to quality, locally relevant management education for the developing world”, its focus is very much on a sustainable future rather than on shiny new MBAs for the privileged few who can afford them. As such, the theme of ‘Measuring the Impact of Business Schools’ is more than simply an on-trend marketing slogan; it is a statement of purpose.

But despite its undoubted sincerity, can such an objective be achieved? The reason it just might is that it is very much aligned with a changing mood in business education. A recent report in The Economist referred to the development of a ‘New Capitalism’, where greed is no longer good and sustainability, rather than simply growth, is the imperative. Evidence can be seen not just in the numerous business school deans quoted in the piece, but in wider moves such as New Zealand Prime Minister Jacinda Ardern’s recent pivot to the Happiness Index as a metric for national development. The times they are a-changin’, as someone once said.

Ultimately, such changes may have as much to do with the bottom line as with more altruistic motives. MBA recruitment in the US is down, and students are apparently becoming more demanding about what is taught, with sustainability and wider impact at the top of the list. The mantra ‘doing well by doing good’ is not a new one, but perhaps we are entering an era where it shifts from just another strapline to a true aphorism for change.

Cabells is supporting the GBSN event by hosting a session on Research Impact for the Developing World. There are no preconceived ideas or solutions, just the recognition that existing notions of impact are changing, and that each school needs to be laser-focused on investing in impact in the way most relevant to its own mission and purpose. Whatever business schools can learn about measuring their impact will ensure that, for once, the conference’s theme actually means something.

Open Access Week 2019: “Open for Whom? Equity in Open Knowledge”

It is #OpenAccessWeek, and a number of players in the scholarly communications industry have used the occasion to produce their latest thinking and surveys, with some inevitable contradictions and confusion. Simon Linacre unpicks the spin to identify the key takeaways from the week.


It’s that time again: Open Access Week – or #openaccessweek, or #OAWeek19, or any number of hashtag-infected labels. The aim of the week, for those in scholarly communications, is to showcase new products, surveys, or insights to a market more focused than usual on all things Open Access.

There is a huge amount of content out there to wade through, as any Twitter search or scroll through press releases will confirm. A number have caught the eye, so here is your indispensable guide to what’s hot and what’s not in OA:

  • There are a number of new OA journal and monograph launches with new business models, notably IET Quantum Communication and MIT Press, the latter using a subscription model to offset the cost of OA
  • Publisher surveys over the years have shown that authors have yet to engage fully with OA, and this year is no exception. Taylor & Francis conducted a large survey showing that fewer than half of researchers believe everyone who needs access to their research has it, yet just 18% have deposited a version of their article in a repository. Fewer than half would pay an APC to make their article OA, two-thirds did not recognize any of the initiatives that support OA, and just 5% had even heard of Plan S
  • And yet, a report published by Delta Think shows that OA publication continues to increase, with articles in Hybrid OA journals (published alongside paywalled articles) declining relative to pure OA articles. In other words, more and more OA articles are being published, but the hybrid share is decreasing – hence the report’s assertion that the scholarly communications market has already reached ‘peak hybrid’

At the end of the Delta Think report was perhaps the most intriguing question among all the other noise around OA. If the share of Hybrid OA is in decline, but there is an increase in so-called read-and-publish or transformative agreements between consortia and publishers, could Plan S actually revive Hybrid OA? The thinking is that as transformative agreements usually include waivers for OA articles in Hybrid journals, the increase in these deals could increase Hybrid OA articles, the very articles that Plan S mandates against.

And this puts large consortia in the spotlight, as in some cases a major funding agency signed up to Plan S may conflict with read-and-publish agreements increasing Hybrid OA outputs. It will be interesting to see how all this develops in the next OA Week in October 2020. The countdown starts here.

Bringing clarity to academic publishing

How do you know if a journal is a good one or a bad one? It is a simple enough question, but clear information is scarce for researchers, and scams often lay traps for the unwary. In his latest post, Simon Linacre presents some new videos from Cabells that explain what it does to keep authors fully informed.


On a chilly spring day in Edinburgh, a colleague and I were asked to do what nobody really wants to do if they can help it: ‘act natural’. It is one of life’s great ironies that it is so difficult to act naturally when told to do so. However, it was for a good cause, as we had been asked to explain in a short film what it is that Cabells does and why we think it is important.

Video as a medium was relatively ignored by scholarly publishers until quite recently. Video has of course been around for decades, and it has been possible to embed it on websites next to articles for a number of years. However, embedding video into PDFs has been tricky, and as every publisher will tell you when asked about user needs, academics ‘just want the PDF’. As a result, there has been little innovation in scholarly communication, despite some brave attempts such as video journals, video abstracts, and other accompaniments to the humble article.

Video has been growing as a means of search, particularly among younger academics, and it can be much more powerful when it comes to engagement and social media. Stepping aside from the debate about what constitutes impact and whether Altmetrics and hits via social media really mean anything, video can be ‘sticky’ in the sense that people spend longer watching it than they do skimming words on a web page. As such, the feeling is that video may be a medium whose time in scholarly communications is yet to come.

So, in that spirit, Cabells has shot a short video with some key excerpts that take people through the Journal Whitelist and Journal Blacklist. The hope is that it answers some questions people may have, and spurs others to get in touch with us. The film is a first step in Cabells’ development of resources across many different platforms to help researchers absorb knowledge of journals and optimize their decision-making. In a future of Open Access, new publishing platforms, and multiple publishing choices, the power to publish will increasingly be in the hands of the author, with the scholarly publishing industry increasingly seeking ways to satisfy authors’ needs. Knowledge about publishing is the key to unlocking that power.

The Journal Blacklist surpasses 12,000 journals listed

Just how big a problem is predatory publishing? Simon Linacre reflects on the news this week that Cabells announced it has reached 12,000 journals on its Journal Blacklist and shares some insights into publishing’s dark side.


Predatory publishing has seen a great deal of coverage in 2019, with a variety of sting operations, opinion pieces, and studies published on various aspects of the problem. On the one hand, there is no doubt that it is a problem for academia globally; on the other, there is huge debate as to the size, shape, and relative seriousness of that problem.

On the first of those points, the size looks to be pretty big – Cabells announced this week that its Journal Blacklist has hit the 12,000 mark. This comes less than a year after it hit 10,000, and the list is now triple the size it was at launch in 2017. Much of this is down to the incredibly hard work of its evaluations team, but also to the fact that there are a LOT of predatory journals out there, with the numbers increasing daily.

On the last of those points, the aftershocks of the Federal Trade Commission’s ruling against OMICS earlier this year are still being felt. While there is no sign of any contrition on the part of OMICS – or of the $50m fine being paid – the ruling has garnered huge publicity and served as a warning for some academics not to entrust their research to similar publishers. In addition, it has been reported that CrossRef has now cut OMICS’ membership.

However, the shape of the problem is still hard for many to grasp, and perhaps it would help to share some of the tools of the trade of deceptive publishers. Take one journal on the Cabells Journal Blacklist – the British Journal of Marketing Studies.

[Screenshot: the journal’s entry in the Cabells Journal Blacklist]

Sounds relatively normal, right? But a number of factors relating to this journal highlight many of the problems presented by deceptive journals:

  • The title includes the word ‘British’ as a proxy for quality; however, over 600 journals in the Blacklist include this descriptor, compared to just over 200 in Scopus’ entire index of more than 30,000 journals
  • The journal is published by European-American Journals alongside 81 other journals – a remarkable feat considering the publisher lists a small terraced house in Gillingham as its main headquarters
  • When Cabells reviewed it for inclusion in the Blacklist, it noted among other things that:
    • It falsely claimed to be indexed in well-known databases – we know this because among these was Cabells itself
    • It uses misleading metrics, including an “APS Impact Factor” of 6.80 – no such derivation of the Web of Science metric exists, apart from on other predatory journal sites
    • There is no detailed peer review policy stated
    • There is no affiliation for the Editor, one Professor Paul Simon, and searches cannot uncover any marketing professors with such a name (or a Prof. Garfunkel, for that matter)

This IS a problem for academia because, whatever the size and seriousness of predatory publishing may be, unless researchers learn to spot the signs of what it looks like, they will continue to get drawn in and waste their research, funding dollars, and even careers on deceptive publishing practices.

When does research end and publishing begin?

In his latest post, Simon Linacre argues that in order for authors to make optimal decisions – and not to get drawn into predatory publishing nightmares – research and publishing efforts should overlap substantially.


In a recent online discussion on predatory publishing, there was some debate as to the motivations of authors who choose predatory journals. A recent study in the ALPSP journal Learned Publishing found that academics publishing in such journals usually fall into one of two camps: either they were “uninformed” that the journal they had chosen was predatory in nature, or they were “unethical” in knowingly choosing such a journal in order to satisfy publication goals.

However, a third category of researcher was suggested: the ‘unfussy’ author, who neither knows nor cares what sort of journal they are publishing in. Certainly there may be some overlap with the other two categories, but what they all have in common is bad decision-making. Whether one does not know, does not care, or does not mind which journal one publishes in, it seems to me that one should know, care, and mind – on all three counts.

It was at this point that one of the group posed one of the best questions I have seen in many years in scholarly communications: when it comes to article publication, where does the science end in scientific research? Due in part to the terminology, as well as the differing processes, research and publication are regarded as somehow distinct or separate – part of the same ecosystem, for sure, but requiring different skills, knowledge, and approaches. The question is a good one because it challenges this duality. Isn’t it possible for science to encompass some of the publishing process itself? And shouldn’t the publishing process become more involved in the process of research?

The latter is already happening to a degree, with major publishers climbing up the supply chain to become more involved in research services provision (e.g. Wiley’s acquisition of the article platform services provider Atypon). On the other side, there is surely an argument that after the experiments or data collection, the logical analysis of data, and the writing up of conclusions, there is a place for scientific process in choosing a legitimate outlet with appropriate peer review. Surely any university or funder would expect such a scientific approach at every level from their employees or beneficiaries. A failure here lets in not only sub-optimal journal choices but, worse, predatory outlets that will ultimately delegitimize scientific research as a whole.

I get that it may not be such a huge scandal if some ho-hum research is published in a ‘crappy’ journal so that an academic can tick some boxes at their university. However, while the outcome may not be particularly harmful, the tacit toleration of such lazy academic behavior surely has no place in modern research. Structures that force gaming of the system should, of course, be revised, but one can’t help thinking that if academics carried the same rigor and logic into their publishing decisions as they do into their research, scholarly communications would be in much better shape for all concerned.

Still without peer?

Next week the annual celebration of peer review takes place – a process that, despite being centuries old, is still an integral part of scholarly communications. To show Cabells’ support for #PeerReviewWeek, Simon Linacre looks at why peer review deserves its week in the calendar, and deserves to survive for many years to come.


I was recently asked by Cabells’ partner Editage to upload a video to YouTube explaining how the general public benefits from peer review. This is a good question, because I very much doubt the general public is aware at all of what peer review is and how it impacts their day-to-day lives. But if you reflect for just a moment, it is clear it impacts almost everything, much of which is taken for granted.

Take making a trip to the shops. A car is the result of thousands of experiments and a century of validated, peer-reviewed research into the safest and most efficient means of moving people and things from one place to another; each supermarket product has been health-and-safety tested; each purchase uses digital technology, such as the barcode, that has advanced over the years to enable fast and accurate purchasing; even the license plate recognition software that gives us a ticket when we stay too long in the car park will be the result of some peer-reviewed research (although most people might struggle to describe that as a ‘benefit’).

So, we do all benefit from peer review, even if we do not appreciate it all the time. Does that prove its value? For some, it remains an inefficient system for scholarly communications, and over the years a number of platforms have sought to disrupt it. PLoS, for example, has been hugely successful as a publishing platform where ‘light touch’ peer review enables large-scale, quick-turnaround publishing. More recently, F1000 has developed a post-publication peer review platform where all reviews are visible and almost all submitted articles are reviewed. While these platforms have undoubtedly brought variety and author choice to scientific publishing, they have yet to change the game, particularly in the social sciences where more in-depth peer review is required.

Perhaps real disruption will come from accommodating peer review rather than changing it. This week’s announcement at the ALPSP Conference by Cactus Communications – part of the same organization as Editage – of an AI-powered platform allowing authors to submit articles for viewing by multiple journal editors may just change the way peer review works. Instead of the multiple submit-review-reject cycles authors currently endure, they can submit their article to a system that checks for hygiene-factor quality characteristics and relevance to journals’ coverage, and matches them with potentially interested editors who can then offer the article the opportunity of peer review.

If it works across a good number of journals, one can see that from the perspective of authors, editors and publishers, it would be a much more satisfactory process than the traditional one that still endures. And a much quicker one to boot, which means that the general public should see the benefits of peer review all the more speedily.

Agile thinking

In early November, Cabells is very pleased to be supporting the Global Business School Network (GBSN) at its annual conference in Lisbon, Portugal. In looking forward to the event, Simon Linacre looks at its theme of ‘Measuring the Impact of Business Schools’, and what this means for the development of scholarly communications.


For those of you not familiar with the Global Business School Network, they have been working with business schools, industry and charitable organizations in the developing world for many years, with the aim of enhancing access to high quality, highly relevant management education. As such, they are now a global player in developing international networking, knowledge-sharing and collaboration in wider business education communities.

Cabells will support the Annual Conference in November in its capacity as a leader in publishing analytics, hosting a workshop on ‘Research Impact for the Developing World’. This session will examine the nature of management research itself: whether it should address global challenges rather than just business ones, and whether it can be measured effectively by traditional metrics or whether new ones should be introduced. The thinking is that unless the business school community is more proactive about research and publishing models, wider social goals will not be met and an opportunity to set the agenda globally will be lost.

GBSN and its members play a pivotal role here, both in seeking to take a lead on a new research agenda and in seizing the opportunity to be at the forefront of what relevant research looks like and how it can be incentivized and rewarded. With the advent of the United Nations Sustainable Development Goals (SDGs) – a “universal call to action to end poverty, protect the planet and ensure that all people enjoy peace and prosperity” – there is not only an increased push to change how research is prioritized, but with it a need to assess that research in different terms. This question will form the nub of many of the discussions in Lisbon in November.

So, what kind of new measures could be applied? Well, firstly, this assumes that measures can be applied at all, and there are many people who think any kind of measurement is unhelpful and unworkable. However, academic systems are built around reward and recognition, so to a greater or lesser degree it is difficult to see measures disappearing completely. Responsible use of such measures is key, as is informed use of the many data points available – this is why Cabells includes unique data such as time to publication, acceptance rates, and its own Cabells Classification Index© (CCI©), which measures journal performance using citation data within subject areas.

In a new research environment, just as important will be new measures such as Altmetrics, which Cabells also includes in its data. Altmetrics can help express the level of online engagement research publications have had, and there is a feeling that this research communications space will become much bigger and more varied as scholars and institutions alike seek new ways to disseminate research information. This is one of the most exciting areas of development in research at the moment, and it will be fascinating to see what ideas GBSN and its members can come up with at their Annual Conference.

If you would like to attend the GBSN Annual Conference, Early Bird Registration is open until 15th September.

Publish and be damned

The online world is awash with trolling, gaslighting, and hate speech, and the academic portion is sadly not immune, despite its background in evidence, logical argument, and rigorous analysis. For the avoidance of doubt, Simon Linacre separates fact from fiction on Cabells’ behalf in terms of Open Access, predatory publishing, and product names.


When I went to university as a very naive 18-year-old Brit, I met an American for the first time. He was called Rick and lived down the corridor in my hall of residence. He was older than me, and a tad dull if I’m honest, but one evening we were chatting in our room about someone else in the hall, and he warned me against setting too much store by perceptions: “To assume is to make an ass out of u and me,” he told me sagely.

Twenty years later, while I have come to realize this phrase is a little corny, it still sticks in my mind every time I see or hear about people being angry about a situation where the full facts are not known. Increasingly, this is played out on social media, where sadly there are thousands of people making asses out of u, me and pretty much everyone else without any evidence whatsoever for their inflammatory statements.

To provide a useful point of reference for the future, we at Cabells thought we should define positions we hold on some key issues in scholarly publishing. So, without further ado, here is your cut-out-and-keep guide to our ACTUAL thinking in response to the following statements:

  • ‘Cabells is anti-OA’ or ‘Cabells likes paywalls’: Not true, in any way shape or form. Cabells is pro-research, pro-quality and pro-authors; we are OA-neutral in the sense that we do not judge journals or publishers in these terms. Over 13% of the Whitelist are pure OA journals and two-thirds are hybrid OA.
  • ‘Cabells is anti-OA like Beall’ or ‘You’re carrying on Beall’s work’: Cabells had several discussions with Jeffrey Beall around the time he stopped work on his list, and published the Blacklist for the first time in the same year (2017). However, the list did NOT start with Beall’s list, does NOT judge publishers (only journals) for predatory behavior, and diverges considerably from Beall’s List – only 234 journals appear on both, according to Strinzel et al. (2019).
  • ‘Predatory publishing is insignificant’ or ‘Don’t use the term predatory publishing’: The recent FTC judgement fining the Omics Group over $50m shows that these practices are hardly insignificant, and that sum in no way quantifies the actual or potential hurt done by publishing fake science or bad science without the safety net of peer review. Other terms for such practices may in time gain traction – fake journals, junk science, deceptive practices – but until then the most commonly used term seems the most appropriate.
  • ‘The Whitelist and Blacklist are racist’: The origins of the word ‘blacklist’ come from 17th century England, and was used to describe a number of people who had opposed Charles II during the Interregnum. It is a common feature of language that some things are described negatively as dark or black and vice versa. Cabells is 100% pro-equality and pro-diversity in academic research and publishing.
  • ‘Cabells unfairly targets new or small journals with the Blacklist’: Some of the criteria used for the Blacklist include benchmarks that a very few legitimate journals may not pass. For example, there is a criterion regarding the speed of a journal’s growth, but this is a minor violation that merely flags typical predatory behavior – it is not one of the severe violations Cabells uses to identify journals for the Blacklist, nor is it applied in isolation. Good journals will not be stigmatized by inclusion on the Blacklist, because they simply won’t be included. In the two years the Blacklist has been in operation, just three journals out of the 11,500+ listed have requested a review.

The power of four

After hearing so many different ways that its Journal Whitelist and Journal Blacklist have been used by customers, Cabells has started to map out how any researcher can use journal data to optimize their decision-making. Fresh from its debut at the World Congress on Research Integrity in Hong Kong last month, Simon Linacre shares the thinking behind the new ‘Four Factor Framework’ and how it could be used in the future.


The 6th World Congress on Research Integrity (WCRI) was held in Hong Kong last month, bringing together the great and the good of those seeking to improve the research process globally. Topics were surprisingly wide-ranging, covering such diverse areas as human rights, predatory publishing, data sharing, and policy. Significantly, while much of the conference focused on the need for better education in how to conduct research ethically, many of the cases presented showed that there is still much to do in this respect.

Cabells was also there and used its presence to share some ideas on how to overcome some of these challenges, particularly with regard to engagement with improved research and publishing practices. Taking the established issues within predatory publishing encountered the world over as a starting point (i.e. choosing the wrong journal), as well as the need to look at as much data as possible (i.e. choosing the right journal), Cabells has very much put the author at the center of its thinking to develop what it has called the ‘Four Factor Framework’:

 

The framework, or FFF, firstly puts the onus on the researcher to rule out any poor, deceptive, or predatory journals, using resources such as the Blacklist. This ‘negative’ first step then opens up the next stage, which is to take the following four factors into account before submitting a research paper to a journal:

  • Strategic: understanding how a journal will impact career ambitions or community perspectives
  • Environmental: bringing in wider factors such as research impact or ethical issues
  • Political: understanding key considerations such as publishing in titles on journal lists, avoiding such lists or journals based in certain areas
  • Cultural: taking into account types of research, peer review or article form

Having talked to many customers over time, it is clear these factors all become relevant to authors at some point during that crucial period when they are choosing which journal to publish in. Customers have told Cabells that their use of the Whitelist and Blacklist – as well as other sources of data and guidance – can be characterized as benchmarking, performance-focused, or risk management. While it is good to see that the databases can help so many authors in so many different ways, judging by the evidence at WCRI there is still a huge amount to do in educating researchers to take advantage of these optimized approaches. And this will be the main aim of Cabells’ emerging strategy: to enable real impact by researchers and universities through the provision of validated information and support services around scholarly publishing.