The future of research evaluation

Following last week’s guest post from Rick Anderson on the risks of predatory journals, we turn our attention this week to legitimate journals and the wider issue of evaluating scholars based on their publications. With this in mind, Simon Linacre recommends a broad-based approach that keeps the goal of such evaluations permanently front and center.


This post was meant to be ‘coming to you LIVE from London Book Fair’, but as you may know, this event has been canceled, like so many other conferences and public gatherings, in the wake of the coronavirus outbreak. While it is sad to miss the LBF event, meetings will take place virtually or in other places, and it is to be hoped the organizers can bring it back bigger and better than ever in 2021.

Some events in the UK are still going ahead, however, and it was my pleasure to attend the LIS-Bibliometrics Conference at the University of Leeds last week to hear the latest thinking on journal metrics and performance management for universities. The day-long event was themed ‘The Future of Research Evaluation’, and it included both longer talks from key people in the industry and shorter ‘lightning talks’ from those implementing evaluation systems or researching their effectiveness in different ways.

There was a good deal of debate, both on the floor and on Twitter (see #LisBib20 to get a flavor), with perhaps the most interest in speaker Dr. Stephen Hill, who is Director of Research at Research England and chair of the steering group for the 2021 Research Excellence Framework (REF) in the UK. Those of us hoping for crumbs from his table in the shape of a steer for the next REF were sadly disappointed, as he was giving nothing away. However, he did say that he saw four current trends shaping the future of research evaluation:

  • Outputs: increasingly they will be diverse, include things like software code, be more open, more collaborative, more granular and potentially interactive rather than ‘finished’
  • Insight: different ways of understanding may come into play, such as indices measuring interdisciplinarity
  • Culture: the context of research and how it is received in different communities could be explored much more
  • AI: artificial intelligence will become a bigger player, both in the research itself and in how research is analyzed, e.g. the Unsilo tools or so-called ‘robot reviewers’ intended to reduce reviewer bias.

Rather revealingly, Dr. Hill suggested a fifth trend might be societal impact, despite such impact having been one of the defining traits of both the current and previous REFs. Perhaps the full picture has yet to be understood regarding impact, and there is some suspicion that many academics have yet to buy in to the idea at all. Indeed, one of the takeaways from the day was that there was very little input from academics, and one wonders what they might have contributed to the discussion about the future of research evaluation; it is their research being evaluated, after all.

There was also a degree of distrust among the librarians present towards publishers, and one delegate poll should be of particular interest to publishers, as it showed what those present thought were the future threats and challenges to research evaluation. The top three threats were identified as publishers monopolizing the area, commercial ownership of evaluation data, and vendor lock-in – a result which led to a lively debate around innovation and how solutions could be developed if there were no commercial incentive in place.

It could be argued that while the UK has taken the lead on impact and been busy experimenting with the concept, the rest of the higher education world has been catching up with a number of different takes on how to recognize and reward research that has a demonstrable benefit. All this means that we are yet to see the full ‘impact of impact,’ and certainly, we at Cabells are looking carefully at what potential new metrics could aid this transition. Someone at the event said that bibliometrics should be “transparent, robust and reproducible,” and this sounds like a good guiding principle for whatever research is being evaluated.

Predatory publishing from A to Z

During 2019, Cabells published on its Twitter feed (@CabellsPublish) at least one of its 70+ criteria for including a journal on the Cabells Journal Blacklist, generating great interest among its followers. For 2020, Simon Linacre highlights a new initiative: each week, Cabells will publish an entry from its A-Z of predatory publishing to help authors identify and police predatory publishing behavior.


This week a professor I know well came to me for advice. He had been invited by a conference to present a plenary address on his research area but had been asked to pay the delegate fee. Something didn’t seem quite right, and knowing I had some expertise in this area, he asked for guidance. Having spent considerable time looking at predatory journals, it did not take me long to notice the signs of predatory activity: a direct commissioning approach from an unknown source; a website covering hundreds of conferences; conferences covering very wide subject areas; unfamiliar conference organizers; guaranteed publication in an unknown journal; and evidence online of other researchers questioning the legitimacy of the conference and its organizers.

Welcome to ‘C for Conference’ in Cabells’ A-Z of predatory publishing.

From Monday 17 February, Cabells will be publishing quick hints and tips to help authors, researchers and information professionals find their way through the morass of misinformation produced by predatory publishers and conference providers. This will include links to helpful advice, as well as the established criteria Cabells uses to judge whether a journal should be included in its Journal Blacklist. In addition, we will be including examples of predatory behavior from the 12,000+ journals currently listed in our database so that authors can see what predatory behavior looks like.

So, here is a sneak preview of the first entry: ‘A is for American’. The USA is a highly likely source of predatory journal activity, as the country’s name lends credence to any claim of legitimacy a journal may adopt to hoodwink authors into submitting articles. In the Cabells Journal Blacklist there are over 1,900 journals that include the name ‘American’ in their titles or publisher name. In comparison, just 308 Scopus-indexed journals start with the word ‘American’. For example, the American Journal of Social Issues and Humanities purports to be published from the USA, but this cannot be verified, and it has 11 violations of Journal Blacklist criteria, including the use of a fake ISSN and the complete lack of any editor or editorial board member listed on the journal’s website (see image).

‘A’ also stands for ‘Avoid at all costs’.

Please keep an eye out for the tweets and other blog posts related to this series, which we will use from time to time to dig deeper into predatory journal and conference behavior.

Look before you leap!

A recent paper published in Nature has provided a tool researchers can use to check the publication integrity of a given article. Simon Linacre looks at this welcome support for researchers, and at how it raises questions about the research/publication divide.

Earlier this month, Nature published a well-received comment piece by an international group of authors entitled ‘Check for publication integrity before misconduct’ (Grey et al., 2020). The authors wanted to create a tool to enable researchers to spot potential problems with articles before they become too invested in the research, citing a number of recent examples of misconduct. The tool they came up with is a checklist called REAPPRAISED, which uses each letter to identify an area – such as plagiarism or statistics and data – that researchers should check as part of their workflow.
 
As a general rule for researchers, and as a handy mnemonic, the tool seems to work well, and authors using it as part of their research should avoid the potential pitfalls of relying on poorly researched and published work. We at Cabells would perhaps argue that an extra ‘P’ should be added for ‘Predatory’, covering the checks researchers should make to ensure the journals they use and intend to publish in are legitimate. To do this comprehensively, we would recommend using our own criteria for the Cabells Journal Blacklist as a guide and, of course, using the database itself where possible.
 
The guidelines also raise a fundamental question for researchers and publishers alike as to where research ends and publishing starts. For many involved in academia and scholarly communications, the two worlds are inextricably linked and overlap, but are nevertheless different. Faculty members of universities do their research thing and write articles to submit to journals; publishers manage the submission process and publish the best articles for other academics to read and in turn use in their future research. 
 
Journal editors seem to sit at the nexus of these two areas, as they tend to be academics themselves while working for the publisher, and as such have a foot in both camps. But while they are knowledgeable about the research that has been done and may be active researchers themselves, the editor’s role is performed on behalf of the publisher, and it is the editor who ultimately decides which articles are good enough to be recorded in the publication: the proverbial gatekeeper.
 
What the REAPPRAISED tool suggests, however, is that for authors the notional research/publishing divide is not a two-stage process but a continuum. Only by embarking on research intent on fully apprising themselves of all aspects of publication integrity can authors guarantee the integrity of their own research, which in turn includes how and where that research is published. Authors can better ensure the quality of their research AND their publications by treating all publishing processes as part of their own research workflow. By doing this, and by using tools such as REAPPRAISED and the Cabells Journal Blacklist along the way, authors can take better control of their academic careers.


Plan S for/versus Early-Career Researchers

Last week saw the joint hosting by ALPSP and UKSG of a one-day conference in London on the theme of “Shifting Power Centres in Scholarly Communications”. Former researcher and Cabells Journal Auditor Dr. Sneha Rhode attended the event and shares her thoughts below from both sides of the research and policy divide.


The ALPSP and UKSG event on scholarly communications was filled with illuminating talks by librarians, funders and publishers – but the academics panel on Plan S, which comprised three early-career researchers and a professor of sociology, was the highlight for me. Plan S – an initiative for Open Access (OA) publishing supported by cOAlition S, an international consortium of research funders – launched in September 2018 and requires that scholarly publications from research funded by public or private grants provided by national, regional and international research councils and funding bodies be published in compliant OA journals or platforms from January 2021 [1].

It was clear from the panel discussion that Plan S/OA isn’t a priority for researchers. This was shocking for most attendees, but not for me! Not long ago, I was an early-career researcher myself at the University of Cambridge and Imperial College London. I know from experience that early-career researchers have busy lives doing research, applying for jobs, writing grant proposals, writing research papers and submitting to different journals until their research is accepted. Add to this the stress of short tenures, limited funding and the power imbalance between researchers and principal investigators. It was apparent from the discussion that advancing in a highly competitive career – where success is based mostly on the number and impact factor of research articles produced, and where researchers have little control over the money and decisions in the publishing life cycle – makes it hard for OA to be a priority. Moreover, there has been limited engagement between librarians, funders, publishers and researchers regarding Plan S. No wonder most researchers know very little about it.

I wholeheartedly support OA and Plan S – it is a great initiative aimed at solving some of the problems that researchers face. Principle 10 of Plan S mentions assessing the impact of the work itself during funding decisions, rather than the impact factor or other journal metrics (shout out to Ireland for putting this into place from next year), while Principle 4 mentions OA publication fees being covered by funders or research institutions, not by individual researchers [1]. However, individual members of cOAlition S plan to monitor compliance and sanction non-compliance by withholding grant funds, discounting non-compliant publications as part of a researcher’s track record in grant applications, and/or excluding non-compliant grant holders from future funding calls. This seems harsh, yet it might be the only way to ensure compliance. Even so, I wish there were other methods of ensuring Plan S compliance geared towards researchers: funders could instead introduce incentives for publishing OA that count towards funding grants, tenure-track applications and promotions. I also hope additional funds are assigned, and that university librarians and funders make efforts to improve author engagement with, and awareness of, Plan S and OA.

We at Cabells understand the need for author awareness of Plan S and OA, and we are constantly trying to innovate to improve researchers’ work. With this in mind, the Journal Whitelist – our curated list of over 11,000 academic journals spanning 18 disciplines that guides researchers and institutions in getting the most impact out of their research – will soon start listing additional “Access” information. This additional information about the OA policies (which govern Plan S compliance) of individual journals will help smooth researchers’ transition into publishing Plan S-compliant research.

ALPSP and UKSG deserve huge credit for showing us that a lot needs to be done by the publishing industry to make early-career researchers an integral part of Plan S. Early-career researchers are invaluable to the publishing industry: they perform the research that is published in journals, and they are the editors and reviewers of tomorrow. We at Cabells recognize this and look forward to creating a synergy between researchers and Plan S into the future.

[1] https://www.coalition-s.org/.


Dr. Sneha Rhode, Cabells Journal Auditor

Open Access Week 2019: “Open for Whom? Equity in Open Knowledge”

It is #OpenAccessWeek, and a number of players in the scholarly communications industry have used the occasion to produce their latest thinking and surveys, with some inevitable contradictions and confusion. Simon Linacre unpicks the spin to identify the key takeaways from the week.


It’s that time again: Open Access Week – or #openaccessweek, or #OAWeek19, or any number of hashtag-infected labels. The aim of this week for those in scholarly communications is to showcase what new products, surveys or insights they have to a market more focused than usual on all things Open Access.

There is a huge amount of content out there to wade through, as any Twitter search or scroll through press releases will confirm. A number have caught the eye, so here is your indispensable guide to what’s hot and what’s not in OA:

  • There are a number of new OA journal and monograph launches with new business models, in particular with IET Quantum Communication and MIT Press, which uses a subscription model to offset the cost of OA
  • There have been a number of publisher surveys over the years which show that authors are still to engage fully with OA, and this year is no exception. Taylor & Francis have conducted a large survey which shows that fewer than half of researchers believe everyone who needs access to their research has it, but just 18% have deposited a version of their article in a repository. Fewer than half would pay an APC to make their article OA, but two-thirds did not recognize any of the initiatives that support OA. Just 5% had even heard of Plan S
  • And yet, a report published by Delta Think shows that OA publications continue to increase, with the share of articles published in Hybrid OA journals (alongside paywalled articles) declining relative to pure OA articles. In other words, more and more OA articles continue to be published, but the hybrid element is on the decrease, hence the report’s assertion that the scholarly communications market had already reached ‘peak hybrid’

At the end of the Delta Think report was perhaps the most intriguing question among all the other noise around OA. If the share of Hybrid OA is in decline, but there is an increase in so-called read-and-publish or transformative agreements between consortia and publishers, could Plan S actually revive Hybrid OA? The thinking is that as transformative agreements usually include waivers for OA articles in Hybrid journals, the increase in these deals could increase Hybrid OA articles, the very articles that Plan S mandates against.

And this puts large consortia in the spotlight, as in some cases a major funding agency signed up to Plan S may find itself at odds with read-and-publish agreements that increase Hybrid OA outputs. It will be interesting to see how all this develops by the next OA Week in October 2020. The countdown starts here.

Cabells renews partnership with CLOCKSS to further shared goals of supporting scholarly research

Cabells is excited to announce the renewal of its partnership with CLOCKSS, the decentralized preservation archive that ensures the long-term survival of scholarly content in digital format. Cabells is pleased to provide complimentary access to the Journal Whitelist and Journal Blacklist databases for an additional two years to CLOCKSS, to further the organizations’ shared goals of supporting and preserving scholarly publications for the benefit of the global research community.

The goal of Cabells is to provide academics with the intelligent data needed for comprehensive journal evaluations, safeguarding scholarly communication and advancing the dissemination of high-value research. Assisting CLOCKSS in its mission to provide secure and sustainable archives for the preservation of academic publications in their original format makes for a logical and rewarding collaboration.

“We are proud to renew our partnership with CLOCKSS. Our mission to protect the integrity of scholarly communication goes hand in hand with their work to ensure the secure and lasting preservation of scholarly research,” said Kathleen Berryman, Director of Business Relations with Cabells.

In helping to protect and preserve academic research, Cabells and CLOCKSS are fortunate to play vital roles in the continued prosperity of the scholarly community.


 

About: Cabells – Since its founding over 40 years ago, Cabells’ services have grown to include the Journal Whitelist, the Journal Blacklist, manuscript preparation tools, and a suite of powerful metrics designed to help users find the right journals, no matter the stage of their career. The searchable Journal Whitelist database includes more than 11,000 international scholarly publications across 18 academic disciplines. The Journal Blacklist is the only searchable database of predatory journals, complete with detailed violation reports. Through continued partnerships with major academic publishers, journal editors, scholarly societies, accreditation agencies, and other independent databases, Cabells provides accurate, up-to-date information about academic journals to more than 750 universities worldwide. To learn more, visit www.cabells.com.

About: CLOCKSS is a not-for-profit joint venture between the world’s leading academic publishers and research libraries whose mission is to build a sustainable, international, and geographically distributed dark archive with which to ensure the long-term survival of Web-based scholarly publications for the benefit of the greater global research community.

Updated CCI and DA metrics hit the Journal Whitelist

Hot off the press: newly updated Cabells Classification Index© (CCI©) and Difficulty of Acceptance© (DA©) scores are now available for all Journal Whitelist publication summaries. These insightful metrics are part of our powerful mix of intelligent data, leading to informed and confident journal evaluations.

Research has become increasingly cross-disciplinary and, accordingly, an individual journal might publish articles relevant to several fields.  This means that researchers in different fields often use and value the same journal differently. Our CCI© calculation is a normalized citation metric that measures how a journal ranks compared to others in each discipline and topic in which it publishes and answers the question, “How and to whom is this journal important?” For example, a top journal in computer science might sometimes publish articles about educational technology, but researchers in educational technology might not really “care” about this journal the same way that computer scientists do. Conversely, top educational technology journals likely publish some articles about computer science, but these journals are not necessarily as highly regarded by the computer science community. In short, we think that journal evaluations must be more than just a number.

[Screenshot: updated CCI© scores in the Journal Whitelist, 2019]

The CCI© gauges how well a paper might perform in specific disciplines and topics and compares the influence of journals publishing content from different disciplines. Further, within each discipline, the CCI© classifies a journal’s influence for each topic that it covers. This gives users a way to evaluate not just how influential a journal is, but also the degree to which a journal influences different disciplines.
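To make the idea concrete, here is a minimal sketch in Python of a within-discipline percentile normalization. The actual CCI© methodology is proprietary, so the formula, function names and toy citation data below are a hypothetical illustration of the general approach described above, not Cabells’ real calculation.

```python
# Hypothetical sketch of a discipline-normalized journal score.
# The real CCI(c) formula is proprietary; this only illustrates the idea
# that a journal gets a separate, normalized score in each discipline.

def percentile_ranks(citation_rates):
    """Rank journals within one discipline and convert to percentiles (0-100)."""
    ranked = sorted(citation_rates, key=citation_rates.get)  # ascending by rate
    n = len(ranked)
    return {journal: 100.0 * (i + 1) / n for i, journal in enumerate(ranked)}

# Toy citations-per-article rates, keyed by discipline and then journal.
data = {
    "computer science": {"J1": 4.2, "J2": 1.1, "J3": 2.7},
    "educational technology": {"J1": 0.9, "J4": 3.3, "J5": 2.0},
}

# J1 ranks at the top of computer science but near the bottom of
# educational technology, mirroring the example in the text above.
scores = {disc: percentile_ranks(rates) for disc, rates in data.items()}
print(scores["computer science"]["J1"])        # 100.0
print(scores["educational technology"]["J1"])  # ~33.3
```

The point of any normalization along these lines is that the same journal earns a different score in each community that cites it, rather than one global number.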

For research to have real impact it must first be seen, making maximizing visibility a priority for many scholars. Our Difficulty of Acceptance© (DA©) metric gives researchers a better way to gauge a journal’s exclusivity, balancing the need for visibility against the very real challenge of getting accepted for publication.

[Screenshot: updated DA© ratings in the Journal Whitelist, 2019]

The DA© rating quantifies a journal’s history of publishing articles from top-performing research institutions. These institutions tend to dedicate more faculty, time, and resources to publishing often and in “popular” journals, so a journal that accepts more articles from them will tend to expect the kind of quality or novelty that such resources make possible. Researchers can therefore use the DA© to find journals with the best blend of potential visibility and manageable exclusivity.
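As with the CCI©, the exact DA© methodology is not public, so the sketch below is only a hypothetical illustration of the idea just described: rating a journal by the share of its articles that come from top-performing institutions. The function name, the set of “top” institutions and the article data are all invented for the example.

```python
# Hypothetical sketch of a DA(c)-style exclusivity rating. The real DA(c)
# methodology is proprietary; this simply scores a journal by the share of
# its articles with at least one author at a top-performing institution.

def difficulty_of_acceptance(articles, top_institutions):
    """Return the share (0-1) of articles with any top-institution author."""
    if not articles:
        return 0.0
    hits = sum(
        1 for article in articles
        if any(aff in top_institutions for aff in article["affiliations"])
    )
    return hits / len(articles)

top = {"Univ A", "Univ B"}  # invented 'top-performing' institutions
journal_articles = [
    {"affiliations": ["Univ A", "Univ X"]},  # counts towards the rating
    {"affiliations": ["Univ Y"]},            # does not count
    {"affiliations": ["Univ B"]},            # counts towards the rating
]
print(difficulty_of_acceptance(journal_articles, top))  # ~0.67
```

On the logic above, a journal scoring higher on this kind of measure would be harder to get into but would offer greater potential visibility.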

For more information on our metrics, methods, and products, please visit www.cabells.com.

When does research end and publishing begin?

In his latest post, Simon Linacre argues that in order for authors to make optimal decisions – and not to get drawn into predatory publishing nightmares – research and publishing efforts should overlap substantially.


In a recent online discussion on predatory publishing, there was some debate as to the motivations of authors who choose predatory journals. A recent study in the ALPSP journal Learned Publishing found that academics publishing in such journals usually fell into one of two camps: either they were “uninformed” that the journal they had chosen was predatory in nature, or they were “unethical” in knowingly choosing such a journal in order to satisfy publication goals.

However, a third category of researcher was suggested: the ‘unfussy’ author, who neither knows nor cares what sort of journal they are publishing in. Certainly, there may be some overlap with the other two categories, but what they all have in common is bad decision-making. Whether one does not know, does not care, or does not mind which journal one publishes in, it seems to me that one should know, care, and mind on all three counts.

It was at this point that one of the group posed one of the best questions I have seen in many years in scholarly communications: when it comes to article publication, where does the science end in scientific research? Due in part to the terminology, as well as the differing processes, research and publication are regarded as somehow distinct or separate – part of the same ecosystem, for sure, but requiring different skills, knowledge and approaches. The question is a good one because it challenges this duality. Isn’t it possible for science to encompass some of the publishing process itself? And shouldn’t the publishing process become more involved in the process of research?

The latter is already happening to a degree, with moves by major publishers to climb up the supply chain and become more involved in research services provision (e.g. the acquisition of article platform services provider Atypon by Wiley). On the other side, there is surely an argument that after the experiments or data collection are complete, the data analyzed logically and the conclusions written up, there is a place for scientific process to be followed in choosing a legitimate outlet with appropriate peer review. Surely any university or funder would expect such a scientific approach at every level from their employees or beneficiaries. A failure here lets in not only sub-optimal choices of journal but, worse, predatory outlets, which will ultimately delegitimize scientific research as a whole.

I get that it may not be such a huge scandal if some ho-hum research is published in a ‘crappy’ journal so that an academic can tick some boxes at their university. However, while the outcome may not be particularly harmful, the tacit allowing of such lazy academic behavior surely has no place in modern research. Structures that force gaming of the system should, of course, be revised, but one can’t help thinking that if academics carried the same rigor and logic forward into their publishing decisions as they apply in their research, scholarly communications would be in much better shape for all concerned.

Still without peer?

Next week the annual celebration of peer review takes place – a process which, despite being centuries old, is still an integral part of scholarly communications. To show Cabells’ support of #PeerReviewWeek, Simon Linacre looks at why peer review deserves both its week in the calendar and to survive for many years to come.


I was recently asked by Cabells’ partners Editage to upload a video to YouTube explaining how the general public benefited from peer review. This is a good question, because I very much doubt the general public is aware at all of what peer review is and how it impacts their day-to-day lives. But if you reflect for just a moment, it is clear it impacts almost everything, much of which is taken for granted on a day-to-day basis.

Take making a trip to the shops. A car is the result of thousands of experiments and validated peer review research over a century to come up with the safest and most efficient means of driving people and things from one place to another; each supermarket product has been health and safety tested; each purchase uses digital technology such as the barcode that has advanced through the years to enable fast and accurate purchasing; even the license plate recognition software that gives us a ticket when we stay too long in the car park will be a result of some peer reviewed research (although most people may struggle to describe that as a ‘benefit’).

So, we do all benefit from peer review, even if we do not appreciate it all the time. Does that prove the value of peer review? For some, it is still an inefficient system for scholarly communications, and over the years a number of platforms have sought to disrupt it. For example, PLoS has been hugely successful as a publishing platform where a ‘light touch peer review’ has taken place to enable large-scale, quick turnaround publishing. More recently, F1000 has developed a post-publication peer review platform where all reviews are visible and take place on almost all articles that are submitted. While these platforms have undoubtedly offered variety and author choice to scientific publishing processes, they have yet to change the game, particularly in social sciences where more in-depth peer review is required.

Perhaps real disruption will come from accommodating peer review rather than changing it. This week’s announcement at the ALPSP Conference by Cactus Communications – part of the same organization as Editage – of an AI-powered platform that allows authors to submit articles to be viewed by multiple journal editors may just change the way peer review works. Instead of the multiple submit-review-reject cycles authors have to endure, they can submit their article to a system that checks for hygiene-factor quality characteristics and relevance to journals’ coverage, and matches them with potentially interested editors who can then offer the opportunity for the article to be peer reviewed.

If it works across a good number of journals, one can see that from the perspective of authors, editors and publishers, it would be a much more satisfactory process than the traditional one that still endures. And a much quicker one to boot, which means that the general public should see the benefits of peer review all the more speedily.

Agile thinking

In early November, Cabells is very pleased to be supporting the Global Business School Network (GBSN) at its annual conference in Lisbon, Portugal. In looking forward to the event, Simon Linacre looks at its theme of ‘Measuring the Impact of Business Schools’, and what this means for the development of scholarly communications.


For those of you not familiar with the Global Business School Network, it has been working with business schools, industry and charitable organizations in the developing world for many years, with the aim of enhancing access to high-quality, highly relevant management education. As such, it is now a global player in developing international networking, knowledge-sharing and collaboration in the wider business education community.

Cabells will support the GBSN Annual Conference in November in its position as a leader in publishing analytics, and will host a workshop on ‘Research Impact for the Developing World’. This session will focus on the nature of management research itself: whether it should address global challenges rather than just business ones, and whether it can be measured effectively by traditional metrics or whether new ones should be introduced. The thinking is that unless the business school community is more proactive about research and publishing models, wider social goals will not be met and an opportunity to set the agenda globally will be lost.

GBSN and its members play a pivotal role here, both in seeking to take a lead on a new research agenda and in seizing the opportunity to be at the forefront of defining what relevant research looks like and how it can be incentivized and rewarded. With the advent of the United Nations Sustainable Development Goals (SDGs) – a “universal call to action to end poverty, protect the planet and ensure that all people enjoy peace and prosperity” – not only is there an increased push to change how research is prioritized, there comes with it a need to assess that research in different terms. This question will form the nub of many of the discussions in Lisbon in November.

So, what kind of new measures could be applied? Well, firstly, this assumes that measures can be applied at all, and there are many people who think that any kind of measurement is unhelpful and unworkable. However, academic systems are built around reward and recognition, so to a greater or lesser degree it is difficult to see measures disappearing completely. Responsible use of such measures is key, as is the informed use of the many data points available – this is why Cabells includes unique data such as time to publication, acceptance rates, and its own Cabells Classification Index© (CCI©), which measures journal performance using citation data within subject areas.

In a new research environment, just as important will be new measures such as Altmetrics, which Cabells also includes in its data. Altmetrics can help express the level of online engagement research publications have had, and there is a feeling that this research communications space will become much bigger and more varied as scholars and institutions alike seek new ways to disseminate research information. This is one of the most exciting areas of development in research at the moment, and it will be fascinating to see what ideas GBSN and its members can come up with at their Annual Conference.

If you would like to attend the GBSN Annual Conference, Early Bird Registration is open until 15 September.