The future of research evaluation

Following last week’s guest post from Rick Anderson on the risks of predatory journals, we turn our attention this week to legitimate journals and the wider issue of evaluating scholars based on their publications. With this in mind, Simon Linacre recommends a broad-based approach that keeps the goal of such activities permanently front and center.


This post was meant to be ‘coming to you LIVE from London Book Fair’, but as you may know, the event has been canceled, like so many other conferences and public gatherings, in the wake of the coronavirus outbreak. While it is sad to miss LBF, meetings will take place virtually or in other venues, and it is to be hoped the organizers can bring the fair back bigger and better than ever in 2021.

Some events in the UK are still going ahead, however, and it was my pleasure to attend the LIS-Bibliometrics Conference at the University of Leeds last week to hear the latest thinking on journal metrics and performance management for universities. The day-long event was themed ‘The Future of Research Evaluation’ and included both longer talks from key people in the industry and shorter ‘lightning talks’ from those implementing evaluation systems or researching their effectiveness in different ways.

There was a good deal of debate, both on the floor and on Twitter (see #LisBib20 to get a flavor), with perhaps the most interest reserved for Dr. Stephen Hill, Director of Research at Research England and chair of the steering group for the 2021 Research Excellence Framework (REF) in the UK. Those of us hoping for crumbs from his table, in the shape of a steer for the next REF, were sadly disappointed, as he was giving nothing away. What he did say, however, was that he sees four current trends shaping the future of research evaluation:

  • Outputs: these will be increasingly diverse, including things like software code, and more open, more collaborative, more granular, and potentially interactive rather than ‘finished’
  • Insight: different ways of understanding may come into play, such as indices measuring interdisciplinarity
  • Culture: the context of research, and how it is received in different communities, could be explored much more
  • AI: artificial intelligence will play a bigger role, both in the research itself and in how that research is analyzed, e.g. the Unsilo tools or so-called ‘robot reviewers’ that aim to remove reviewer bias.

Rather revealingly, Dr. Hill suggested a fifth trend might be societal impact, despite the fact that such impact has been one of the defining traits of both the current and previous REFs. Perhaps the full picture regarding impact has yet to be understood, and there is some suspicion that many academics have yet to buy in to the idea at all. Indeed, one of the takeaways from the day was how little input there was from academics at all; one wonders what they might have contributed to a discussion about the future of research evaluation, as it is, after all, their research being evaluated.

There was also a degree of distrust toward publishers among the librarians present, and one delegate poll should be of particular interest to publishers, as it showed what those present saw as the future threats and challenges to research evaluation. The top three threats were identified as publishers monopolizing the area, commercial ownership of evaluation data, and vendor lock-in, a result which led to a lively debate around innovation and how solutions could be developed if no commercial incentive were in place.

It could be argued that while the UK has taken the lead on impact and been busy experimenting with the concept, the rest of the higher education world has been catching up, with a number of different takes on how to recognize and reward research that has a demonstrable benefit. All this means we have yet to see the full ‘impact of impact’, and certainly we at Cabells are looking carefully at what potential new metrics could aid this transition. Someone at the event said that bibliometrics should be “transparent, robust and reproducible,” and this sounds like a good guiding principle for whatever research is being evaluated.
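As a toy illustration of what a “transparent, robust and reproducible” metric can look like in practice, here is a minimal sketch of one of the oldest bibliometrics, the h-index: the largest h such that a researcher has h papers with at least h citations each. The citation counts below are invented for illustration, and this is not a Cabells metric; the point is simply that the whole calculation is open and repeatable by anyone.

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that the researcher has
    h papers with at least h citations each."""
    # Rank citation counts from highest to lowest.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the rank-th paper still clears the bar
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers.
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3
```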

Predicting 2019 is a fool’s game… so here are some predictions!

Five things that may or may not happen this year — In his first post of 2019, Simon Linacre lifts the lid on what he expects to happen in the most unpredictable of years since, erm, 2018…


A very Happy New Year to everyone, and as has become traditional in post-Christmas, early-January posts, I thought I would bring out the old crystal ball to try to predict some trends and areas of development in scholarly publishing in 2019. However, please do not think for one second that this is in any way a scientific or even divine exercise; we all know we may as well stick a few random happenings on a wall and throw darts at them blindfolded in the hope of divining what may or may not occur in the next few months. So, with that caveat in mind, here are five predictions that at least have some vague hope of coming to pass this year:

  1. #Plan_S – the agreement by 11 major European funders to mandate certain types of Open Access publication by the researchers they support – has already kept commentators in scholarly communications busy in the early days of 2019. Suffice it to say, it will undoubtedly gain traction, with all eyes simultaneously on the U.S. and China to see whether funders in those research behemoths sign up to or explicitly support the movement. However, while Plan S may hasten change in STEM funding and publishing communities, that change may come faster than academia itself can adapt, with petitions being raised against the plan and significant communities outside Europe and/or the STEM subjects still largely oblivious to it.
  2. The most popular research-related search terms of 2018 included ‘AI’ and ‘blockchain’, reflecting the belief that both can have a major influence on scientific development across a huge range of areas. Expect both to have more of an influence on scholarly publishing in 2019, with blockchain applied to peer review systems and AI improving the ways knowledge is utilized, especially in countries set up to exploit such opportunities.
  3. Hot on the heels of the news that the entire editorial board of Elsevier’s Journal of Informetrics has resigned to launch its own journal, Quantitative Science Studies, with MIT Press, bibliometrics should remain in the headlines, with new metrics appearing or rumored on a regular basis. Chief among these will be new rankings slated to appear from Times Higher Education and other organizations, based around utility, impact, or relevance rather than serving as a proxy for quality.
  4. While any prediction around Brexit – especially this week, day, hour, or even minute – is wholly futile, several shifts can already be seen as a result of this and other major political events. Uncertainty around Brexit, particularly fears of a so-called no-deal Brexit, will inevitably cause some prospective students to think long and hard about any plans to study in the UK, while President Trump’s one-of-a-kind presidency may have a similar effect. Elections in Europe will also have major ramifications for higher education, not least for where EU research money goes if/when the UK eventually exits.
  5. Given the increasingly complicated nature of higher education on both a macro- and micro-scale, it is also to be hoped that we all become a little more skilled and experienced at dealing with the so-called ‘VUCA’ environment: one that is volatile, uncertain, complex, and ambiguous. Steering through these uncharted waters in the calmest way possible can be the only path to take – and it is to be hoped our leaders show us the way.