Gray area

While Cabells spends much of its time assessing journals for inclusion in its Journal Whitelist or Journal Blacklist, probably the greater number of titles reside outside the parameters of those two containers. In his latest blog, Simon Linacre opens up a discussion on what might be termed ‘gray journals’ and what their profiles could look like.


 

The concept of ‘gray literature’ to describe a variety of information produced outside traditional publishing channels has been around since at least the 1970s, and has been defined as “information produced on all levels of government, academia, business and industry in electronic and print formats not controlled by commercial publishing (i.e. where publishing is not the primary activity of the producing body)”* (1997; 2004). The definition plays an important role in both characterizing and categorizing information outside the usual forms of academic content, and in a way is the opposite of the chaos and murkiness the term ‘gray’ perhaps suggests.

The same could not be said, however, if we were to apply the term to those journals that inhabit the world outside the two main databases Cabells curates. Its Journal Whitelist indexes over 11,000 journals that satisfy the criteria it uses to assess whether a journal is a reputable outlet for publication. As such, it is a list of recommended journals for any academic to entrust their research to. The opposite is true of the Journal Blacklist, a list of over 13,000 journals that NO ONE should recommend publishing in, given that each has ‘met’ several of Cabells’ criteria.

So, after these two cohorts of journals, what’s left over? This has always been an intriguing question, and one explored most intelligently by Kyle Siler in a recent piece for the LSE Impact Blog. There is no accurate data on just how many journals exist: like grains of sand, they are created and disappear before they can all be counted. Scopus currently indexes well over 30,000 journals, so a conservative estimate might be that there are over 50,000 journals currently active, with 10,000 titles or more not indexed in any recognized database. Drawing on Cabells’ experience of assessing these journals for both Whitelist and Blacklist inclusion, here are some profiles that may help researchers spot which option is best for them:

  • The Not-for-Academics Academic Journal: Practitioner journals often fall foul of indexers, as they are not designed to be used and cited in the same way as academic journals, despite the fact that they look like them. As a result, journals with quite useful, good quality content are often overlooked due to a lack of citations or a non-academic style
  • The So-Bad-it’s-Bad Journal: Just awful in every way – poor editing, poor language, uninteresting research and research replicated from elsewhere. However, it is honest and peer reviewed, so provides a legitimate outlet of sorts
  • The Niche-of-a-Niche Journal: Probably focusing on a scientific area you have never heard of, this journal drills down into a subject area and keeps on drilling so that only a handful of people in the world have the foggiest what it’s about. But if you are one of the lucky ones, it’s awesome. Just don’t expect citation awards any time soon
  • The Up-and-Coming Journal: Many indexers prefer to wait a year or two before including a journal in their databases, so that citations and other metrics can start to be used to assess quality and consistency of publication. In the early years quality can vary widely, but reading the output so far to inform the publishing decision is at least feasible
  • The Worthy Amateur Journal: Often based in a non-research institution or little-known association, these journals have the right idea but publish haphazardly, have small editorial boards and little financial support, producing unattractive-looking journals that may nevertheless hide some worthy articles.

Of course, when you arrive at the publication decision and happen upon a candidate journal that is not indexed, as we said last week simply ‘research your research’: check against the Blacklist and its criteria to detect any predatory characteristics, research the Editor and the journal’s advisory board for their publishing records and seek out the opinion of others before sending your precious article off into the gray ether.


*Third International Conference on Grey Literature in 1997 (ICGL Luxembourg definition, 1997 – expanded in New York, 2004)


***LAST CHANCE!***

If you haven’t already completed our survey, there is still time to provide your feedback. Cabells is undertaking a review of the current branding for ‘The Journal Whitelist’ and ‘The Journal Blacklist’. As part of this process, we’d like to gather feedback from the research community to understand how you view these products, and which of the proposed brand names you prefer.

Our short survey should take no more than ten minutes to complete, and can be taken here.

As thanks for your time, you’ll have the option to enter into a draw to win one of four Amazon gift vouchers worth $25 (or your local equivalent). More information is available in the survey.

Many thanks in advance for your valuable feedback!

Doing your homework…and then some

Researchers have always known the value of doing their homework – they are probably the best there is at leaving no stone unturned. But that has to apply to the work itself. Simon Linacre looks at the importance of ‘researching your research’ and using the right sources and resources.


Depending on whether you are a glass-half-full or glass-half-empty kind of person, it is either a great time for promoting the value of scientific research, or science is suffering a crisis in confidence. On the plus side, the value placed on research to lead us out of the COVID-19 pandemic has been substantial, and rarely have scientists been so much to the fore on such an important global issue. On the other hand, there have been uprisings against lockdowns in defiance of science, and numerous cases of fake science related to the coronavirus. Whether it is COVID-19, Brexit, or global warming, we seem to be in an age of wicked problems and polarizing opinions on a global scale.

If we assume that our glass is more full than empty in these contrarian times, and try to maintain a sense of optimism, then we should be celebrating researchers and the contribution they make. But that contribution has to be validated in scientific terms, and its publication handled in such a way that users can trust what it says. For the first part, there has been a good deal of discussion in academic circles, and even in the press, about the nature of preprints, and how users have to take care to understand that they may not yet have been peer reviewed, so any conclusions should not yet be taken as read.

For the second part, however, there is a concern that researchers in a hurry to publish their research may run afoul of predatory publishers, or simply publish their articles in the wrong way, in the wrong journal, for the wrong reasons. This was highlighted to me when a Cabells customer alerted us to a new website called Academic Accelerator. I will leave people to make up their own minds as to the value of the site; however, a quick test using academic research on accounting (where I managed journals for over a decade, so know the area) showed that:

  • Attempting to use the ‘Journal Writer’ function for an accounting article suggested published examples from STM journals
  • Trying to use the ‘Journal Matcher’ function for an accounting article again only recommended half a dozen STM journals as a suitable destination for my research
  • Accessing data for individual journals seems to rely on crowdsourcing by users, and didn’t match the actual data for many journals in the discipline.

The need for researchers to publish as quickly as possible has perhaps never been greater, and the tools and options for them to do so have arguably never been as open. However, with this comes a gap in the market that many operators may choose to exploit, and at this point, the advice for researchers is the same as ever. Always research your research – know what you are publishing and where you are publishing it, and what the impact will be both in scholarly terms and in real-world terms. In an era where working from home is the norm, there is no excuse for researchers not to do their homework on what they publish.



Simon Linacre

Bad medicine

Recent studies have shown that academics can have a hard time identifying some predatory journals, especially if they come from high-income countries or medical faculties. Simon Linacre argues that this is not surprising given they are often the primary target of predatory publishers, but a forthcoming product from Cabells could help them.


A quick search of PubMed for predatory journals will throw up hundreds of results – over the last year I would estimate there are on average one or two papers published each week on the site (and you can sign up for email alerts on this and other scholarly communication issues at the estimable Biomed News site). The papers tend to fall into two categories – editorial or thought pieces on the blight of predatory journals in a given scientific discipline, or original research on the phenomenon. While the former are necessary to raise the profile of the problem among researchers, they do little to advance the understanding of such journals.

The latter, however, can provide illuminating details about how predatory journals have developed, and in so doing offer lessons in how to combat them. Two such articles were published last week in the field of medicine. In the first paper, ‘Awareness of predatory publishing’, authors Panjikaran and Mathew surveyed over 100 authors who had published articles in predatory journals. While a majority of authors (58%) were ignorant of such journals, among those who said they recognized them nearly half of respondents from high-income countries (HICs) failed a recognition test, while nearly a quarter from low- and middle-income countries (LMICs) also failed. The result, therefore, was a worrying lack of understanding of predatory journals among authors who had already published in them.

The second article, ‘Faculty knowledge and attitudes regarding predatory open access journals: a needs assessment study’, was authored by Swanberg, Thielen and Bulgarelli. In it, they surveyed both regular and medical faculty members of a university to ascertain whether they understood what was meant by predatory publishing. Almost a quarter (23%) said they had not heard of the term before, but of those that had, 87% said they were confident of being able to assess journal quality. However, when tested with journals in their own fields, only 60% could do so, with scores even lower for medical faculty.

Both papers call for greater education and awareness programs to support academics in dealing with predatory journals, and it is here that Cabells can offer some good news. Later this year Cabells intends to launch a new medical journal product that identifies good quality journals in the vast medical field. Alongside our current products covering most other subject disciplines, the new medical product will enable academics, university administrators, librarians, tenure committees and research managers to validate research sources and the publication outputs of faculty members. They will also still be backed up, of course, by the Cabells Journal Blacklist, which now numbers over 13,200 predatory, deceptive or illegitimate journals. Indeed, in the paper by Swanberg et al the researchers asked faculty members what support they would like to see from their institution, and the number one answer was a “checklist to help assess journal quality.” This is exactly the kind of feedback Cabells has received over the years, and it has driven us to develop the new product for medical journals; hopefully it will support good publishing decisions in the future alongside our other products.



Simon Linacre

The price of predatory publishing

What is the black market in predatory publishing worth each year? No satisfactory estimate has yet been produced, so Simon Linacre has decided to grab the back of an envelope and an old biro to try to make an educated guess.


Firstly, all of us at Cabells would like to wish everyone well during this unusual and difficult time. We are thinking a great deal about our customers, users, publishers and researchers who must try and maintain their important work during the coronavirus pandemic. Whether you are in lockdown, self-isolating, or are more or less free of restrictions, please be assured that Cabells’ services are still available for your research needs, and if there are any problems with access, please do not hesitate to contact us at journals@cabells.com.

Possibly as a result of spending too much time holed up at home, a friend of mine in scholarly communications asked me last week how much predatory publishers earn each year. I confess I was a little stumped at first. Although Cabells has created the world’s most comprehensive database of predatory titles in its Journal Blacklist, it does not collate information on article processing charges (APCs), and even if it did, the figures would bear little relation to what was actually paid, as APCs are often discounted or waived. Indeed, sometimes authors are even charged a withdrawal fee for the predatory journal to NOT publish their article.

So, where do you start trying to estimate a figure? Well, firstly you can try reviewing the literature, but this brings its own risks. The first article I found estimated it to be in the billions of dollars, which immediately failed the smell test. After looking at the articles it had cited, it became clear that an error had been made – the annual value of all APCs is estimated to be in the billions, so the figure for predatory journals is likely to be a lot less.

The second figure was in an article by Greco (2016), which estimated predatory journals to be earning around $75m a year – a more reasonable figure. But is there any way to validate it? Well, recently the Federal Trade Commission (FTC) closed the case on its judgement against the OMICS Group, which it had fined $50.1m in April 2019 (Linacre, Bisaccio & Earle, 2019). After the judgement was passed, there were further checks to ensure the FTC had been fair and equitable in its dealings, and these were all validated. This included the $50m fine and the way it was worked out… which means you could take these calculations and extrapolate them out to all of the journals included in the Cabells Journal Blacklist.

And no, this is not mathematically valid, nor is it any guarantee of getting near a correct answer – it is just one way of producing an estimate so that we can get a handle on the size of the problem.

So, what the back of my dog-eared envelope shows is that:

  • The judgement against OMICS was for $50,130,811, which represented the revenues it had earned between August 25, 2011 and July 31, 2017 (2,167 days, or 5.94 years)
  • The judgement did not state how many journals OMICS and its subsidiaries operated, but Cabells includes 776 OMICS-related journals in its Journal Blacklist
  • On that data, each OMICS journal earns revenues of around $10,876 per year
  • If we assume OMICS is a typical predatory publisher (though it is bigger and more professional than most predatory operators) and extrapolate across the whole Blacklist of 13,138 journals, that gives a value of $142.9m a year
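The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. This is only a reproduction of the rough calculation described in this post, using the figures quoted here (the judgement sum, journal counts and rounding are taken from the bullets above, not from any external data source):

```python
from datetime import date

# Figures from the FTC judgement against OMICS, as quoted in this post
judgement_usd = 50_130_811
period_days = (date(2017, 7, 31) - date(2011, 8, 25)).days   # 2,167 days
period_years = round(period_days / 365, 2)                   # ~5.94 years

# 776 OMICS-related journals appear in the Cabells Journal Blacklist
omics_journals = 776
per_journal_per_year = judgement_usd / period_years / omics_journals  # ~$10,876

# Naive extrapolation to the whole Blacklist (13,138 journals at the time),
# treating OMICS as a typical predatory publisher
blacklist_journals = 13_138
market_estimate = per_journal_per_year * blacklist_journals  # ~$142.9m

print(f"{period_days} days ({period_years} years)")
print(f"${per_journal_per_year:,.0f} per journal per year")
print(f"${market_estimate / 1e6:.1f}m estimated annual market")
```

As the post notes, this extrapolation is an upper bound rather than a measurement: it assumes every Blacklist journal earns what an OMICS journal did.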

I do think this is very much at the top end, as many predatory publishers charge ultra-low APCs to attract authors, while some may have stopped functioning altogether. On the flip side, however, we are adding to the Blacklist all the time, and new journals are being created daily. So, I think a reasonable estimate based on the FTC judgement and Cabells data is that the predatory journal market is probably worth between $75m and $100m a year. What the actual figure might be is, however, largely irrelevant. What is relevant is that millions of dollars of funders’ grants, charitable donations and state funding have been wasted on these outlets.

References:

Greco, A. N. (2016). The impact of disruptive and sustaining digital technologies on scholarly journals. Journal of Scholarly Publishing, 48(1), 17–39. doi: 10.3138/jsp.48.1.17

Linacre, S., Bisaccio, M., & Earle, L. (2019). Publishing in an environment of predation: The many things you really wanted to know, but did not know how to ask. Journal of Business-to-Business Marketing, 26(2), 217–228. doi: 10.1080/1051712X.2019.1603423

 

Unintended consequences: how will COVID-19 shape the future of research?

What will happen to global research output during lockdowns as a result of the coronavirus? Simon Linacre looks at how the effects in different countries and disciplines could shape the future of research and scholarly publications.


We all have a cabin fever story now that many countries have entered varying states of lockdown. Mine is how the little things have lifted what has been quite an oppressive mood – the smell of buns baking in the oven; lying in bed that little bit longer in the morning; noticing the newly born lambs that have suddenly appeared in nearby fields. All of these would be missed during the usual helter-skelter days we experience during the week. But things are very far from usual in these coronavirus-infected days, and any distraction is a welcome one.

On a wider scale, the jury is still very much out as to how researchers are dealing with the situation, let alone how things will be affected in the future. What we do know is that in those developed countries most impacted by the virus, universities have been closed down, students sent home and labs mothballed. In some countries such as Italy there are fears important research work could be lost in the shutdown, while in the US there is concern for the welfare of those people – and animals – who are currently in the middle of clinical trials. Overall, everyone hopes that the specific research into the coronavirus yields some quick results.

On the flip side, however, for those researchers not confined to labs or field research, this period could accelerate their work. For those in social science or humanities freed from the commute, teaching commitments and office politics of daily academic life, the additional time will no doubt be put to good use. More time to set up surveys; more time for reading; more time for writing papers. Increased research output is perhaps inevitable in those areas where academics are not tied to labs or other physical experiments.

These two countervailing factors may cancel each other out, or one may prevail over the other. As such, the scholarly publishing community does not yet know what to expect down the line. In the short term, it has focused on making related content freely accessible (such as this site from The Lancet). However, we may see greater pressure for research in potentially globally important areas to be made open access at source, given how well researchers and networks have seemed to work together in the short time the virus has been at large.

Again, unintended consequences could be one of the key legacies of the crisis once the virus has died down. Organizations concerned about how their people can work from home will no doubt have their fears allayed, while the positive environmental impact of less travelling will be difficult to give up. For publishers and scholars, understanding how their research could have an impact when the world is in crisis may change their research aims forever.

The future of research evaluation

Following last week’s guest post from Rick Anderson on the risks of predatory journals, we turn our attention this week to legitimate journals and the wider issue of evaluating scholars based on their publications. With this in mind, Simon Linacre recommends a broad-based approach that keeps the goal of such activities permanently front and center.


This post was meant to be ‘coming to you LIVE from London Book Fair’, but as you may know, this event has been canceled, like so many other conferences and other public gatherings in the wake of the coronavirus outbreak. While it is sad to miss the LBF event, meetings will take place virtually or in other places, and it is to be hoped the organizers can bring it back bigger and better than ever in 2021.

Some events are still going ahead, however, in the UK, and it was my pleasure to attend the LIS-Bibliometrics Conference at the University of Leeds last week to hear the latest thinking on journal metrics and performance management for universities. The day-long event was themed ‘The Future of Research Evaluation’, and it included both longer talks from key people in the industry, and shorter ‘lightning talks’ from those implementing evaluation systems or researching their effectiveness in different ways.

There was a good deal of debate, both on the floor and on Twitter (see #LisBib20 to get a flavor), with perhaps the most interest in speaker Dr. Stephen Hill, who is Director of Research at Research England, and chair of the steering group for the 2021 Research Excellence Framework (REF) in the UK. For those of us wishing to see crumbs from his table in the shape of a steer for the next REF, we were sadly disappointed as he was giving nothing away. However, what he did say was that he saw four current trends shaping the future of research evaluation:

  • Outputs: increasingly they will be diverse, include things like software code, be more open, more collaborative, more granular and potentially interactive rather than ‘finished’
  • Insight: different ways of understanding may come into play, such as indices measuring interdisciplinarity
  • Culture: the context of research and how it is received in different communities could become explored much more
  • AI: artificial intelligence will become a bigger player both in terms of the research itself and how the research is analyzed, e.g. the Unsilo tools or so-called ‘robot reviewers’ that can remove any reviewer bias.

Rather revealingly, Dr. Hill suggested that a fifth trend might be societal impact, despite the fact that such impact has been one of the defining traits of both the current and previous REFs. Perhaps the full picture has yet to be understood regarding impact, and there is some suspicion that many academics have yet to buy in to the idea at all. Indeed, one of the takeaways from the day was that there was little input into the discussion from academics themselves, and one wonders what they might have contributed to a discussion about the future of research evaluation; it is their research being evaluated, after all.

There was also a degree of distrust among the librarians present towards publishers, and one delegate poll should be of particular interest to them as it showed what those present thought were the future threats and challenges to research evaluation. The top three threats were identified as publishers monopolizing the area, commercial ownership of evaluation data, and vendor lock-in – a result which led to a lively debate around innovation and how solutions could be developed if there was no commercial incentive in place.

It could be argued that while the UK has taken the lead on impact and been busy experimenting with the concept, the rest of the higher education world has been catching up with a number of different takes on how to recognize and reward research that has a demonstrable benefit. All this means that we are yet to see the full ‘impact of impact,’ and certainly we at Cabells are looking carefully at what potential new metrics could aid this transition. Someone at the event said that bibliometrics should be “transparent, robust and reproducible,” and this sounds like a good guiding principle for whatever research is being evaluated.

Growth of predatory publishing shows no sign of slowing

This week the Cabells Journal Blacklist has hit 13,000 titles, and while the number itself is not that significant, its continued rate of growth shows that the problem of predatory publishing shows no sign of abating. In his latest post, Simon Linacre shares a case study of what a predatory journal looks like and why their continued growth should concern us all.


Firstly, a warning: this post will share a link to a journal that Cabells has identified as predatory in nature, and as such, you should take precautions before giving it a click. There is evidence that some predatory journal websites, whether by accident or design, contain malware that can infect your computer and its networked systems. So, if you do click through, please don’t share any information on the site.

Welcome to the dark world of predatory publishing.

Despite the risks, it is useful to look at a specific predatory journal to gain some insight into how they operate and what they contain. The example we are using is the International Journal of Science Technology & Management, which appears to be based in India, has been publishing several issues annually since 2012, and includes hundreds of articles freely accessible as PDFs. This particular journal has one of the highest numbers of breaches of our Blacklist criteria, some of which are included below to help explain why the journal is predatory:

  1. Editors do not actually exist or are deceased. The journal does not name an Editor or Editors but has a huge list of names and affiliations, many of which do not actually exist or are listed without their knowledge.
  2. The journal’s website does not have a clearly stated peer review policy. The journal states it is “refereed”, but there is no evidence this occurs.
  3. Falsely claims indexing in well-known databases (especially SCOPUS, DOAJ, JCR, and Cabells). This is a key indicator of predatory journals, and can be easily checked – this particular journal claims it is indexed by Cabells (this is categorically untrue), listed by DOAJ (also false) and has an Impact Factor of 2.012 (most definitely incorrect).
  4. The website does not identify a physical address for the publisher or gives a fake address. Sometimes an address will be given that is shared with 8,459 other businesses, which is remarkable in that it turns out to be a small terraced house in suburban England. In this example, there is an address you can find after some searching, but it is spelled incorrectly, and the location in India is also home to dozens of other journals and conferences the publisher operates – but no offices.
  5. The publisher or journal’s website seems too focused on the payment of fees. Many predatory publishers charge the going rate of $1,000+ to publish in them, but this journal ‘only’ charges $60 (plus $20 if you require a certificate). This may seem a bargain to some, but authors are being ripped off even at this low price.

There are many other problems with the journal, not least that the quality of articles published in it would embarrass any high school student, let alone an academic. However, while the desire and ease of publishing in such journals persists, Cabells will have to increase its Journal Blacklist by many more thousands to keep pace with demand.

Predatory publishing from A to Z

During 2019, Cabells published on its Twitter feed (@CabellsPublish) at least one of its 70+ criteria for including a journal on the Cabells Journal Blacklist, generating great interest among its followers. For 2020, Simon Linacre highlights a new initiative below where Cabells will publish its A-Z of predatory publishing each week to help authors identify and police predatory publishing behavior.


This week a professor I know well approached me for some advice. He had been approached by a conference to present a plenary address on his research area but had been asked to pay the delegate fee. Something didn’t seem quite right, so knowing I had some knowledge in this area he asked me for some guidance. Having spent considerable time looking at predatory journals, it did not take long to notice signs of predatory activity: direct commissioning strategy from unknown source; website covering hundreds of conferences; conferences covering very wide subject areas; unfamiliar conference organizers; guaranteed publication in unknown journal; evidence online of other researchers questioning the conference and its organizers’ legitimacy.

Welcome to ‘C for Conference’ in Cabells’ A-Z of predatory publishing.

From Monday 17 February, Cabells will be publishing some quick hints and tips to help authors, researchers and information professionals find their way through the morass of misinformation produced by predatory publishers and conference providers. This will include links to helpful advice, as well as the established criteria Cabells uses to judge if a journal should be included in its Journal Blacklist. In addition, we will be including examples of predatory behavior from the 12,000+ journals currently listed on our database so that authors can see what predatory behavior looks like.

So, here is a sneak preview of the first entry: ‘A is for American’. The USA is a highly likely source of predatory journal activity, as the country’s name lends credence to any claim of legitimacy a journal may adopt to hoodwink authors into submitting articles. In the Cabells Journal Blacklist there are over 1,900 journals that include the word ‘American’ in their titles or publisher names. In comparison, just 308 Scopus-indexed journals start with the word ‘American’. For example, the American Journal of Social Issues and Humanities purports to be published from the USA, but this cannot be verified, and it has 11 violations of Journal Blacklist criteria, including the use of a fake ISSN number and a complete lack of any editor or editorial board member listed on the journal’s website (see image).

‘A’ also stands for ‘Avoid at all costs’.

Please keep an eye out for the tweets and other blog posts related to this series, which we will use from time to time to dig deeper into understanding more about predatory journal and conference behavior.

Look before you leap!

A recent paper published in Nature provides a tool researchers can use to check the publication integrity of a given article. Simon Linacre looks at this welcome support for researchers, and at how it raises questions about the research/publication divide.

Earlier this month, Nature published a well-received comment piece by an international group of authors entitled ‘Check for publication integrity before misconduct’ (Grey et al., 2020). The authors wanted to create a tool to enable researchers to spot potential problems with articles before they got too invested in the research, citing a number of recent examples of misconduct. The tool they came up with is a checklist called REAPPRAISED, which uses each letter to identify an area – such as plagiarism, or statistics and data – that researchers should check as part of their workflow.
 
As a general rule for researchers, and as a handy mnemonic, the tool seems to work well, and authors using it as part of their research should avoid the potential pitfalls of relying on poorly researched and published work. We at Cabells would perhaps argue that an extra ‘P’ should be added for ‘Predatory’, covering the checks researchers should make to ensure the journals they are using, and intend to publish in, are legitimate. To do this comprehensively, we would recommend using our own criteria for the Cabells Journal Blacklist as a guide, and of course, using the database itself where possible.
 
The guidelines also raise a fundamental question for researchers and publishers alike as to where research ends and publishing starts. For many involved in academia and scholarly communications, the two worlds are inextricably linked and overlap, but are nevertheless different. Faculty members of universities do their research thing and write articles to submit to journals; publishers manage the submission process and publish the best articles for other academics to read and in turn use in their future research. 
 
Journal editors sit at the nexus of these two areas, as they tend to be academics themselves while working for the publisher, and as such have a foot in both camps. But while they are knowledgeable about the research that has been done and may actively research themselves, as editors their role is performed on behalf of the publisher, and it is they who ultimately decide which articles are good enough to be recorded in their publication: the proverbial gatekeepers.
 
What the REAPPRAISED tool suggests, however, is that for authors the notional research/publishing divide is not a two-stage process, but rather a continuum. Only if authors embark on research intent on fully apprising themselves of all aspects of publication integrity can they guarantee the integrity of their own research, which in turn includes how and where that research is published. Rather than treating it as a two-step process, authors can better ensure the quality of their research AND publications by including all publishing processes as part of their own research workflow. By doing this, and using tools such as REAPPRAISED and the Cabells Journal Blacklist along the way, authors can take better control of their academic careers.


Think globally, act now

The great and the good of world business, finance, and politics met this week in the small Swiss resort of Davos, pledging to enact change and make a real difference to how the world works. But what is so different this time? Simon Linacre reports on his first visit to the World Economic Forum, and how business schools can play a pivotal role in changing the system.


It was 50 years ago that Dr. Klaus Schwab first invited business leaders to the small mountain retreat known as Davos, and since then it has grown into THE conference at which to see and be seen. Just over 3,000 lucky individuals are invited, with even fewer gaining the “access all areas” accreditation that gets you into sessions with the likes of Donald Trump, Greta Thunberg, or Prince Charles. The whole performance is surreal, with limousines whisking delegates the shortest of distances through the traffic-clogged streets while slightly bewildered-looking skiers and snowboarders look on.

From start to finish, there was a noticeable tension in the air. Security was at a high level, with airport-standard checks at hotels and conference centers and armed guards at every turn. There was also conflict around the critical issue of climate change – while President Trump declared the issue to be exaggerated, a huge sign had been carved into the snow for all those arriving by helicopter and train to see: ‘ACT ON CLIMATE’:

Photo by Simon Linacre

There also seems to be a conflict in how to deal with climate change and the other major issues facing business and management today, and the responses can be broadly put into two camps. One believes that compromise is the answer, and big business seems to have largely chosen this path in Davos, with everyone keen to state their environmental credentials or how they are pursuing one of the key phrases of the event, ‘stakeholder capitalism’ – an approach first espoused by Dr. Schwab fully 50 years earlier at the inaugural event.

The other camp believes that the answer lies in change. And not just change, but radical change. An example of this was the launch in Davos of the Positive Impact Rating (PIR), an attempt to rate business schools for students and by students. Over 3,000 students were surveyed, and 30 business schools were rated as either ‘progressing’ (Level 3) or ‘transforming’ (Level 4) in terms of societal responsibility and impact – the results can be seen at www.PositiveImpactRating.org. Many of the business school deans and business leaders present were in favor of such an approach, believing that if business schools are to have any credibility in a society where sustainable development goals (SDGs), climate change, and social responsibility play an increasingly important role, the time to change and act is now. PIR is part of a wave of organizations – such as Corporate Knights, the UN Global Compact, and PRME – that recognize and promote progressive business and education practices that are now becoming mainstream.


This approach is not without its critics, with some existing rating providers and business school leaders cautioning against too much change lest consistency and quality be ignored completely. These voices seem increasingly isolated and anachronistic, however, and there was a feeling that, with 2030 set as the deadline for turning things around, business schools must decide now whether to choose the path of compromise or of change. If they are to remain relevant, it seems the latter has to be the right direction to take.