The ORCiD: A Universal Persistent Identifier for Authors

In recent years, the term ‘ORCiD’ has become increasingly common throughout the research publication disciplines. Here, we’ll discuss the purpose, utility, current state, and potential future problems of ORCiD numbers in medical research.

What is ORCiD?

ORCiD, standing for Open Researcher & Contributor ID, is an international cross-disciplinary registry that assigns authors a unique identifying number that can be connected to all of their research publications. ORCiD profiles act as a single platform that lists all publications associated with one researcher, without the cross-posting and mistakes often seen when relying on names. In other words, an ORCiD is for authors what a DOI is for research articles: a persistent identifier that is associated with only one specific entity, reducing potential confusion or mix-ups. ORCiD is an especially useful tool for maintaining complete publication records for authors with common names, those who choose to use identifiers other than their given and family name (i.e., those who primarily use chosen or middle names), or those who change their name during their research career.

ORCiD numbers result in more comprehensive and accurate research bibliometrics and evaluation of an author’s presence and impact in their field; from a technical standpoint, unique identifying numbers are invaluable for simplifying data handling and cross-referencing. From the journal’s perspective, ORCiD is becoming a critical tool for identifying research fraud and preventing problematic authors with a history of research misconduct from submitting their papers.

While ORCiD is most publicly known for associating authors with their published research articles, this identifier can be used for a range of purposes beyond publications. Clinical trial registries, grant funding records, patents, and datasets can also be associated with ORCiDs, creating a singular resource to connect disparate research activities with an individual. Because ORCiD is financially supported by a network of roughly 1,300 member organizations, ORCiD registration is free for authors (even if the author’s institution is not a member organization).

Current State of ORCiD Implementation

ORCiD was not the research community’s first attempt to clarify research publication records or even to create unique ID numbers for authors. There are many discipline-, institution-, or country-specific unique author identifying systems, such as ResearcherID, DAI (digital author identifier), and Scopus Author Identifier numbers. However, ORCiD is unique in that it works with these other systems rather than in competition with them, acting as a centralized platform connecting separate persistent identifier systems with one another.

ORCiD has become a widespread presence throughout medical research journals. Almost all medical journals provide the option for authors to connect their ORCiD numbers to their publication, with ORCiD login integrations commonly offered in lieu of journal submission platform login credentials or made available in the author listing section of a submission. In recent years, more journals are requiring ORCiD numbers for corresponding authors. With widespread adoption of ORCiD, some researchers are beginning to use the platform as a data source.
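For researchers who do want to treat ORCiD as a data source, public records can be retrieved programmatically. Below is a minimal sketch, assuming the public API at https://pub.orcid.org/v3.0/ and its JSON response layout; the helper function name and the example iD are purely illustrative, so consult ORCiD’s developer documentation before relying on it.

```python
# Minimal sketch: pull the titles of works listed on a public ORCiD record.
# Assumes the public API at https://pub.orcid.org/v3.0/ and its JSON layout.
import requests

def fetch_public_work_titles(orcid_id: str) -> list[str]:
    """Return the titles of works publicly listed on an ORCiD record."""
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/works"
    response = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
    response.raise_for_status()
    titles = []
    for group in response.json().get("group", []):
        for summary in group.get("work-summary", []):
            title = summary.get("title", {}).get("title", {}).get("value")
            if title:
                titles.append(title)
    return titles

if __name__ == "__main__":
    # Substitute any public ORCiD iD; this value is used only for illustration.
    for work_title in fetch_public_work_titles("0000-0002-1825-0097"):
        print(work_title)
```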

ORCiD as a Professional Networking Tool

The ORCiD website has grown from a landing page for the utility-based system to a professional networking platform. Like LinkedIn, ORCiD allows authors to create their own profiles that include associated websites and social links, biography, affiliated institutions/employment history, alternative names, and other unique author identifier numbers (as mentioned above). However, ORCiD is not a social networking platform—you can’t create connections with other researchers, send messages through the platform, or post status updates.

Potential Pitfalls

Though ORCiD has extensive promise as a metadata tool for bibliometrics and publication management, it is not without its drawbacks. There are several shortcomings and concerns that have been cited by researchers.

While there are multiple potential future issues with the ORCiD system, for now, it appears that the benefits outweigh the drawbacks. Regardless, the research community should remain mindful of the potential problems that may arise so that they can be handled swiftly and completely.

How to Create and Use an ORCiD

Creating an ORCiD is a free, quick, and easy process. Visit https://orcid.org/register to complete the three-step registration and verification process. Once your ORCiD number has been created, you can add as much or as little information as you would like to your profile and link prior research articles you’ve published to your new ID number. Once you have an ORCiD, you can link any future publications to your ID number. You can also include your ORCiD number on your professional social media platforms, resume/CV/biosketch, website, or institutional faculty page to enhance your research’s discoverability.

The Art of Research: How Journal Covers Influence Readers and Research

When you think of a research journal, what do you picture? Is it a vivid, detailed art spread, or a simplistic and minimalist design? Journal cover art is a surprisingly polarized medium—most journals feature either highly graphic, detailed, and aesthetically pleasing art or subdued, uniform designs, but there aren’t many journals that fall somewhere in between. Each style of journal cover communicates a different subtextual message and can play an important role in signaling the target audience of the publication.

Vivid and Graphic Journal Cover Art

Some journals feature highly graphic covers for each issue, similar to the magazine approach of drawing a potential reader’s attention by using vivid, striking design to stand out among its many competitors. As such, we’ll call these ‘magazine-style’ covers for the purpose of this article. Pioneers in the use of magazine-style journal covers are Science, Nature, and the Lancet. Magazine-style cover art must fulfill several critical roles to be successful:

Simplistic and Uniform Journal Cover Art

On the other end of the spectrum is minimalist cover art. For these journals, issue covers typically look nearly identical and feature no images at all, with only the text changing from issue to issue; we’ll call these ‘uniform’ cover designs. It may be surprising to see that the journal with the highest impact factor across all of academia—CA: A Cancer Journal for Clinicians, with a whopping 2022 journal impact factor of 254.7—uses a uniform cover art style. Many smaller journals will also use uniform cover art, as this strategy is more cost-effective. With uniform cover design, the journals are aiming to achieve a distinct set of goals:

  • Enhance readability. By using a uniform template for each issue, readers know exactly where to look to gain the information they need. It’s typically quicker to identify the journal issue number or key articles.
  • Prioritize content, not appeal. These covers aren’t aiming to draw in readers—the journal trusts that readers will come for the content itself, without needing to be enticed by cover art.
  • Express academic rigor. Some readers may interpret a uniform cover design as a signal of the journal’s focus on scientific content rather than mass appeal, improving their impression of the journal’s research value.
  • Build a brand. Similar to highly graphic cover art, these covers still play a large role in establishing a journal’s brand. Even minimalistic covers will be used to establish a journal’s typography, color palette, logo, and other features that will be consistent across all of the journal’s materials, including their website, presentation materials, and more.

What Cover Art Says About the Journal

Ultimately, the most important task that cover art performs is establishing the brand of a journal. Branding is a vital tool for establishing a journal’s reputability, respectability, and target audience (Gringarten et al., 2011). Branding can also be used to align a journal with its publisher or connected organization—for example, some journals published by Harvard University use the University’s distinct red and white colors and Proxima Nova/Merriweather fonts to align with the University’s established brand.

Cover art can play a key role in subconsciously communicating the publication’s target audience. Many journals that focus on a narrow target audience of researchers within the journal’s field of study use uniform journal covers. For these journals, cover designs don’t have to entice readers—they rely on their readers having an established pattern of reviewing every issue or being brought to the publication to read a specific article, not by the issue cover. On the other hand, many journals with magazine-style cover art aim to attract a wider audience of lay people or scientists outside of the specific discipline the journal publishes in. Because they can’t rely on readers independently seeking out research articles, they invest more resources in attracting attention. Having a wider scope also means that they’re up against more competition for attention, so there’s more emphasis on catching readers’ eye.

Which Cover Art Style is Better?

There’s no clear-cut better or worse cover art style. Magazine-style and uniform cover art serve separate purposes and communicate distinct brand identities to the reader. Regardless of cover art style, a journal should be evaluated on the merits of its scientific content; however, the art can serve as a shortcut toward understanding the journal’s intentions.

Understanding Gray Literature: The Value of Nontraditional Publications

As a standard practice, many literature reviews exclude ‘gray literature,’ a category that describes research and literature published outside of the traditional academic publishing industry. However, completely overlooking gray literature results in a wide array of valuable and excellent research being excluded from the overall body of scientific knowledge. A thorough understanding of what gray literature is and the ideal circumstances and cautions for using it could help you uncover hidden evidence that greatly improves your research.

What’s Considered Gray Literature?

In the broadest sense, gray literature comprises any material that isn’t published through a traditional academic publisher (e.g., published through a journal). Importantly, this typically means that the piece hasn’t undergone the traditional peer review process. These pieces may report on a research study, an individual’s opinion, an event, a stakeholder or advisory board discussion, an organizational policy, and more. They’re usually published online only. Some common examples of gray literature include governmental or industry reports/white papers, graduate dissertations, conference proceedings, newsletters or mail-outs, policy documents, and blog posts.

When Should Gray Literature Be Used?

Early during the information synthesis process for a research project, it’s a good idea to complete at least a cursory review of the gray literature. By searching through gray literature, you could find niche or null-results studies that may influence your research direction—for example, for a hypothesis-driven research study, you may find a report from researchers who attempted to answer the same or similar question but ran into unexpected hurdles or null results. Often, null-result studies aren’t accepted for publication in traditional academic journals, but this type of crucial information could affect how you choose to proceed.

In many cases, gray literature expresses findings and opinions that are more indicative of the ‘real world’ than the often contrived or carefully constructed scenarios used in academic research. Additionally, gray literature often includes a more diverse range of authors who may often be excluded from the profit-driven traditional publishing industry.

However, there are some times when gray literature isn’t appropriate to include. Gray literature often isn’t used in highly regimented systematic reviews with strict inclusion/exclusion criteria. Fast-paced or urgent projects that must be published quickly may be hampered by the time required to sort through the large pool of gray literature available online.

Cautions of Using Gray Literature

It’s absolutely crucial to critically evaluate all gray literature that contributes to your research. While you should never fully depend upon peer review and blindly trust the reliability and rigor of journal-published research, it’s especially important to note that gray literature undergoes no such review. Some gray literature sources (such as governmental reports or conference proceedings) may be less inclined to bias and misinformation than others (such as blog posts), but all sources should be critically reviewed. Consider using medical librarian Jess Tyndall’s AACODS checklist for evaluating gray literature, which includes:

  • Authority. Who wrote the piece? Do they have the expected credentials or experience to speak knowledgeably on the topic?
  • Accuracy. Does this piece have adequately rigorous methodology, evidence, or data to support its claims? Are its sources properly cited?
  • Coverage. Does the piece outline its limitations or the authors’ biases/conflicts of interest? Is the scope of the piece clearly outlined?
  • Objectivity. Do the authors or the organization have any explicit bias, such as financial interest in promoting specific results or opinions? Are counterarguments or conflicting evidence/perspectives presented?
  • Date. How long ago was the piece published? Have new advancements or discoveries been made that might disprove, support, or otherwise affect the information in the piece?
  • Significance. Does the piece include enough important, feasible, and relevant information that enriches your own research to justify its inclusion in your citation list?

Additionally, it should be noted that including gray literature in your evidence synthesis will greatly expand the scope of your search. Plan to budget in extra time to evaluate the relevance of sources as well as their credibility.

Tips for Finding Gray Literature

There are a few key strategies you can use to help you navigate the wide range of resources available online. For example, be sure to make the most of Google search techniques! Some helpful search modifiers include the following (a short code sketch after this list shows how they can be combined):

  • Phrase searching, or using quotation marks around a word or phrase to only pull results that use the exact same words. Example: “Art in sustainability” university programs
  • Site type searches, or using site: in combination with specific website extensions (e.g., .edu, .org, .gov) to only pull results from those pages. Example: Biophysics of mitosis site:edu
  • File type searches, or using filetype: in combination with a specific file extension (e.g., ppt, pdf, xlsx [for Excel documents]) to only generate results for a specific document type. Example: Romanesque vs classical art filetype:ppt
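If you run these searches frequently, the modifiers can also be assembled programmatically. The sketch below simply builds query strings from the modifiers described above; the function name and structure are our own illustration and not part of any official search API.

```python
# Illustrative helper that assembles search query strings using the modifiers
# described above (phrase searching, site:, filetype:). Not an official API.
def build_query(terms: str, phrase: str | None = None,
                site: str | None = None, filetype: str | None = None) -> str:
    parts = [terms]
    if phrase:
        parts.append(f'"{phrase}"')           # phrase searching with quotation marks
    if site:
        parts.append(f"site:{site}")          # restrict results to a domain type
    if filetype:
        parts.append(f"filetype:{filetype}")  # restrict results to a document type
    return " ".join(parts)

# The example searches from the list above:
print(build_query("university programs", phrase="art in sustainability"))
print(build_query("biophysics of mitosis", site="edu"))
print(build_query("romanesque vs classical art", filetype="ppt"))
```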

You can also explore databases that collect high-quality gray literature, such as WorldCat, Open Grey, and GreyNet International. Consider also talking to your colleagues about your search—they may know of subject-specific resources that would be hard to discover independently!

Innovations in Peer Review: Increasing Capacity While Improving Quality

Peer review is a critical aspect of modern academic research, but it’s no secret that journals are struggling to provide high-quality and timely peer review for submitted manuscripts. It’s clear that changes are needed to increase the capacity and efficiency of peer review without reducing the quality of the review. However, several alternative peer review models are up to the challenge. We’ll discuss the most well-established alternative peer review strategies, identify some commonalities between models, and provide key takeaways for everyone in academia.

The Current State of Peer Review

Before we can discuss new innovations, it’s important to evaluate the modern peer review structure. Peer review serves as a vetting process for journals to filter out research manuscripts that are considered unsuitable for their readership, whether that’s because of poorly defined methods, suspicious or fraudulent results, a lack of supporting evidence or proof, or unconstructive findings. After peer reviewers read and provide their criticism of a manuscript, they’ll generally advise journals to 1) accept a submission as-is, 2) accept a submission with minor revisions, 3) request major revisions before reevaluating the suitability of a paper for publication, or 4) outright reject a manuscript. Manuscripts will often go through two or three rounds of peer review, usually with the same peer reviewers, before a paper is ready for publication.

Most medical journals require at least two experts in a related field to review a manuscript. This is typically done through anonymized peer review, in which the authors don’t know the identity of the reviewers, but the open peer review model (in which the identity of peer reviewers is known to the author, with or without their reviews being publicly available following manuscript publication) has been gaining traction in recent years.

The Problems with Modern Peer Review

As the academic publishing industry rapidly expands and becomes increasingly digital, the current peer review model has been struggling to keep up. Peer review is resource-intensive, especially in money and time. Peer review is voluntary and reviewers are almost universally not compensated for their contributions, leading to a lack of motivation to participate, especially given the time and effort peer review requires. The lack of transparency in peer review has also been increasingly criticized in recent years because it can lead to biased reviews and unequal standards. Despite this, many academicians place unwarranted trust in the validity and efficacy of the peer review process.

On top of all of this, the peer review process is notoriously slow. This is usually attributed to the shortage of qualified peer reviewers, and with good reason: a 2016 survey found that 20% of individuals performed 69% to 94% of reviews. It’s a tough problem to tackle, but there are some innovative new peer review strategies that aim to improve the timeliness and accessibility of peer review, maximize the effective use of peer reviewers’ time, and maintain or improve upon modern quality expectations.

Alternative Strategies for Peer Review

Portable peer review

Overview: Authors pay a company to perform independent, single-blind (ie, anonymized), unbiased peer review, which is then shared with journals at the time of submission. A subset of this is Peer Review by Endorsement (also called Author-Guided Peer Review), in which authors request their peers to review their manuscripts, which are then provided to journals.

Pros: Journals aren’t responsible for coordinating peer reviews; avoids redundancy of multiple sets of peer reviewers evaluating the same paper for different journals

Cons: Additional fee is burdensome for authors, especially as article processing charges become more common; not many journals currently accept externally provided peer reviews; potential bias for Peer Review by Endorsement

Pre–peer review commenting

Overview: Informal community input is given on a manuscript while authors are simultaneously submitting the paper to journals. This input can be either open (eg, publicly available materials for anyone to comment on) or closed (eg, materials are shown only to a select group of commenters). You may be familiar with a common pre–peer review commenting platform without even knowing it: preprint servers. Many of the same pros and cons apply here.

Pros: Strengthens the quality of a paper before journal evaluation; faster than traditional peer review; typically involves low costs; may include moderators who filter out unconstructive comments

Cons: Allows non-experts to voice incorrect opinions; reduces editorial control; introduces threat of plagiarism or scooping; may make faulty or inadequate science publicly available

Post-publication commenting

Overview: Peer review takes place after the manuscript has already been published by a journal. Editors invite a group of qualified experts in the field to provide feedback on the publication. Manuscripts may or may not receive some level of peer review before publication.

Pros: Reduces time delay for peer review; comments are typically public and transparent; theoretically provides continuous peer review as new developments and discoveries are made, which may support, disprove, change, or otherwise affect research findings

Cons: Requires buy-in from many peer reviewers who are willing to review; faulty or inadequate science may be made publicly available; can become resource-intensive, especially time-intensive

Registered Reports

Overview: Studies are registered with a journal before research is performed and undergo peer review both before and after research is conducted. The first round of peer review focuses on the quality of the methods, hypothesis, and background, and the second round focuses on the findings.

Pros: Papers are typically guaranteed acceptance with the journal; each round of peer review hypothetically requires less time/effort; papers are typically more thorough and scientifically sound; provides research support and limited mentorship, which can be especially valuable for early-career investigators

Cons: Two rounds of peer review are required instead of one; reduces procedural flexibility; logistical delays are common; seen as inefficient for sequential or ongoing research.

Artificial Intelligence–Assisted Review

Overview: Artificial intelligence and machine learning software are developed to catch common errors or shortcomings, allowing peer reviewers to focus on more conceptual criticism, such as the paper’s novelty, rigor, and potential impact (a simplified illustration follows this entry). This strategy is more widely seen in humanities and social sciences research.

Pros: Increases efficient use of peer reviewers’ time; improves standardization of review; can automate processes like copyediting or formatting

Cons: Requires extensive upfront cost and development time as well as ongoing maintenance; prone to unintentional bias; ethically dubious; requires human oversight
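As a simplified illustration of the kind of routine screening such tools automate, here is a toy, rule-based pre-screen. Real systems rely on trained models and far richer checks; the section names, patterns, and function below are illustrative assumptions only.

```python
# Toy, rule-based stand-in for automated manuscript pre-screening: flag missing
# sections and quantitative claims that lack citations, so human reviewers can
# focus on novelty, rigor, and impact. Not an actual AI/ML reviewer.
import re

REQUIRED_SECTIONS = ["abstract", "methods", "results", "discussion", "references"]

def prescreen(manuscript_text: str) -> list[str]:
    """Return routine issues for a human reviewer or editor to double-check."""
    flags = []
    lower = manuscript_text.lower()
    for section in REQUIRED_SECTIONS:
        if section not in lower:
            flags.append(f"Missing expected section: {section}")
    for i, paragraph in enumerate(manuscript_text.split("\n\n"), start=1):
        has_percentage = bool(re.search(r"\d+(\.\d+)?\s*%", paragraph))
        has_citation = bool(re.search(r"\[\d+\]|\(\w+,?\s*\d{4}\)", paragraph))
        if has_percentage and not has_citation:
            flags.append(f"Paragraph {i}: percentage reported without a citation")
    return flags

print(prescreen("Abstract\n\nWe observed a 45% increase.\n\nMethods ..."))
```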

Commonalities and Takeaways

There are a few key similarities and fundamental practices that are found throughout several of the peer review strategies discussed above:

  1. Peer reviewer compensation, whether financial or in the form of public recognition/resume material—though this can often cause its own problems
  2. Decoupling the peer review process from the publication process
  3. Expanding the diversity of peer reviewers
  4. Improving transparency of peer review
  5. Improving standardization of peer review, often through paper priority scores or weighted reviewer scoring based on review evaluation ratings/reputation

The key overall takeaway from these new strategies? Change may be slow, but it’s certainly coming. More and more journals are embracing shifts in peer review, such as the growing traction of transferable peer review (i.e., if a manuscript is transferred between journals, any available reviewers’ comments will be shared with the new journal) and the transition from anonymous to open identification of reviewers, and most experts agree that peer review practices will continue to change in upcoming years.

If you’re interested in becoming more involved in leading the evolution of peer review, take some time to research the many proposed alternative peer review strategies. Try to start conversations about new peer review models in academic spaces to spread the word about alternative strategies. If you’re able, try to participate in rigorous, evidence-driven research to either validate the efficacy of alternative peer review models or demonstrate the inefficiency of our current structure. Change always requires motivated and driven individuals who are willing to champion the cause. The communal push toward revolutionizing peer review is clearly growing—now, it’s up to the community to determine which model will prevail.

Introducing the All-New Journalytics Academic & Predatory Reports

We have some exciting news to share – a new and improved Journalytics Academic & Predatory Reports platform will soon be here. Our team has been working on multiple updates and enhancements to our tried and true platform that will benefit users in multiple ways. Along with our ongoing addition of new verified and predatory journals, users will experience better search results, new data points and visualizations, increased stability and speed, and more secure logins.

In addition to the visual elements and expanded analytics of this redesign, a key component is the full integration of our Journalytics and Predatory Reports databases. This integration will allow for comprehensive searches that present the full range of publishing opportunities and threats in a given area. Our goal is to facilitate journal discovery and evaluation so our users know the journals and know the risks.

Last month we hosted a webinar to give users a sneak peek at the upcoming changes, which include a new guided search page to jumpstart journal discovery, updated platform and journal card designs, and new data points such as fees and article output. Check out the video below or visit our YouTube channel where you’ll find a time-stamped table of contents in the description for easy navigation to specific points in the video.

A preview of the all-new Journalytics Academic & Predatory Reports.

A fresh look with more data

Guided search

The path to journal discovery now begins on our guided search page, where users can

  • search for a journal by title, ISSN, or keyword
  • access our database of legitimate and predatory journals
  • seamlessly sort verified journals with one of our featured metrics shortcuts
  • jump directly to one of 18 academic disciplines

Our guided search page with shortcuts to journal platforms, featured metric sorting, and disciplines.

New fully-integrated platform

Our redesigned platform now features full integration of verified and predatory journals into the same environment. First rolled out as part of our Journalytics Medicine & Predatory Reports platform, the integration has proven to be an invaluable enhancement. Users can feel confident with fully comprehensive search results that capture both legitimate and deceptive publishing opportunities, and they’ll also have the ability to filter out one or the other with just one click.

NO MORE GUESSWORK: Search results now include both legitimate and predatory journals

Search for publications by title, ISSN, discipline, or other keywords and know that we’ve left nothing to chance – verified and predatory journals each have their own design and data type, making it clear whether the journal is listed in Journalytics or Predatory Reports.

Redesigned journal cards with new data points

Judging the quality of a journal, the likelihood of manuscript acceptance, publication timelines, and the potential impact of a journal can be difficult. To assist with clear and confident publication evaluations, we have added a few new data points to verified records to facilitate decision-making:

  • Open Access details – copyrights, archiving, and access details
  • Fees – who pays for publishing articles and how much?
  • Article output – how often does a journal publish per year?
  • CCI visualization – we have reimagined the visualization of our citation-backed CCI metric, which shows the historical “influence,” or citation activity, for each discipline in which a journal publishes.

The new Journalytics Academic will include the beta version of a new metric: the SDG Impact Intensity™ (SDGII), developed in partnership with the Saint Joseph’s University Haub School of Business.

The SDGII seeks to contextualize and understand the relevance of academic research in terms of the United Nations’ Sustainable Development Goals. Climate change, sustainability, and equity are among the most powerful forces for change in society, and yet they are ignored by traditional citation-based metrics. We hope to help lead the charge to change this dangerous oversight.

This is a pilot program currently applied to a limited number of business journals, but it will soon be expanded to increase awareness of sustainability-minded journals publishing impactful research.

For more information, see our videos covering the SDGII on our YouTube channel.

New look, same deceptive operations

In recent years, awareness of the nature and scope of the problem presented by predatory publishers has increased within the scholarly community. At the same time, predatory publishers themselves have become more aware of what they need to do to appear legitimate and avoid detection. Their tactics are evolving right along with efforts to combat them, and their numbers are growing – we currently have reports on more than 17,000 predatory publications. Each report includes:

  • Journal identification – each report provides the title, publisher, discipline(s), ISSN (if available), and website links for journal discovery and confirmation.
  • Violation categories – journal reports monitor the areas in which the deceptive behaviors occurred.
  • Violation severity – reports also track the severity of the deceptive behaviors.

By providing not just identifying information for predatory journals, but also a report on the nature, scope, and severity of their behaviors, we aim to equip our users with an understanding of the varied tactics predatory publishers employ. Our goal is to educate and inform researchers on the different profiles and archetypes of predatory journals we uncover, so they are better able to identify and avoid them as their careers continue.

What’s next?

Be on the lookout for further updates on the timing of the release of the updated platform, as well as information on the upcoming new website that will serve as the central hub for all of our resources, complete with a portal for platform access, links to product information and criteria, and our blog.

We will also host another webinar as we move closer to the launch date for a final look at the upcoming enhancements. Stay tuned!

Countering Systemic Barriers to Equity in the Academic Publishing Process

In recent years, improving diversity has been a core priority of many industries, including scholarly publishing and academia. Almost every large publisher has a dedicated Diversity, Equity, and Inclusion page, and most have published statements dedicating resources toward diversifying their staff, editorial board members, and authors. However, few initiatives have targeted the systemic barriers in place that fundamentally contribute to this inequality. Here, we’ll explore some underlying issues within the overall research publication system that must be addressed in order to achieve equity in academic publishing.

Understanding the Problem

In order to explore potential mechanisms to counter systemic barriers to research publication, we need to start by defining the problem. Systemic barriers describe “policies, procedures, or practices that unfairly discriminate” against marginalized groups, including racial, ethnic, gender, sexual, disability, and religious minority groups. Because of these barriers, authors from minority groups do not have the same access to high-quality publication avenues as their non-minority counterparts; as a result, almost every academic publishing specialty area suffers from inequality and a lack of diverse perspectives. Likewise, members of minority groups who want to pursue careers in the academic publishing industry face more blockades and challenges than those who are not in minority groups.

There are many systemic barriers that create injustice in academic publishing. In this article, we’ll focus on two barriers that have been the focus of extensive research in recent years, with an exploration of some evidence-supported practices that can help counteract them.

Unequal Access to Education

Unequal access to education, especially due to race, is fundamentally connected to the United States’ history. As Dupree and Boykin (2021) explain, during America’s founding, it was generally illegal for slaves to receive an education. Following the abolition of slavery, the “separate but equal” precedent led to the establishment of Black higher education institutions that were woefully unequal to their White counterparts in quality and accessibility. As integration spread throughout America, minority scholars gained increased access to historically White higher education institutions but faced near-intolerable levels of discrimination from students, professors, and administrators. Additionally, academia’s role in racial devaluation through research, such as the publication of biological determinism and cultural deficit models, cannot be ignored. Similar processes of begrudging integration and enrollment into higher education spaces can be seen across the dimensions of gender, disability, religion, and more.

To this day, higher education institutions are affected by their histories of inequality and the systems that were originally designed to operate within these frameworks of discrimination. Generally, becoming an academic researcher in any field requires at least an undergraduate degree, if not a Master’s or Doctoral degree; as such, limited access to these degrees translates to limited access to research and publication participation.

Evidence-based solutions

Employment & Promotion Inequality

Inequality affects both those who work within the academic publishing industry (journal editors, article reviewers, publication specialists, etc.) and the authors seeking publication in academic journals. Within academia, members of minority groups experience discrimination during the interviewing and employment process; this discrimination extends into promotion and tenure opportunities. In the publication industry, the lack of diversity is a known problem, with many initiatives targeted toward countering inequality. Many publishers have released statements acknowledging the inequities in their hiring practices, with Nature recognizing its own role in being “complicit in systemic racism” and publishing a list of actionable commitments it has made toward improving diversity. However, the efficacy of these commitments remains unclear.

Evidence-based solutions

Advocate for funding equality. Many large funding bodies, such as the National Institutes of Health, and universities alike have recently been criticized for inequality in research funding and grant awards. With less funding available, researchers from minority groups are at a disadvantage in demonstrating publication excellence and research experience, which then leads to inequitable tenure and promotion decisions. To counteract this, organizations should evaluate their own funding demographics and overtly advocate for transparency and equality in funding allocation.

Originality in Academic Writing: The Blurry Lines Between Similarity, Plagiarism, and Paraphrasing

Across disciplines, most manuscripts submitted to academic journals undergo a plagiarism check as part of the evaluation process. Authors are widely aware of the industry’s intolerance for plagiarism; however, most of us don’t receive any specific education about what plagiarism is or how to avoid it. Here, we’ll discuss what actually constitutes plagiarism, understand the important differences between similarity and plagiarism, and discuss easy strategies to avoid the problem in the first place.

What actually is plagiarism?

The University of Oxford defines plagiarism as “presenting work or ideas from another source as your own… without full acknowledgement.” This is a fairly fundamental definition, and for most of us, this is the extent of our education on plagiarism.

When we think of plagiarism, we usually imagine an author intentionally copying and pasting text from another source. This form of plagiarism, called direct plagiarism, is actually fairly uncommon. More commonly, plagiarism takes the form of:

  • Accidental plagiarism. Citing the wrong source, misquoting, or unintentionally/coincidentally paraphrasing a source that you’ve never seen before is still considered plagiarism, even when done without intent.
  • Secondary source plagiarism. This is an interesting and challenging issue to tackle. This form of plagiarism refers to authors using a secondary source but citing the works found in that source’s reference list—for example, finding information in a review but citing the initial/primary study instead of the review itself. This misattribution “fails to give appropriate credit to the work of the authors of a secondary source and… gives a false image of the amount of review that went into research.”
  • Self-plagiarism. There’s still an ongoing debate over the acceptability of reusing one’s own previous work, with differing answers depending on the context. It’s widely agreed that reusing the same figure, for example, in multiple articles without correct attribution to the original publication is unethical; however, it’s less clear whether it’s acceptable to reuse verbatim in a manuscript the same abstract you previously published as part of a conference poster. Copyright law can play a major role in the permissibility of self-plagiarism in niche cases.
  • Improper paraphrasing plagiarism. Some of us have heard from teachers or peers that, to avoid plagiarism, all you need to do is “rewrite it in your own words.” However, this can be misleading, as there’s a fine line between proper and improper paraphrasing. Regardless of the specific language used, if the idea or message being communicated is not your own and is not cited, it’s still plagiarism.

Plagiarism vs similarity

Many publishers, including Elsevier, Wiley, and Springer Nature, perform plagiarism evaluations on all manuscripts they publish using the software iThenticate. However, ‘plagiarism evaluation’ through iThenticate is a bit of a misnomer: iThenticate checks for similarity, not plagiarism. Though the difference between the two may seem minor, their implications are entirely different.

Whereas plagiarism refers to an act of ethical misconduct, similarity refers to any portion of your paper that recognizably matches text found in previously published literature in iThenticate’s content database. Similarity can include text matches in a manuscript’s references, affiliations, and frequently used/standardized terms or language; it’s natural and nonproblematic, unlike plagiarism.

A similarity report, such as the one generated by iThenticate, is used by editors as a tool to investigate potential concerns for plagiarism. If an iThenticate similarity report flags large sections of text as similar to a previously published article, the editors can then use this as a starting point to evaluate whether the article is, in fact, plagiarized.
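To make the similarity-versus-plagiarism distinction concrete, here is a toy sketch of the kind of text matching that underlies a similarity score: counting overlapping word sequences between two passages. iThenticate’s actual method and content database are far more sophisticated, so treat this purely as an illustration.

```python
# Toy illustration of similarity (not plagiarism) detection: the share of a
# manuscript's word 5-grams that also appear in a previously published text.
# A simplified stand-in, not iThenticate's actual algorithm.
def ngram_similarity(manuscript: str, published: str, n: int = 5) -> float:
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    manuscript_grams, published_grams = ngrams(manuscript), ngrams(published)
    if not manuscript_grams:
        return 0.0
    return len(manuscript_grams & published_grams) / len(manuscript_grams)

score = ngram_similarity(
    "The patients were randomized into two groups before treatment began in June",
    "All patients were randomized into two groups before treatment began last year",
)
# A high score only flags overlap; an editor still decides whether it is plagiarism.
print(f"Similarity: {score:.0%}")
```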

Strategies to avoid plagiarism

Being accused of plagiarism—especially direct, intentional plagiarism—can be a serious ethical issue, with long-term implications for your job prospects and career. The best way to avoid this issue is to use preventative measures, such as the ones discussed here.

  1. Educate yourself about what plagiarism truly is and how it occurs. If you’ve made it this far in the article, you’re already making great progress! Consider reviewing the sources cited in this article to continue your education.
  2. Start citing from the note-taking phase. Many times, accidental plagiarism is the result of forgetting the source of an idea or statistic during quick, shorthanded note-taking. To avoid this, start building your reference list from the beginning of your research phase. Consider adding a quick citation for every single line of text—including citing yourself for your own ideas! Reference management software like EndNote and Zotero are great tools for this.
  3. Understand proper and improper paraphrasing. Learn how to correctly paraphrase someone else’s work and the importance of citing your paraphrased text. If you come across a phrase or sentence that perfectly summarizes an idea, don’t be afraid to include it as a direct, cited quotation!
  4. Consider cultural differences in plagiarism policies. This article aligns with the United States’ view of plagiarism as a serious ethical offense. However, this isn’t the case in all countries. In some East Asian countries, for example, the concepts of universal knowledge and of memorization as a sign of respect lead to a wider cultural acceptance of instances that would be considered plagiarism in America. If you’re unsure about the expectations of a certain journal, it’s always recommended to ask the editors.
  5. Use a similarity checker. While a similarity review won’t directly identify plagiarism, it can serve as a final scan for any text you may have copied and pasted with the intention of removing later but forgot to erase, and it can help pick up accidental plagiarism! If your institution doesn’t have access to iThenticate, Turnitin, or Crossref Similarity Check, there are several free similarity scanners available online.

Journalytics Update: Twenty Hindawi Journals Recently Removed from the Journalytics Academic and Journalytics Medicine Databases

As part of our ongoing mission to protect and foster research integrity, the following journals from the publisher Hindawi have been removed from our Journalytics Academic and Journalytics Medicine databases for failure to meet our quality criteria, pending re-evaluation of their policies and practices:

  • Advances In Materials Science And Engineering (ISSN: 1687-8434)
  • Biomed Research International (ISSN: 2314-6133)
  • Computational And Mathematical Methods In Medicine (ISSN: 1748-670X)
  • Computational Intelligence And Neuroscience (ISSN: 1687-5265)
  • Contrast Media & Molecular Imaging (ISSN: 1555-4309)
  • Disease Markers (ISSN: 0278-0240)
  • Education Research International (ISSN: 2090-4002)
  • Evidence-Based Complementary And Alternative Medicine (ISSN: 1741-427X)
  • Journal Of Environmental And Public Health (ISSN: 1687-9805)
  • Journal Of Healthcare Engineering (ISSN: 2040-2295)
  • Journal Of Nanomaterials (ISSN: 1687-4110)
  • Journal Of Oncology (ISSN: 1687-8450)
  • Journal of Sensors (ISSN: 1687-725X)
  • Mathematical Problems In Engineering (ISSN: 1024-123X)
  • Mobile Information Systems (ISSN: 1574-017X)
  • Oxidative Medicine And Cellular Longevity (ISSN: 1942-0900)
  • Scanning (ISSN: 0161-0457)
  • Scientific Programming (ISSN: 1058-9244)
  • Security and Communication Networks (ISSN: 1939-0114)
  • Wireless Communications & Mobile Computing (ISSN: 1530-8669)

Wiley’s statement confirming ‘compromised articles’ in Hindawi special issues, coupled with strong evidence that at least some of the retracted content was generated by paper mills, points to the absence of a functional peer review system at the above-listed journals. The backbone of not just any legitimate, trustworthy journal, but of all of academic and medical publishing, is a robust and closely managed peer review process.

We covered the wave of retraction notices in recent years from scientific and medical publications on our Journalytics Medicine blog in November. Retractions are, to a certain extent, ‘part of the process’ for journals, but retractions at this level by one publisher show a breakdown in that process. It is our hope that the removal of these journals from our databases will motivate all scholarly and medical publishers to review their current publication processes and make the necessary improvements or changes to any substandard elements.

SDG Publishers Compact Fellows and HESI to Hold Sustainable Solutions Summit

Immediate action is the only hope for realizing the Sustainable Development Goals (SDGs) by (or anywhere near) 2030. The SDGs are 17 interlinked targets put forth by the United Nations in 2015 as the backbone of its 2030 Agenda for Sustainable Development. According to The Sustainable Development Goals Report for 2022, the SDGs are in “grave jeopardy due to multiple, cascading, and intersecting crises. COVID-19, climate change and conflict predominate.”

Despite admittedly painting a “sobering picture,” the report stresses that the SDGs can be rescued with concentrated global effort in three crucial areas:

  • armed conflicts and the senseless loss of lives and resources that accompany them must be ended in favor of diplomacy and peace – preconditions for sustainability
  • the blueprint laid out by the SDGs must be met with urgency
  • a global economy that works for all must be created to ensure developing countries are not left behind.

Those are no small tasks and there is no denying that moving the planet forward on the path to sustainability will require coordinated worldwide action. Fortunately, the SDG roadmap is clear and as Liu Zhenmin, former Under-Secretary-General for the UN Department of Economic and Social Affairs points out in the 2022 Report, “just as the impact of crises is compounded when they are linked, so are solutions.”

We must rise higher to rescue the Sustainable Development Goals – and stay true to our promise of a world of peace, dignity and prosperity on a healthy planet.

António Guterres
Secretary-General, United Nations

The SDG Publishers Compact Fellows are working to ensure research and education are key parts of the solutions. The purpose of the Fellows is to support the “publishing industry in creating a sustainable future through action.” They do this in part by providing key tools and practical actions that different groups within the scholarly community can take to embed SDGs into research and education and forge a connection with practitioners.  

To help in this effort, researchers, authors, educators, reviewers, and editorial boards are invited to join the SDG Publishers Compact Fellows and the Higher Education Sustainability Initiative (HESI) in a Sustainable Solutions Summit next month. The virtual event will focus on the top recommended actions and trends to better align academic research, education materials, and the sharing of research findings with making the world a better place through connections to the SDGs.



SDG research output is increasing and it is clear that scholarship and science must be driving forces behind the push for the Global Goals. But to succeed, the gap between researchers and practitioners must be closed. Groups like the SDG Publishers Compact Fellows and HESI, and events like the Sustainable Solutions Summit, will be key to leveraging the power of scholarly publishing to help solve the SDGs.

The Predator Effect: Understanding the Past, Present and Future of Deceptive Academic Journals

During his time working at Cabells, predatory publishing practices turned into a near obsession for Simon Linacre – so much so that he wrote a book about it: The Predator Effect. Here he shares details of the book, and how predatory journals could form part of a publishing ethics crisis.


In a recent conversation with a senior academic regarding publishing ethics, the discussion veered between predatory publishing, paper mills, paraphrasing software and the question of whether an article written by AI could be regarded as an original piece of work. Shaking his head, the academic sighed and exclaimed: “Retirement is starting to look pretty good right now!” The conversation demonstrated what a lot of people in scholarly communications feel right now, which is that at this moment in time, we are losing the arms race when it comes to research integrity and publishing ethics.

In the last year, we have seen the number of predatory journals included on Cabells’ Predatory Reports database approach 17,000, thousands of articles retracted by major publishers such as Wiley and IoP, and major scandals, such as one I worked on with the Digital Science company Ripeta, in which one author was responsible for dozens of plagiarised articles. The concern is that many more articles might have leaked into the scholarly communications system from paper mills, coupled with leaps in technology that enable students and authors to buy essays and articles generated by AI without lifting a finger themselves. No wonder older scholars who didn’t have to deal with such technologies are shaking their heads in despair.

Negative Impact

These issues can seem rather abstract because they don’t necessarily translate into tangible impacts for most people, but this also means they can be misunderstood and underestimated. For example, what happens when an individual reads about a cure in a predatory journal and tries to use it, making a patient’s condition worse? Or what about someone qualifying for a position based on coursework they cheated on? There are numerous instances where a breakdown in ethics and integrity can cause major problems.

More broadly, the whole fabric of trust that society has in academic research risks being undermined with so many options open to bad actors if they wish to buck the system for their own ends. We have seen this with the fateful Wakefield article about the MMR vaccine in the 1990s, the effects of which are still being felt today. That was an anomaly, but if people ceased to believe that published research was trustworthy because of these numerous threats, then we will indeed be in a perilous position.

Digital Solutions

The scale of these problems can be seen in three recent publications, which I discussed in a recent talk at the ConTech 2022 Conference in London:

  • In September, The State of Trust & Integrity in Research (STIR) report was published by Ripeta, which outlined some of the issues facing research integrity, and how greater standardisation and investment in technology is required
  • In October, the State of Open Data (SoOD) report was published by Figshare, Digital Science and Springer Nature. It produced the results of a huge survey of researchers which showed open data sharing was only growing gradually, and policymaking needed to be more joined up and consistent
  • In November, The Predator Effect was published – a short open access ebook detailing the history and impact of predatory publishing practices. 

While each of these publications offers some sobering findings in terms of the problems faced by scholarly communications, they also offer some hope that technology might provide solutions in the future. In terms of predatory journals, this means not only using technology as one solution, but combining multiple solutions in a joined-up way. As I say in the book:

“Using technology to improve hygiene factors such as legitimate references may be another strategy that, if adopted together and more widely, will have a significant impact on predatory journal output.” (Linacre, 2022)

Concerns around trust in science are real, but so is the anticipation that technology can show how scholarly communications can move forward. As a former publisher, I thought technology could easily solve the problem, but thanks to working at Cabells I came to understand that much more work is required to equip researchers with the right tools, knowledge, and know-how to avoid predatory journals. In the past, collaboration in the industry has often been slow and not fully inclusive, but this will have to change if a breakdown in research integrity and publication ethics is to be avoided.