Innovations in Peer Review: Increasing Capacity While Improving Quality

Peer review is a critical aspect of modern academic research, but it’s no secret that journals are struggling to provide high-quality, timely peer review for submitted manuscripts. Changes are clearly needed to increase the capacity and efficiency of peer review without reducing the quality of the review, and several alternative peer review models are up to the challenge. We’ll discuss the best-established alternative peer review strategies, identify some commonalities between models, and provide key takeaways for everyone in academia.

The Current State of Peer Review

Before we can discuss innovations, it’s important to evaluate the modern peer review structure. Peer review serves as a vetting process for journals to filter out research manuscripts that are considered unsuitable for their readership, whether because of poorly defined methods, suspicious or fraudulent results, a lack of supporting evidence or proof, or unconstructive findings. After peer reviewers read and critique a manuscript, they’ll generally advise journals to 1) accept a submission as-is, 2) accept a submission with minor revisions, 3) request major revisions before reevaluating the suitability of a paper for publication, or 4) outright reject a manuscript. Manuscripts often go through two or three rounds of peer review, usually with the same peer reviewers, before a paper is ready for publication.

Most medical journals require at least two experts in a related field to review a manuscript. This is typically done through anonymized peer review, in which the authors don’t know the identity of the reviewers, but the open peer review model (in which the identity of peer reviewers is known to the author, with or without their reviews being publicly available following manuscript publication) has been gaining traction in recent years.

The Problems with Modern Peer Review

As the academic publishing industry rapidly expands and becomes increasingly digital, the current peer review model has been struggling to keep up. Peer review is resource-intensive, demanding both time and money. It is also voluntary, and reviewers are almost universally uncompensated for their contributions, which saps motivation to participate given the time and effort peer review requires. The lack of transparency in peer review has also drawn increasing criticism in recent years because it can lead to biased reviews and unequal standards. Despite this, many academics place unwarranted trust in the validity and efficacy of the peer review process.

On top of all of this, the peer review process is notoriously slow. This is usually attributed to the shortage of qualified peer reviewers, and with good reason: a 2016 survey found that 20% of individuals performed 69% to 94% of reviews. It’s a tough problem to tackle, but there are innovative peer review strategies that aim to improve the timeliness and accessibility of peer review, maximize the effective use of peer reviewers’ time, and maintain or improve upon current quality expectations.

Alternative Strategies for Peer Review

Portable peer review

Overview: Authors pay a company to perform independent, single-blind (ie, anonymized), unbiased peer review, which is then shared with journals at the time of submission. A subset of this is Peer Review by Endorsement (also called Author-Guided Peer Review), in which authors request their peers to review their manuscripts, which are then provided to journals.

Pros: Journals aren’t responsible for coordinating peer reviews; avoids redundancy of multiple sets of peer reviewers evaluating the same paper for different journals

Cons: Additional fee is burdensome for authors, especially as article processing charges become more common; not many journals currently accept externally provided peer reviews; potential bias for Peer Review by Endorsement

Pre–peer review commenting

Overview: Informal community input is given on a manuscript while the authors simultaneously submit the paper to journals. This input can be either open (eg, materials are publicly available for anyone to comment on) or closed (eg, materials are shown only to a select group of commenters). You may already be familiar with one of the most common pre–peer review commenting platforms: preprint servers. Many of the same pros and cons apply here.

Pros: Strengthens the quality of a paper before journal evaluation; faster than traditional peer review; typically involves low costs; may include moderators who filter out unconstructive comments

Cons: Allows non-experts to voice incorrect opinions; reduces editorial control; introduces threat of plagiarism or scooping; may make faulty or inadequate science publicly available

Post-publication commenting

Overview: Peer review takes place after the manuscript has already been published by a journal. Editors invite a group of qualified experts in the field to provide feedback on the publication. Manuscripts may or may not receive some level of peer review before publication.

Pros: Reduces time delay for peer review; comments are typically public and transparent; theoretically provides continuous peer review as new developments and discoveries are made, which may support, disprove, change, or otherwise affect research findings

Cons: Requires buy-in from many peer reviewers who are willing to review; faulty or inadequate science may be made publicly available; can become resource-intensive, especially time-intensive

Registered Reports

Overview: Studies are registered with a journal before research is performed and undergo peer review both before and after research is conducted. The first round of peer review focuses on the quality of the methods, hypothesis, and background, and the second round focuses on the findings.

Pros: Papers are typically guaranteed acceptance with the journal; each round of peer review hypothetically requires less time/effort; papers are typically more thorough and scientifically sound; provides research support and limited mentorship, which can be especially valuable for early-career investigators

Cons: Two rounds of peer review are required instead of one; reduces procedural flexibility; logistical delays are common; seen as inefficient for sequential or ongoing research.

Artificial Intelligence–Assisted Review

Overview: Artificial intelligence and machine learning software are developed to catch common errors or shortcomings, allowing peer reviewers to focus on more conceptually based criticism, such as the paper’s novelty, rigor, and potential impact (a minimal illustration of such automated checks appears after this entry). This strategy is more widely seen in humanities and social sciences research.

Pros: Increases efficient use of peer reviewers’ time; improves standardization of review; can automate processes like copyediting or formatting

Cons: Requires extensive upfront cost and development time as well as ongoing maintenance; prone to unintentional bias; ethically dubious; requires human oversight
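
The specific checks these tools run are proprietary, but the underlying idea, automating routine screening so human reviewers can focus on substance, can be illustrated with a small rule-based sketch. Everything below (the section list, the regular expressions, the function name) is a hypothetical simplification for illustration, not how any real screening product works.

```python
import re

# Hypothetical list of required sections for a structured research manuscript.
REQUIRED_SECTIONS = ["abstract", "methods", "results", "discussion", "references"]


def automated_precheck(manuscript_text: str) -> list[str]:
    """Return human-readable flags for an editor or reviewer to triage."""
    flags = []
    lowered = manuscript_text.lower()

    # 1. Flag structural sections that appear to be missing.
    for section in REQUIRED_SECTIONS:
        if section not in lowered:
            flags.append(f"Possible missing section: '{section}'")

    # 2. Flag p-values reported without any sample size (n = ...) in the text,
    #    a common statistical-reporting shortcoming.
    if re.search(r"p\s*[<=>]\s*0?\.\d+", lowered) and not re.search(r"\bn\s*=\s*\d+", lowered):
        flags.append("P-values reported but no sample size (n = ...) found")

    # 3. Flag numeric in-text citations when no references section is detected.
    if re.findall(r"\[\d+\]", manuscript_text) and "references" not in lowered:
        flags.append("In-text citations found but no references section detected")

    return flags


if __name__ == "__main__":
    draft = "Abstract: ... Results: the effect was significant (p < 0.05) [1]."
    for flag in automated_precheck(draft):
        print("-", flag)
```

A report like this would be attached to the submission so the human reviewer can spend their time on novelty, rigor, and impact rather than on routine bookkeeping.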

Commonalities and Takeaways

There are a few key similarities and fundamental practices that are found throughout several of the peer review strategies discussed above:

  1. Peer reviewer compensation, whether financial or in the form of public recognition and resume material, though this can often cause its own problems
  2. Decoupling the peer review process from the publication process
  3. Expanding the diversity of peer reviewers
  4. Improving transparency of peer review
  5. Improving standardization of peer review, often through paper priority scores or weighted reviewer scoring based on review evaluation ratings/reputation

The key overall takeaway from these new strategies? Change may be slow, but it’s certainly coming. More and more journals are embracing shifts in peer review, such as the growing traction of transferable peer review (ie, if a manuscript is transferred between journals, any available reviewers’ comments will be shared with the new journal) and the transition from anonymous to open identification of reviewers, and most experts agree that peer review practices will continue to change in upcoming years.

If you’re interested in becoming more involved in leading the evolution of peer review, take some time to research the many proposed alternative peer review strategies. Try to start conversations about new peer review models in academic spaces to spread the word about alternative strategies. If you’re able, try to participate in rigorous, evidence-driven research that either validates the efficacy of alternative peer review models or demonstrates the inefficiency of our current structure. Change always requires motivated and driven individuals who are willing to champion the cause. The communal push toward revolutionizing peer review is clearly growing—now, it’s up to the community to determine which model will prevail.

The Role of Generative Artificial Intelligence in Peer Review

This year, as the research community’s trust in the peer review system’s efficacy and efficiency has wavered, we’ve seen a sharp rise in the proposal and implementation of alterations to the standard peer review process. As such, it’s not surprising that the community-selected theme for the 2023 Peer Review Week is “Peer Review and The Future of Publishing.” When taken in context with the runner-up topics—“Peer Review and Technology” and “Ethical Issues in Peer Review”—it’s clear that the medical community is uncertain about many of these changes, especially changes that involve new and unproven technology. In this article, we’ll narrow our focus to a specific topic that embodies much of the potential (both positive and negative) of these changes: the role of generative artificial intelligence (AI) in peer review.

Artificial Intelligence in Peer Review

Generative AI’s potential role in peer review is complex, with the capacity for great time-saving efficiency as well as for severe ethical violations and misinformation. In theory, generative AI platforms could be used throughout the peer review process, from the initial drafting to the finalization of a decision letter or a reviewer’s critiques. An editor or reviewer could input a manuscript (either in whole or individual sections) into a generative AI platform and then prompt the tool for either an overall review of the paper or for a specific analysis, such as evaluating the reproducibility of the article’s methods or the language clarity. However, this comes with a myriad of potential benefits and drawbacks.
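
As a concrete and deliberately minimal sketch of that workflow, the snippet below prompts a hosted large language model for a narrowly scoped critique of a methods section. It assumes the OpenAI Python client and an API key in the environment; the model name, prompt wording, and function are illustrative assumptions rather than an endorsed tool, and, as discussed below, sending an unpublished manuscript to an external service raises serious confidentiality concerns.

```python
# Minimal sketch, not an endorsed workflow: prompt a hosted large language
# model for a narrowly scoped critique of a methods section. Assumes the
# OpenAI Python client (pip install openai) and an OPENAI_API_KEY environment
# variable; the model name and prompt wording are illustrative assumptions.
# NOTE: sending an unpublished manuscript to an external service raises the
# confidentiality concerns discussed later in this article.
from openai import OpenAI


def draft_methods_critique(methods_section: str) -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assisting a peer reviewer. Evaluate only the "
                    "reproducibility and clarity of the methods provided."
                ),
            },
            {"role": "user", "content": methods_section},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    sample = "We surveyed 200 clinicians using a 12-item questionnaire..."
    print(draft_methods_critique(sample))
```

The output would serve only as a starting draft for the reviewer’s own critique, which reflects the supportive-tool framing described in the arguments below.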

Arguments in support of generative AI in peer review include:

  • Automation of time-intensive tasks, thereby reducing the extensive turnaround windows for manuscript evaluation
  • The rich potential of AI as a supportive tool, not as a total replacement for editors or reviewers
  • Use of AI to draft and refine decision letters and reviewer comments

Conversely, arguments in opposition to generative AI in peer review include:

  • Potential for unreliable, factually incorrect output
  • Discrimination resulting from large language models’ tendency toward biases
  • Non-confidentiality of valuable research data and proprietary information
  • Murky status of autogenerated content as plagiarism

Current State of Generative AI in Peer Review

The debate over whether generative AI should be permissible in peer review has raged for most of 2023, and in recent months, key funders have announced their stances. Foremost among them is the National Institutes of Health (NIH), the largest funder of medical research in the world. In June 2023, the NIH banned the use of generative AI during peer review, citing confidentiality and security as primary concerns; a Security, Confidentiality and Nondisclosure Agreement stipulating that AI tools are prohibited was then sent to all NIH peer reviewers. The Australian Research Council followed quickly afterward with a similar ban. Other funding bodies, such as the United States’ National Science Foundation and the European Research Council, currently have working groups developing position statements regarding generative AI use in peer review.

Publishers, however, are in a unique position. Some journals have proposed adopting generative AI tools to augment the current peer review process and to automate some tasks currently completed by editors or reviewers, which could meaningfully shorten the time required to complete a thorough peer review. Currently, few publishers have posted public position statements regarding the use of generative AI during peer review; an exception is Elsevier, which has stated that book and commissioned content reviewers are not permitted to use generative AI due to confidentiality concerns. The future of generative AI integration into journals’ manuscript evaluation workflows remains unclear.

Understanding the 2023 Theme Beyond Generative AI

Beyond the proposed role of generative AI and natural language processing in peer review, the 2023 theme of “Peer Review and The Future of Publishing” encompasses a wide range of current and anticipated shifts in the publishing process. These changes can have a domino effect on the community’s opinion of generative AI, potentially moving the needle regarding its use during peer review, and several other related trends are in play as well.

Each of these trends will affect peer review in crucial but unclear ways, which has led to a heightened sense of uncertainty regarding peer review throughout the medical research community. The 2023 theme for Peer Review Week aims to hold space for these concerns and allow stakeholders to collaboratively discuss the most effective routes forward to ensure that peer review is an effective and efficient process.

Countering Systemic Barriers to Equity in the Academic Publishing Process

In recent years, improving diversity has been a core priority of many industries, including scholarly publishing and academia. Almost every large publisher has a dedicated Diversity, Equity, and Inclusion page, and most have published statements dedicating resources toward diversifying their staff, editorial board members, and authors. However, few initiatives have targeted the systemic barriers in place that fundamentally contribute to this inequality. Here, we’ll explore some underlying issues within the overall research publication system that must be addressed in order to achieve equity in academic publishing.

Understanding the Problem

In order to explore potential mechanisms to counter systemic barriers to research publication, we need to start by defining the problem. Systemic barriers describe “policies, procedures, or practices that unfairly discriminate” against marginalized groups, including racial, ethnic, gender, sexual, disability, and religious minority groups. Because of these barriers, authors from minority groups do not have the same access to high-quality publication avenues as their non-minority counterparts; as a result, almost every academic publishing specialty area suffers from inequality and a lack of diverse perspectives. Likewise, members of minority groups who want to pursue careers in academic publishing face more blockades and challenges than those who are not in minority groups.

There are many systemic barriers that create injustice in academic publishing. In this article, we’ll focus on two barriers that have been the focus of extensive research in recent years, with an exploration of some evidence-supported practices that can help counteract them.

Unequal Access to Education

Unequal access to education, especially due to race, is fundamentally connected to the United States’ history. As Dupree and Boykin (2021) explain, during America’s founding, it was generally illegal for enslaved people to receive an education. Following the abolishment of slavery, the “separate but equal” precedent led to the establishment of Black higher education institutions that were woefully unequal to their White counterparts in quality and accessibility. As integration spread throughout America, minority scholars gained increased access to historically White higher education institutions but faced nearly intolerable levels of discrimination from students, professors, and administrators. Additionally, academia’s role in racial devaluation through research, such as the publication of biological determinism and cultural deficit models, cannot be ignored. Similar processes of begrudging integration and enrollment into higher education spaces can be seen across the dimensions of gender, disability, religion, and more.

To this day, higher education institutions are affected by their histories of inequality and the systems that were originally designed to operate within these frameworks of discrimination. Generally, becoming an academic researcher in any field requires at least an undergraduate degree, if not a Master’s or Doctoral degree; as such, limited access to these degrees translates to limited access to research and publication participation.

Evidence-based solutions

Employment & Promotion Inequality

Inequality affects both those who work within the academic publishing industry (journal editors, article reviewers, publication specialists, etc.) and the authors seeking publication in academic journals. Within academia, members of minority groups experience discrimination during the interviewing and employment process; this discrimination extends into promotion and tenure opportunities. In the publication industry, the lack of diversity is a known problem, with many initiatives targeted toward countering inequality. Many publishers have released statements acknowledging the inequities in their hiring practices, with Nature recognizing its own role in being “complicit in systemic racism” and publishing a list of actionable commitments it has made toward improving diversity. However, the efficacy of these commitments remains unclear.

Evidence-based solutions

Advocate for funding equality. Many large funding bodies, such as the National Institutes of Health, and universities alike have recently been criticized for inequality among research funding and grant awardees. Because their available funding is limited, researchers from minority groups are at a disadvantage when demonstrating publication excellence and research experience, which in turn leads to inequitable tenure and promotion decisions. To counteract this, organizations should evaluate their own funding demographics and overtly advocate for transparency and equality in funding allocation.

Originality in Academic Writing: The Blurry Lines Between Similarity, Plagiarism, and Paraphrasing

Across disciplines, most manuscripts submitted to academic journals undergo a plagiarism check as part of the evaluation process. Authors are widely aware of the industry’s intolerance for plagiarism; however, most of us don’t receive any specific education about what plagiarism is or how to avoid it. Here, we’ll discuss what actually constitutes plagiarism, clarify the important differences between similarity and plagiarism, and outline easy strategies to avoid the problem in the first place.

What actually is plagiarism?

The University of Oxford defines plagiarism as “presenting work or ideas from another source as your own… without full acknowledgement.” This is a fairly fundamental definition, and for most of us, this is the extent of our education on plagiarism.

When we think of plagiarism, we usually imagine an author intentionally copying and pasting text from another source. This form of plagiarism, called direct plagiarism, is actually fairly uncommon. More commonly, plagiarism takes the form of:

  • Accidental plagiarism. Citing the wrong source, misquoting, or unintentionally/coincidentally paraphrasing a source that you’ve never seen before is still considered plagiarism, even when done without intent.
  • Secondary source plagiarism. This is an interesting and challenging issue to tackle. This form of plagiarism refers to authors using a secondary source but citing the works found in that source’s reference list—for example, finding information in a review but citing the initial/primary study instead of the review itself. This misattribution “fails to give appropriate credit to the work of the authors of a secondary source and… gives a false image of the amount of review that went into research.”
  • Self-plagiarism. There’s still an ongoing debate over the acceptability of reusing one’s own previous work, with differing answers depending on the context. It’s widely agreed that reusing the same figure, for example, in multiple articles without correct attribution to the original publication is unethical; however, it’s less clear whether it’s acceptable to use the same verbatim abstract in a manuscript that you previously published as part of a conference poster. Copyright law can play a major role in the permissibility of self-plagiarism in niche cases.
  • Improper paraphrasing plagiarism. Some of us have heard from teachers or peers that, to avoid plagiarism, all you need to do is “rewrite it in your own words.” However, this can be misleading, as there’s a fine line between proper and improper paraphrasing. Regardless of the specific language used, if the idea or message being communicated is not your own and is not cited, it’s still plagiarism.

Plagiarism vs similarity

Many publishers, including Elsevier, Wiley, and SpringerNature, perform plagiarism evaluations on all manuscripts they publish using the software iThenticate. However, ‘plagiarism evaluation’ through iThenticate is a bit of a misnomer: iThenticate checks for similarity, not plagiarism. Though the difference between the two may seem subtle, their implications are entirely different.

Whereas plagiarism refers to an act of ethical misconduct, similarity refers to any portion of your paper that recognizably matches text found in previously published literature in iThenticate’s content database. Similarity can include text matches in a manuscript’s references, affiliations, and frequently used/standardized terms or language; it’s natural and nonproblematic, unlike plagiarism.

A similarity report, such as the one generated by iThenticate, is used by editors as a tool to investigate potential concerns for plagiarism. If an iThenticate similarity report flags large sections of text as similar to a previously published article, the editors can then use this as a starting point to evaluate whether the article is, in fact, plagiarized.
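
To make the similarity-versus-plagiarism distinction concrete, here is a toy sketch of how text similarity can be measured: counting how many of a manuscript’s five-word phrases also appear in a previously published source. This is only a schematic stand-in for what commercial services such as iThenticate do against their large content databases; the shingle size and the example texts are arbitrary assumptions.

```python
# Toy illustration of text *similarity* (not plagiarism) detection: the share
# of a manuscript's five-word phrases that also appear in a previously
# published source. Real services compare against large content databases;
# the shingle size and example texts are arbitrary assumptions.
def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of overlapping n-word phrases ("shingles") in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def similarity(manuscript: str, source: str, n: int = 5) -> float:
    """Fraction of the manuscript's shingles that also occur in the source."""
    a, b = shingles(manuscript, n), shingles(source, n)
    if not a:
        return 0.0
    return len(a & b) / len(a)


if __name__ == "__main__":
    ms = ("Peer review serves as a vetting process for journals to filter "
          "out unsuitable manuscripts.")
    src = ("Peer review serves as a vetting process for journals, filtering "
           "unsuitable manuscripts.")
    score = similarity(ms, src)
    print(f"{score:.0%} of the manuscript's 5-word phrases appear in the source")
    # A high score only flags overlap; an editor still has to judge whether the
    # matched text is a citation, standard terminology, or actual plagiarism.
```

Note that even an exact match can be entirely innocent (a reference, an affiliation, a standard phrase), which is exactly why the score is a starting point for editorial judgment rather than a verdict.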

Strategies to avoid plagiarism

Being accused of plagiarism—especially direct, intentional plagiarism—can be a serious ethical issue, with long-term implications for your job prospects and career. The best way to avoid this issue is to use preventative measures, such as the ones discussed here.

  1. Educate yourself about what plagiarism truly is and how it occurs. If you’ve made it this far in the article, you’re already making great progress! Consider reviewing the sources cited in this article to continue your education.
  2. Start citing from the note-taking phase. Many times, accidental plagiarism is the result of forgetting the source of an idea or statistic during quick, shorthand note-taking. To avoid this, start building your reference list from the beginning of your research phase. Consider adding a quick citation for every single line of text—including citing yourself for your own ideas! Reference managers like EndNote and Zotero are great tools for this.
  3. Understand proper and improper paraphrasing. Learn how to correctly paraphrase someone else’s work and the importance of citing your paraphrased text. If you come across a phrase or sentence that perfectly summarizes an idea, don’t be afraid to include it as a direct, cited quotation!
  4. Consider cultural differences in plagiarism policies. This article aligns with the United States’ view of plagiarism as a serious ethical offense. However, this isn’t the case in all countries. In some East Asian countries, for example, the concepts of universal knowledge and memorization as a sign of respect lead to a wider cultural acceptance of practices that would be considered plagiarism in America. If you’re unsure about a certain journal’s expectations, it’s always recommended to ask the editors.
  5. Use a similarity checker. While a similarity review won’t directly identify plagiarism, it can serve as a final scan for any text you copied and pasted with the intention of rewriting but forgot to remove, and it can help pick up accidental plagiarism! If your institution doesn’t have access to iThenticate, Turnitin, or Crossref Similarity Check, there are several free similarity scanners available online.

Current and Future Trends of the Academic Publishing Industry’s Environmental Effects

As the academic publishing industry becomes increasingly cognizant of the United Nations’ Sustainable Development Goals (SDGs) and begins to develop best practices for weaving sustainability into our operations, it’s crucial that we acknowledge the environmental effects of our industry. By reviewing those effects along with shifts in the industry, we can project—and influence—our future trajectory toward reduced environmental impact.

Current Environmental Outputs of Scholarly Communications

Anyone involved in scholarly communications knows that we’re currently in a time of rapid change and process development. Print-based academic journals are part of the commercial print sector, and researchers from HP have identified paper waste byproducts from the publication production process as a primary source of that industry’s greenhouse gas emissions. However, over the last twenty years, scholarly publishing has largely shifted toward digital processing and publishing, leading to a complex set of environmental benefits and drawbacks.

Digital publishing and open access are inextricably linked concepts, and there’s much to be said both supporting and criticizing this paradigm shift’s impact on our industry. Digital publishing has massively reduced demand for print versions of materials, from the printed manuscript drafts once mailed to journal editors for evaluation to the finalized journal issues sent to journal subscribers, leading to reduced paper waste. It has also reduced the emissions and impacts associated with mailing and transporting print materials.

These improvements, however, come at the cost of increased email and website use. Though electronic communication has many undeniable advantages over mail—for example, a single email requires around 1.7% of the energy of a single paper letter delivery—there are still consequences to these digital shifts. The physical components of electronics are major contributors to environmental harm, both through their manufacturing requirements and through inefficient waste disposal. Data generation and use are also a large area of concern, especially as big data becomes increasingly widespread. This is especially concerning for the academic publishing industry, as big data is rapidly expanding throughout both the research sectors our industry works with and the scholarly communications industry itself.

Future Trends

As our industry continues to evolve in step with technological developments and growth in adjacent sectors, such as medical technology and digital publications, we’ll likely continue to see rapid shifts in both expected and unexpected directions. Here are a few trends we expect will continue to flourish in upcoming years:

Increasing industry recognition and support of social causes. Recently, many publishers have placed increased focus and attention on diversity and equity in publishing. Relevant industry shifts range from initiatives to improve diverse hiring practices to strategies to financially assist authors from low- and middle-income countries who may not be able to afford rapidly increasing article processing charges, with many publishers offering waivers for qualifying authors. In the last three years, sustainability has become another forefront social issue that publishers are addressing both by promoting awareness and through policy development.

Reduced in-person office presence. Though the industry’s shift toward the work-from-home model was primarily catalyzed by the COVID-19 pandemic, the trend toward remote work seems to be here to stay. This results in reduced office space requirements and, consequently, reduced energy consumption (air conditioning, lighting, technology, etc) and paper waste.

Increased research publication focus on climate change. A literature review found that the number of climate change–focused academic journal publications increased more than six-fold between 2005 and 2014, and output has continued to grow, with the number of publications rising steadily every year since 1997. The more we support systematic, reproducible environmental research, the better we’ll understand our current crisis and our opportunities to counteract climate change.

Increased burden of websites/portals. Digital publishing practices aren’t a panacea for our industry’s environmental impacts. Data and websites generate their own carbon emissions and environmental impacts, and as the industry continues shifting toward digital publishing, we must stay aware of these drawbacks.

Influx in sheer number of publications requires more resources. In today’s publishing landscape, authors are rewarded for the quantity of their publications, not their quality. This has led to a staggering increase in the number of research manuscripts published each year. Each of these publications requires resources, and as the size of our industry expands, so does our environmental impact.

How you can impact scholarly publishing’s environmental effects

If you want to become more involved in our industry’s efforts to promote sustainability, there are several ways to do so:

  1. Research and consider joining the SDG Publishers Compact Fellows. This group acts to support the UN’s Sustainable Development Goals within the publications industry by providing action tips, resources, and policy initiatives.
  2. Direct interested research staff toward the Intergovernmental Panel on Climate Change. This group has an open request for volunteer authors to contribute to its reports in a variety of roles, including lead authors, review editors, chapter scientists, and expert reviewers. There are opportunities for non-researchers, too: IPCC also welcomes technical support unit volunteers, who assist with report preparation, organization, and editing.
  3. Advocate for digital publication, carbon neutrality/offset, and sustainable paper use. By acting as a sustainability champion in your workplace, you can potentially affect your employer’s practices within your team and company-wide. Sustainability initiatives tend to have a domino effect—one small action on your part could lead to industry-wide change!

Open Access: History, 20-Year Trends, and Projected Future for Scholarly Publishing

It’s hard to imagine where the scholarly publishing landscape would be today without open access. As we approach two decades since the inception of open access, it’s important to evaluate how this model has revolutionized research and to consider its potential future directions.

A Brief History of Open Access

1991: The beginning of the open access movement is commonly attributed to the formation of arXiv.org (pronounced ‘archive’), the first widely available repository for authors to self-archive their own research articles for preservation. arXiv.org is still widely used for article deposition, hosting more than 2 million articles as of January 2023.

1994: Dr. Stevan Harnad’s ‘A Subversive Proposal’ recommended that authors publish their articles in a centralized repository for free, immediate public access, leveraging the potential of the up-and-coming internet and combating the rapidly increasing publication costs and slow speed of print publishing (ie, the ‘serials crisis’). Though this was not the first traceable mention of what would become open access publication, it’s widely considered the start of an international dialogue between scientific researchers, software engineers, journal publication specialists, and other interested stakeholders.

2000-2010: Open access journals began appearing within the publishing landscape. Throughout the decade, a heated back-and-forth debate persisted between open access proponents and traditional non-OA publishers (https://www.bmj.com/content/334/7587/227).

2001: The Budapest Open Access Initiative (BOAI) resulted in a declaration establishing the need for unrestricted, free-to-readers access to scholarly literature. This initiative is considered the first coined use of the phrase ‘open access.’

2003: As a follow-up to BOAI, the 2003 Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities expanded upon the definitions and legal structure of open access and was supported by many large international research institutes and universities.

2013-present: Multiple governments have announced mandates supporting or requiring open access publishing, including the United States, the United Kingdom, India, Canada, Spain, China, Mexico, and more.

2018: cOAlition S was formed by several major funders and governmental bodies to support full and immediate open access of scholarly literature through Plan S.

Current State of Open Access

Today, there are four primary submodels of scholarly open access article publishing:

  • Gold: all articles are published through open access, and the journal is indexed by the Directory of Open Access Journals (DOAJ). The author is required to pay an article processing charge.
  • Green: manuscripts require reader payment on the publisher’s website but can be self-archived in a disciplinary open access archive, such as ArXiv, or an institutional open access archive. A time-based embargo period may be required before the article can be archived. The authors are not required to pay an article processing charge.
  • Hybrid: authors have the choice to publish their work through the gold or green open access models.
  • Bronze: a newer and less common option than gold, green, or hybrid open access, bronze open access means that manuscripts are published in a subscription-based journal without a clear license.

Though open access isn’t yet the default for publishing, it’s a widely available model that’s quickly becoming an expected offering from journals. Additionally, research funding bodies, such as the Wellcome Trust and the National Institutes of Health, are increasingly requiring open access publication as a term of funding.

Since its launch in 2003, the Directory of Open Access Journals (DOAJ) has risen to the forefront as one of the most comprehensive community-curated lists of reputable open access journals. Unfortunately, however, the rise of open access has also enabled a widespread increase in predatory publishing practices, and counteracting predatory publishers is expected to be a primary focus of future open access development.

Open Access Growth Trends

Before diving into the numbers, it’s important to note that open access reporting is unstandardized. Depending on the databases assessed and definitions of open access, document types, and related terms, the reported number of open access articles per year can differ dramatically between reports. However, overarching trends remain relatively consistent across reports.

In 2018, the European Commission’s Directorate-General for Research and Innovation, an official research body of the European Union, found that 30.9% of publications were open access in 2009, a share that increased to 41.2% in 2016 and then slightly tapered off to 36.2% in 2018. As of 2019, 31% of funders required open access publishing of research, 35% encouraged open access publishing, and 33% embraced no overt policy or stance.

In 2022, the Research Information Observatory partnered with the Max Planck Digital Library and Big Data Analytics Group to compile and publish their data paper, “Long Term Global Trends in Open Access.” Their report found that the percentage of articles that are accessible without paywall subscriptions has increased substantially: around 30% of articles published in 2010 were openly accessible, which jumped to around 50% of articles published in 2019.

Future Expectations and Projections for Open Access