Publishing Strategy · 8 min read · Updated Mar 25, 2026

PLOS ONE's AI Policy: How the World's Largest Journal Handles AI Disclosure

PLOS ONE requires AI disclosure in Methods and during submission, prohibits AI authorship, and enforces compliance across 15,000+ articles per year through author attestation and community scrutiny.

Senior Researcher, Oncology & Cell Biology

Author context

Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.


Fifteen thousand papers a year. That's what PLOS ONE publishes, more than any other single journal in the world. When a journal operates at that scale, AI policy enforcement becomes a fundamentally different challenge than it is at a journal publishing 200 articles per year. PLOS ONE can't manually screen every manuscript for AI-generated text the way Nature or Cell can. Instead, it relies on a system built around author honesty, peer review flagging, and the collective scrutiny that comes with being fully open access. Here's how it works and what you need to know before submitting.

The PLOS AI policy

PLOS sets its AI policy at the organizational level, covering all PLOS journals:

  1. AI can't be an author. Consistent with ICMJE and COPE guidelines, AI tools don't meet authorship criteria: they can't take accountability for the work or approve the final manuscript.
  2. AI use must be disclosed in Methods. If you used any generative AI tool during manuscript preparation, describe it in the Methods section. Specify the tool, version, and purpose.
  3. AI-generated images are prohibited. No figures or visual content produced by generative AI tools.
  4. Authors bear full responsibility. Every listed author must verify the accuracy of all content, including AI-assisted sections.
  5. The submission system asks about AI use. PLOS's online submission process includes questions about AI tool usage, creating a formal record of your declaration.

How the policy applies across PLOS journals

PLOS publishes a family of journals, and the AI policy is identical across all of them:

| Journal | Focus | Articles/year |
| --- | --- | --- |
| PLOS ONE | All sciences | ~15,000 |
| PLOS Biology | Life sciences | ~300 |
| PLOS Medicine | Clinical medicine | ~200 |
| PLOS Genetics | Genetics | ~400 |
| PLOS Computational Biology | Computational biology | ~500 |
| PLOS Pathogens | Infectious disease | ~400 |
| PLOS Neglected Tropical Diseases | Tropical medicine | ~400 |
| PLOS Water | Water science | ~100 |
| PLOS Digital Health | Digital health | ~200 |

The volume disparity is enormous: PLOS ONE publishes roughly 15,000 articles a year, more than all other PLOS journals combined. This scale affects how the AI policy functions in practice, even though the policy text is identical.

Scale changes everything about enforcement

At PLOS Biology (~300 papers/year), an editor might personally review the Methods section of every accepted paper for AI disclosure compliance. At PLOS ONE (~15,000 papers/year), that's not possible. Here's how enforcement actually works:

Author self-reporting: The primary mechanism. Authors declare AI use during submission and include it in Methods. PLOS trusts this process, as it does with conflict of interest disclosures and data availability statements.

Peer review flagging: Reviewers who notice AI-generated patterns in text can flag concerns. PLOS ONE uses a "soundness, not significance" review standard: reviewers assess whether the science is technically sound rather than whether it represents a major advance. But AI-related concerns about text quality or integrity fall squarely within the reviewer's purview.

Automated screening: PLOS has invested in technological solutions for detecting potential issues in manuscripts at scale. While the specifics aren't fully public, the organization has acknowledged exploring AI-based tools for manuscript integrity screening.

Post-publication scrutiny: This is where PLOS ONE's open-access model creates a distinctive enforcement mechanism. Every paper is free to read, comment on, and critique. PubPeer, social media, and PLOS's own commenting system all serve as post-publication oversight channels. With 15,000 papers per year freely available, the community's eyes are effectively an extension of editorial oversight.

Writing the disclosure for PLOS ONE

PLOS ONE reviews manuscripts for technical soundness, not impact. The AI disclosure should be straightforward and complete:

Standard disclosure:

"During the preparation of this manuscript, the authors used ChatGPT (GPT-4, OpenAI) to improve the language and clarity of the Discussion section. All AI-generated suggestions were reviewed and edited by the authors, who take full responsibility for the content of the published article."

For a paper with computational analysis:

"The authors used GitHub Copilot (Microsoft) to assist with writing R scripts for the statistical analysis. ChatGPT (GPT-4, OpenAI) was used to improve the readability of the Methods and Results sections. All code was validated against manual calculations, and all text was reviewed by the authors. The authors take full responsibility for the published content."

For a paper by non-native English speakers:

"This manuscript was originally drafted in Mandarin and translated/edited using ChatGPT (GPT-4, OpenAI) for English language clarity. All translated text was reviewed by the authors for scientific accuracy. The authors take full responsibility for the content."

PLOS has been publicly supportive of AI-assisted language editing for non-native English speakers. The organization views language barriers as an equity issue in science, and AI tools that help researchers communicate their work in English are seen positively, as long as they're disclosed.

What requires disclosure at PLOS ONE

| Use case | Disclosure required? | Notes |
| --- | --- | --- |
| Grammar/spell check | No | Standard tools exempt |
| ChatGPT for language editing | Yes | Methods section |
| AI for data analysis code | Yes | Specify which analyses |
| AI as research subject | No (research method) | Standard Methods |
| AI-generated figures | Prohibited | Data-derived plots fine |
| Translation of manuscript | Yes | Name tool and languages |
| AI for supplementary text | Yes | Part of the submission |
| AI for data visualization code | Yes | Plotting scripts count |
| AI for reference formatting | No | Standard tools exempt |
| AI for responding to reviewers | Not strictly required | Update Methods if manuscript was substantially revised |

Consequences of non-disclosure

PLOS follows COPE guidelines for handling publication ethics issues:

During review:

  • Editor contacts corresponding author
  • AI disclosure must be added to Methods
  • Deliberate concealment can lead to rejection
  • PLOS ONE's academic editors (who handle specific manuscripts) decide on a case-by-case basis

After publication:

  • Correction for minor undisclosed language editing
  • Expression of concern for unclear scope
  • Retraction for fabricated data or false claims
  • COPE investigation for systematic issues

The open-access multiplier: Every PLOS ONE paper is freely accessible. A correction or retraction shows up in Google Scholar, PubMed, and Crossref without paywall barriers. For PLOS ONE specifically, the high volume means there are always many eyes on recent publications. Post-publication review platforms like PubPeer are particularly active on PLOS ONE papers because the open-access model makes it easy to read and evaluate them.

PLOS ONE's academic editor model: Unlike journals with a small, full-time editorial team, PLOS ONE relies on thousands of academic editors (practicing researchers who volunteer to handle manuscripts). Each academic editor makes independent decisions about AI disclosure compliance. This decentralized model means enforcement can vary somewhat between editors, though PLOS provides guidelines and training.

The data availability intersection

PLOS was one of the first publishers to mandate data availability for all published papers. This requirement intersects with AI disclosure in important ways:

  • If AI helped generate analysis code, that code should be deposited alongside the data
  • If AI-generated scripts are part of your analysis pipeline, they should be reproducible without AI tool access
  • PLOS ONE's data availability policy means your methods are auditable, which increases the practical consequences of incomplete disclosure
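To make "reproducible without AI tool access" concrete and testable, one option is to record a checksum of the result file when you deposit the pipeline, then compare it after a clean rerun on a machine with no AI tool in the loop. This is a minimal sketch of my own, not a PLOS requirement; the file contents and naming are hypothetical:

```python
import hashlib

def result_fingerprint(result_bytes: bytes) -> str:
    # SHA-256 checksum, stored alongside the deposited data and scripts
    return hashlib.sha256(result_bytes).hexdigest()

# At deposit time: fingerprint the output of the (AI-assisted) pipeline
deposited = result_fingerprint(b"group,mean\ncontrol,2.24\ntreated,3.10\n")

# At audit time: rerun the deposited scripts without any AI tool available
# and confirm the regenerated output matches byte for byte
rerun = result_fingerprint(b"group,mean\ncontrol,2.24\ntreated,3.10\n")
assert rerun == deposited, "pipeline does not reproduce the deposited results"
```

Byte-for-byte comparison is strict; if your pipeline has legitimate nondeterminism (e.g., unseeded bootstrap resampling), compare parsed values within a tolerance instead.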

The transparency chain of data availability, AI disclosure, and open access creates a level of accountability at PLOS ONE that is actually higher than at many subscription journals, despite the volume difference.

Comparison with other high-volume journals

| Feature | PLOS ONE | Scientific Reports | Nature Communications | BMJ Open | Frontiers journals |
| --- | --- | --- | --- | --- | --- |
| Publisher | PLOS | Springer Nature | Springer Nature | BMJ Publishing | Frontiers |
| Articles/year | ~15,000 | ~20,000 | ~6,000 | ~2,500 | Varies by journal |
| AI authorship | Prohibited | Prohibited | Prohibited | Prohibited | Prohibited |
| Disclosure location | Methods | Methods | Methods | Methods | Methods |
| AI image ban | Yes | Yes | Yes | Yes | Yes |
| Access model | Gold OA | Gold OA | Gold OA | Gold OA | Gold OA |
| APC | ~$1,805 | ~$2,190 | ~$5,790 | ~$2,900 | Varies |
| Data availability mandate | Yes (strict) | Yes | Yes | Yes | Yes |
| Review standard | Technical soundness | Technical soundness | Significance + soundness | Technical soundness | Significance + soundness |

PLOS ONE and Scientific Reports are the two largest single journals by volume. Both use "technical soundness" as their primary review criterion (rather than requiring significance or novelty). Both have similar AI policies. The main difference is that PLOS ONE's data availability mandate is more strictly enforced, which creates a stronger transparency framework around AI use.

Practical advice for PLOS ONE submissions

For all submissions:

  • Disclose AI use in Methods. Be specific, but don't overthink it; PLOS ONE's review standard focuses on soundness, not style.
  • Complete the submission form AI declaration honestly. This creates a record that should be consistent with your Methods section.
  • If you're a non-native English speaker using AI for translation or language editing, say so openly. PLOS is supportive of this use case.

For computational papers:

  • Deposit all code, including AI-assisted scripts, in a public repository
  • PLOS ONE's data availability policy means your code is auditable; make sure AI-generated code actually works
  • If your paper describes an AI method, clearly separate the research description from any writing AI disclosure
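One way to act on "make sure AI-generated code actually works" is to check each AI-assisted result against an independent manual calculation before depositing the script. A minimal sketch with illustrative data (the numbers and names are mine, not from PLOS guidance):

```python
import statistics

# Illustrative measurements (hypothetical data)
measurements = [2.1, 2.4, 1.9, 2.6, 2.2]

# Value produced by the AI-assisted analysis script
# (here stood in by a stdlib call)
ai_assisted_mean = statistics.mean(measurements)

# Independent manual re-calculation from first principles
manual_mean = sum(measurements) / len(measurements)

# Trust the deposited code only once the two agree
assert abs(ai_assisted_mean - manual_mean) < 1e-9
print(f"validated mean: {manual_mean:.2f}")  # → validated mean: 2.24
```

The same pattern scales to any statistic the AI-assisted script computes: recompute it by hand (or with a second, independent tool) and assert agreement before the code goes into the public repository.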

For clinical and biomedical papers:

  • Don't process patient data through cloud AI tools
  • Clinical interpretations should be human-generated
  • If your paper includes a systematic review or meta-analysis, disclose any AI assistance in the screening process

For papers in any discipline:

  • PLOS ONE publishes across all sciences; the AI disclosure format is the same regardless of field
  • If you used AI tools during revision, update the Methods disclosure in your revised manuscript
  • Don't assume that PLOS ONE's high acceptance rate means relaxed AI policy enforcement. It doesn't.

Before submission checklist:

  • [ ] AI disclosure in Methods section
  • [ ] Tool name, version, and use case specified
  • [ ] Submission form AI declaration completed
  • [ ] No AI-generated images
  • [ ] Data and code deposited in public repository
  • [ ] AI-generated code validated independently
  • [ ] All co-authors aware of disclosure
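As a final sanity check before upload, a short script can confirm that the Methods text contains both a named tool and responsibility language. This is a rough heuristic of my own, not a PLOS tool; the tool list and patterns are assumptions you should adapt to your workflow:

```python
import re

# Hypothetical list of tool names to look for; extend as needed
TOOL_PATTERN = re.compile(r"\b(ChatGPT|GPT-4|Copilot|Claude|Gemini)\b")

def looks_like_disclosure(methods_text: str) -> bool:
    """Heuristic: a named AI tool and responsibility language both appear."""
    has_tool = bool(TOOL_PATTERN.search(methods_text))
    has_responsibility = "responsib" in methods_text.lower()
    return has_tool and has_responsibility

methods = ("The authors used ChatGPT (GPT-4, OpenAI) to improve clarity. "
           "The authors take full responsibility for the published content.")
print(looks_like_disclosure(methods))  # → True
print(looks_like_disclosure("We measured cell viability at 48 h."))  # → False
```

A passing check doesn't mean the disclosure is adequate, only that it isn't missing outright; the substance still needs to match what you actually did.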

A free manuscript assessment can help verify that your manuscript meets PLOS ONE's requirements before you submit.

References

  1. PLOS AI policy
  2. PLOS ONE author guidelines
  3. PLOS data availability policy
  4. ICMJE Recommendations
  5. COPE position statement on AI
