PLOS ONE Acceptance Rate
PLOS ONE's acceptance rate is about 31%, down from roughly 68% a decade ago. Use it as a selectivity signal, then sanity-check scope, editorial fit, and submission timing.
Research Scientist, Neuroscience & Cell Biology
Author context
Works across neuroscience and cell biology, with direct expertise in preparing manuscripts for PNAS, Nature Neuroscience, Neuron, eLife, and Nature Communications.
Journal evaluation
Want the full picture on PLOS ONE?
See scope, selectivity, submission context, and what editors actually want before you decide whether PLOS ONE is realistic.
What PLOS ONE's acceptance rate means for your manuscript
Acceptance rate is one signal. Desk rejection rate, scope fit, and editorial speed shape the realistic path more than the headline number.
What the number tells you
- PLOS ONE accepts roughly 31% of submissions, and desk rejection accounts for a disproportionate share of those rejections.
- Scope misfit drives most desk rejections, not weak methodology.
- Papers that reach peer review are judged on technical soundness alone; roughly 41% of them are ultimately accepted.
What the number does not tell you
- Whether your specific paper type (review, letter, brief communication) faces the same rate as full articles.
- How fast you will hear back — check time to first decision separately.
- What open access costs — $2,477 for gold OA.
Quick answer: PLOS ONE's acceptance rate is approximately 31% (2024), down from 68% a decade ago. The decline reflects a larger, more heterogeneous submission pool, not raised editorial standards. PLOS ONE still evaluates scientific soundness only, not novelty. Papers are rejected for methodological problems, incomplete reporting, or failing data sharing requirements, not for being "incremental." If your methods are rigorous and your conclusions match your data, PLOS ONE remains one of the most accessible peer-reviewed venues in science.
PLOS ONE Acceptance Rate Over Time
| Period | Acceptance Rate | Desk Rejection Rate | What was happening |
|---|---|---|---|
| 2006-2009 | ~70% | Low | Small, self-selected submission pool |
| 2010-2012 | ~65-70% | ~15% | Rapid growth, still mostly appropriate submissions |
| 2013-2015 | ~55-65% | ~20% | Became the largest OA journal; cascade submissions began |
| 2016-2018 | ~45-55% | ~25% | Competition from Scientific Reports, more desk rejections |
| Jan-Jun 2020 | 48.2% | 23.0% | Pre-COVID baseline |
| Jul-Dec 2020 | 49.0% | 22.8% | COVID surge, more submissions accepted |
| Jan-Jun 2021 | 49.9% | 21.5% | Pandemic peak, highest acceptance in years |
| Jul-Dec 2021 | 47.9% | 21.4% | Beginning of post-pandemic normalization |
| Jan-Jun 2022 | 41.4% | 22.9% | Screening tightened significantly |
| Jul-Dec 2022 | 37.3% | 22.1% | Acceptance dropping fast |
| Jan-Jun 2023 | 30.7% | 30.8% | Desk rejection now exceeds acceptance rate |
2020-2023 data from PLOS's official journal information page, reported semi-annually.
Three forces drove the decline from 70% to 31%. First, PLOS ONE tightened its screening: data availability requirements became stricter, and editors got more aggressive about desk-rejecting incomplete submissions. Second, competition from Scientific Reports, Frontiers, and other mega-journals pulled away some well-matched submissions, leaving PLOS ONE with a higher proportion of cascade rejects from specialty journals. Third, submission volume from authors unfamiliar with the soundness-only model kept growing, increasing the mismatch rate. The journal didn't raise its standards; the submission pool changed around it.
By 2012-2015, PLOS ONE had become the go-to destination for papers rejected from higher-tier journals. A paper rejected from Nature for "lack of novelty" often landed at PLOS ONE next. But "rejected from Nature for lack of novelty" doesn't always mean "methodologically sound." Sometimes a paper gets rejected from Nature because it's incremental AND has methods problems. PLOS ONE still rejects those. More poorly matched manuscripts mean more rejections, and that pattern has only intensified as submission volume has grown.
What the Soundness-Only Policy Actually Means
This is the most important thing to understand about PLOS ONE, and the most frequently misunderstood.
What reviewers are NOT asked:
- Is this finding novel?
- Is this likely to influence the field?
- Is this worth publishing in a journal with broad readership?
What reviewers ARE asked:
- Are the methods appropriate for the research question?
- Is the sample size justifiable?
- Do the conclusions follow from the data, without overclaiming?
- Is the data available for verification?
- Were appropriate ethical approvals obtained?
- Are the statistical analyses correct?
That last item is where a lot of papers fail. PLOS ONE reviewers catch statistical errors thoroughly: the wrong test for the data type, missing corrections for multiple comparisons, effect sizes reported without confidence intervals, and sweeping conclusions drawn from underpowered samples.
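The multiple-comparisons item is worth a concrete look. The sketch below implements the Holm-Bonferroni step-down procedure in plain Python; the p-values are hypothetical, and for real analyses you would use a vetted implementation (e.g., statsmodels' `multipletests`) rather than rolling your own.

```python
# Holm-Bonferroni step-down correction: a minimal sketch.
# P-values below are hypothetical illustration data.

def holm_correction(p_values, alpha=0.05):
    """Return a list of booleans: whether to reject H0 for each p-value."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Compare the k-th smallest p-value against alpha / (m - k)
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

pvals = [0.001, 0.02, 0.03, 0.2]
print(holm_correction(pvals))  # only the smallest survives correction
```

The point for authors: four "significant" uncorrected p-values can collapse to one after correction, which is exactly the kind of overclaim PLOS ONE reviewers flag.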
The soundness-only policy doesn't mean "lower bar." It means a different bar. A paper with impeccable methods and a boring result passes. A paper with an exciting result but inadequate controls fails.
Stage-by-Stage Acceptance Funnel
What actually happens to 100 manuscripts submitted to PLOS ONE:
| Stage | Papers remaining (out of 100) | What happens |
|---|---|---|
| Submitted | 100 | Manuscript enters editorial system |
| Desk review | 100 | Editor checks scope, completeness, ethics docs |
| Desk rejected | -25 | Out-of-scope, missing data statements, obvious gaps |
| Sent to peer review | 75 | Assigned to Academic Editor + 2-3 reviewers |
| Rejected after review | -31 | Fatal methodology or statistics problems |
| Revision requested | 44 | Most papers that survive review get major revision requests |
| Rejected after revision | -13 | Authors didn't adequately address reviewer concerns |
| Accepted | 31 | Final acceptance after 1-2 revision rounds |
| Published | 31 | Median 10 days from acceptance to online publication |
The biggest drop isn't peer review, it's the desk. One in four submissions never reaches a reviewer. Among papers that do reach review, roughly 41% (31 out of 75) ultimately get accepted. That's a much friendlier number than the headline 31%, and it's why getting past the desk is the single most important step.
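The conditional-probability point above can be sketched as simple arithmetic, using only the headline figures the article cites (25% desk rejection, 31% overall acceptance):

```python
# Funnel arithmetic: overall vs. past-the-desk acceptance.
submitted = 100
desk_rejected = 25                              # ~1 in 4 never reaches a reviewer
reached_review = submitted - desk_rejected      # 75 papers go to peer review
accepted = 31                                   # headline acceptance

headline_rate = accepted / submitted            # 0.31
conditional_rate = accepted / reached_review    # ~0.41 once past the desk

print(f"headline: {headline_rate:.0%}, past the desk: {conditional_rate:.0%}")
```

The gap between 31% and 41% is the quantitative case for treating desk-readiness (scope, checklist items, data statements) as the highest-leverage part of preparation.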
Acceptance Rate by Field and Paper Type
PLOS ONE's overall 31% masks substantial variation:
| Category | Estimated acceptance | Why |
|---|---|---|
| Biomedical research | 25-30% | Highest submission volume, most competition |
| Social sciences | 35-40% | Lower submission volume relative to reviewer pool |
| Environmental science | 30-35% | Growing field with strong submission numbers |
| Computational / methods papers | 35-45% | Technical soundness is easier to evaluate |
| Replication studies | 40-50% | PLOS ONE actively encourages replication |
| Negative results | 40-50% | One of the few venues that genuinely welcomes well-designed studies with unexpected results |
These numbers are estimates based on submission patterns and editorial commentary. PLOS ONE doesn't publish field-specific acceptance rates.
The Full Timeline: From Submission to Publication
| Stage | Median days |
|---|---|
| Submission to first editorial decision | 17 |
| Submission to first decision (including peer review) | 45 |
| Submission to final decision | 87 |
| Submission to acceptance | 188 |
| Submission to publication | 204 |
| Acceptance to publication | 10 |
Source: PLOS official journal information page, Jan-Jun 2023.
Budget 6-7 months from submission to a citable publication. If your timeline is tighter (thesis deadline, grant milestone), note that PLOS ONE's 45-day median to first decision is fast, but the full cycle including revisions is comparable to many specialty journals. The 10-day acceptance-to-publication window is genuinely fast: once your paper is accepted, it's online quickly.
For the full breakdown of what affects your timeline, see the PLOS ONE review time guide.
How PLOS ONE Compares to Other High-Volume Journals
| Journal | Acceptance rate | APC | Review model | Best for |
|---|---|---|---|---|
| PLOS ONE | 31% | $2,477 | Soundness-only | Solid work that doesn't need a novelty signal |
| Scientific Reports | 57% | $2,850 | Soundness + minimal novelty | Lower bar, Nature brand, faster decisions |
| PeerJ | ~45% | $1,395 | Soundness-only | Budget-conscious, similar model to PLOS ONE |
| Frontiers series | 40-60% | $1,150-$2,950 | Soundness + specialty | Field-specific audiences |
| IEEE Access | ~50% | $1,750 | Broad scope | Engineering and computer science |
| BMC series | 35-50% | $1,890-$2,890 | Varies by journal | Field-specific within Springer Nature |
PLOS ONE's 31% is the most selective among these journals, but it also carries the strongest brand recognition in the open-access mega-journal space. The practical choice between PLOS ONE and Scientific Reports often comes down to field norms, desired brand association, and whether the Nature Portfolio affiliation matters to your audience. Both are broad and soundness-led, but Scientific Reports has a higher acceptance rate and a different reviewer culture.
Readiness check
See how your manuscript scores against PLOS ONE before you submit.
Run the scan with PLOS ONE as your target journal. Get a fit signal alongside the IF context.
What Rejection Looks Like at PLOS ONE
| Rejection type | When it happens | Common triggers | How to avoid it |
|---|---|---|---|
| Desk rejection (~25% of submissions) | Within 17 days | Missing ethics approval, no data availability statement, out of scope, incomplete methods | Read the submission checklist line by line. Every missing item is a desk rejection risk. |
| Rejection after review (~31% of submissions) | 45-90 days | Flawed statistical analysis, inadequate controls, conclusions that overreach the data | Get a statistics review before submitting. A PLOS ONE soundness scan catches exactly these issues. |
| Rejection after revision (~13% of submissions) | 90-188 days | Point-by-point responses that dodge reviewer concerns, new data that introduces new problems | Address every reviewer point directly. If you disagree, explain why with evidence; don't just ignore it. |
The most painful rejection is the third kind. You've invested 3-6 months, gone through revision, and still don't get accepted. This almost always happens because the revision response was superficial. The authors who survive revision are the ones who treat each reviewer comment as a genuine question that deserves a real answer, even when the reviewer is wrong.
How the Review Process Affects Your Odds
Soundness-only review means reviewers can't reject your paper because they think the question is boring or the results are unsurprising. If the methods are sound and the conclusions match the data, the paper should be accepted. In practice:
- Papers with rigorous methods and modest results have a real home here
- Reviewers still request revisions (expect 1-2 rounds in most cases)
- The most common rejection reason is methodological weakness, not lack of novelty
- Statistical issues are the single fastest path to rejection
If your methods are bulletproof, your stats are appropriate, and your conclusions don't overreach, your acceptance probability is substantially higher than 31%. The bulk of rejections come from papers with technical problems that could have been caught before submission.
A PLOS ONE methods and stats check catches the statistical design and data-sharing gaps that reviewers flag in the majority of rejections, so you're not handing them the reason to say no.
What We've Seen in Manuscripts Targeting PLOS ONE
Having analyzed hundreds of manuscripts being considered for PLOS ONE through our PLOS ONE submission readiness check, we can put the 31% acceptance rate in sharper context.
The researchers who get accepted aren't doing anything exotic. They're doing the basics well: clean methods, appropriate statistics, conclusions that match their data, and data that's actually ready for public sharing. What separates the 31% who get in from the 69% who don't is almost always methodological rigor, not topic choice or scientific impact.
The most common rejection pattern we flag: papers where the statistical test doesn't match the study design. Chi-squared on continuous data. T-tests when the data isn't normally distributed. Missing corrections for multiple comparisons. At more selective journals, these papers get desk-rejected before anyone looks closely. At PLOS ONE, they reach peer review, and reviewers catch them. The soundness-only model means methodological issues are the entire editorial decision; there's no "but the finding is exciting enough to overlook the stats problems."
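The test-selection mistake above has a cheap guard: check distributional assumptions before picking the test. A minimal sketch, assuming SciPy as the analysis stack and using illustrative data (the group values here are invented for the example):

```python
# Guard against "wrong test for the data type": check normality,
# then choose a t-test or its nonparametric alternative.
from scipy import stats

group_a = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 3.7]  # illustrative data
group_b = [5.0, 4.8, 5.3, 4.9, 5.1, 4.7, 5.2, 5.4]

# Shapiro-Wilk: a small p-value suggests the sample departs from normality
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))

if normal:
    result = stats.ttest_ind(group_a, group_b)       # parametric
else:
    result = stats.mannwhitneyu(group_a, group_b)    # nonparametric

print(type(result).__name__, round(result.pvalue, 4))
```

This is a deliberately simplified decision rule (normality tests have low power at small n, and design details matter more than any single check), but documenting even this much reasoning in your methods section pre-empts the most common reviewer objection.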
The second pattern: incomplete data availability. PLOS ONE enforces data sharing more strictly than almost any other journal. "Data available upon request" is not acceptable. You need a public repository with an accession number, or your raw data uploaded as supplementary files. We regularly flag manuscripts that are otherwise strong but haven't prepared their data package. This is a preventable desk rejection.
One insight from published editorial data that changes how you should read the 31% number: 77% of PLOS ONE reviewer reports comment on novelty or significance, despite the explicit policy not to evaluate those criteria. The editor is supposed to filter out those comments, but with 6,000+ volunteer academic editors of variable experience, enforcement is inconsistent. A well-prepared rebuttal letter that politely redirects reviewers to the soundness criteria can be the difference between acceptance and rejection.
What This Means for Your Submission
If your paper has solid methods, conclusions that match your data, and proper ethics documentation, your submission is competing in a subset of that 31% pool, not the whole thing. Papers rejected for missing ethics statements, out-of-scope submissions, and obvious methods failures don't really represent your competition.
Submit if:
- Your methods are rigorous but your finding isn't a landmark discovery
- You have null results, negative findings, or a replication study that the scientific record needs
- Open access is required by your funder, and cost matters ($2,477 is competitive in this tier)
- Your paper was rejected from a higher-tier journal for "lack of novelty" with no criticism of your methods
- Your data is ready for public sharing and your methods section is detailed enough for independent replication
Think twice if:
- Your paper has unresolved methodological issues flagged in a previous rejection
- Your data isn't ready to share publicly ("available upon request" doesn't meet PLOS ONE's strict requirements)
- Your institution's promotion criteria heavily weight impact factor; PLOS ONE's JCR IF of 2.6 may underrepresent your work's quality to hiring committees
- You need a fast turnaround in a niche field; the 45-day median masks longer waits of 60-90 days when qualified reviewers are scarce
For the full submission process including what the editors actually check, see the PLOS ONE submission process guide.
For additional context, see the PLOS ONE impact factor, PLOS ONE submission process, and Scientific Reports vs PLOS ONE guides.
Frequently asked questions

What is PLOS ONE's acceptance rate?
PLOS ONE currently accepts approximately 31% of submitted manuscripts. This has declined significantly from around 68% a decade ago as submission volume has grown and the author pool has shifted.

Why has the acceptance rate declined?
Two main reasons. First, submission volume grew massively as PLOS ONE became the established venue for open-access publishing. Second, more papers now arrive at PLOS ONE after being rejected from higher-tier journals, which means a larger fraction of the submission pool has known methodological issues or weak conclusions.

Does PLOS ONE evaluate novelty or impact?
No. PLOS ONE explicitly doesn't evaluate novelty, significance, or impact. Reviewers assess only whether the study is technically sound, the methods are appropriate, and the conclusions follow from the data. This is the journal's defining editorial policy.

Who makes editorial decisions at PLOS ONE?
Academic Editors at PLOS ONE are independent volunteer faculty, not staff editors. They handle editorial decisions and select peer reviewers. Over 6,000 Academic Editors serve across PLOS ONE's 200+ subject areas.

How long does a first decision take?
Median time to first decision is 45 days. Some papers get decided faster (under 30 days), others wait 60-90 days. See the full breakdown in the PLOS ONE review time guide.
Reference library
Use the core publishing datasets alongside this guide
This article answers one part of the publishing decision. The reference library covers the recurring questions that usually come next: whether the package is ready, what drives desk rejection, how journals compare, and what the submission requirements look like across journals.
Checklist system / operational asset
Elite Submission Checklist
A flagship pre-submission checklist that turns journal-fit, desk-reject, and package-quality lessons into one operational final-pass audit.
Flagship report / decision support
Desk Rejection Report
A canonical desk-rejection report that organizes the most common editorial failure modes, what they look like, and how to prevent them.
Dataset / reference hub
Journal Intelligence Dataset
A canonical journal dataset that combines selectivity posture, review timing, submission requirements, and Manusights fit signals in one citeable reference asset.
Dataset / reference guide
Peer Review Timelines by Journal
Reference-grade journal timeline data that authors, labs, and writing centers can cite when discussing realistic review timing.
Where to go next
Same journal, next question
- Is PLOS ONE a Good Journal? Predatory or Legitimate?
- PLOS ONE Submission Guide: What to Prepare Before You Submit
- PLOS ONE Review Time: What to Expect in 2026
- How to Avoid Desk Rejection at PLOS ONE
- PLOS ONE Pre-Submission Checklist: Are You Ready to Submit?
- PLOS ONE 'Under Review': What Each Status Means and Realistic Timelines