The Real Acceptance Rates Journals Don't Tell You, and How to Read Them
Acceptance rate sounds like the cleanest statistic in journal publishing. It isn't. The number is often estimated, rarely standardized, and easy to misread without context on desk rejection, scope, and post-review behavior.
Senior Researcher, Oncology & Cell Biology
Author context
Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
Researchers ask for acceptance rates because they want one clean number that tells them their odds.
Publishing does not work that way.
Acceptance rate is useful, but only if you treat it as a rough strategic signal rather than a literal prediction. Too many authors read "7% accepted" or "40% accepted" and assume that tells them exactly how likely their paper is to land. It doesn't, because the number usually hides at least four different questions:
- How many submissions were completely off-scope?
- How many were desk rejected?
- How many made it to external review?
- How many were transferred, resubmitted, or counted differently across years?
That is why the real acceptance-rate conversation is more interesting than the published one.
Short answer
In the current Manusights tracked-journal dataset, the median acceptance rate is 19% across 131 deduplicated journals with a usable acceptance signal.
That number alone already tells you something important: the middle of the submission market is more selective than many authors assume.
| Acceptance-rate band | Number of journals in tracked set | What it usually means |
|---|---|---|
| Under 10% | 29 | Prestige brands or journals with brutal triage |
| 10% to 19% | 35 | Selective but realistic if fit is strong |
| 20% to 39% | 35 | Competitive mainstream journals |
| 40% and above | 32 | Broad-scope, volume-heavy, or field-specific outlets |
If you only remember one thing, remember this: most journals are not "easy," and the published acceptance rate rarely tells you why.
What this report covers
This page uses the active journal dataset in apps/web/src/data/journals.ts as of March 23, 2026.
Methodology:
- start with the active tracked journal list in the repo
- deduplicate obvious aliases by normalized journal name
- keep journals with a usable acceptance-rate field
- summarize the resulting 131 unique journals
Some of these rates are official. Some are clearly marked tracked estimates based on editorial statements, publisher materials, and accumulated community reporting. That mixed provenance is exactly why authors should use acceptance rates intelligently rather than reverently.
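For concreteness, here is a minimal sketch of that pipeline in TypeScript. The record shape and field names are illustrative assumptions; the actual structure of apps/web/src/data/journals.ts may differ.

```ts
// Sketch of the summary pipeline, under an assumed record shape.
// Rates are percentages, e.g. 19 for 19%.
type Journal = { name: string; acceptanceRate?: number };

function summarize(journals: Journal[]) {
  // 1. Deduplicate obvious aliases by normalized journal name.
  const byName = new Map<string, Journal>();
  for (const j of journals) {
    const key = j.name.trim().toLowerCase().replace(/\s+/g, " ");
    if (!byName.has(key)) byName.set(key, j);
  }

  // 2. Keep only journals with a usable acceptance-rate field.
  const rates = [...byName.values()]
    .map((j) => j.acceptanceRate)
    .filter((r): r is number => typeof r === "number")
    .sort((a, b) => a - b);

  // 3. Median of the usable rates.
  const mid = Math.floor(rates.length / 2);
  const median =
    rates.length % 2 === 1 ? rates[mid] : (rates[mid - 1] + rates[mid]) / 2;

  // 4. Band counts matching the distribution table below.
  const bands = {
    under10: rates.filter((r) => r < 10).length,
    from10to19: rates.filter((r) => r >= 10 && r < 20).length,
    from20to39: rates.filter((r) => r >= 20 && r < 40).length,
    atLeast40: rates.filter((r) => r >= 40).length,
  };

  return { count: rates.length, median, bands };
}
```

Applied to the current tracked set, a summary like this yields the 131-journal count, the 19% median, and the band counts reported below.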
The headline distribution
Here is the current distribution from the tracked set:
| Metric | Value |
|---|---|
| Unique journals with usable acceptance data | 131 |
| Median acceptance rate | 19% |
| Journals under 10% | 29 |
| Journals from 10% to 19% | 35 |
| Journals from 20% to 39% | 35 |
| Journals at 40% or higher | 32 |
The distribution matters more than the single median.
Why? Because it shows the publishing market is not split into "elite journals" and "easy journals." There is a broad middle where acceptance is neither impossible nor generous, and where fit matters much more than authors like to admit.
The journals with the lowest tracked acceptance rates
The bottom of the distribution looks exactly like you would expect, but the details still matter.
| Journal | Tracked acceptance signal | What the number is really telling you |
|---|---|---|
| Nature Reviews Cancer | ~2% | Extremely selective review journal, not a standard primary-data outlet |
| JAMA | ~3% to 5% | Very high triage plus top-tier clinical significance bar |
| The Lancet | ~5% | Broad prestige plus enormous off-target submission volume |
| New England Journal of Medicine | ~5% | Tiny acceptance window for top clinical evidence |
| Cell Metabolism | ~5% | Strong specialty prestige and tight mechanistic bar |
| Nature Immunology | ~5% | High editorial triage in a competitive field |
| BMJ | ~5% | Broad clinical readership, strong policy and practice filter |
| Nature Reviews Molecular Cell Biology | ~5% | Again, review-journal selectivity is a different game |
| Science | ~7% | Multidisciplinary prestige plus harsh desk triage |
| Nature | <8% | Broad-significance screen dominates the process |
These numbers are real enough to be useful, but they still mislead if read lazily.
A 5% acceptance rate at a review journal is not equivalent to a 5% acceptance rate at a primary-research journal. A 7% rate at Science reflects not just scientific quality but also a flood of manuscripts that were never realistic fits in the first place.
The journals with the highest tracked acceptance rates
At the other end, high acceptance rates do not automatically mean weak quality.
| Journal | Tracked acceptance signal | Likely explanation |
|---|---|---|
| Astrophysical Journal | ~70% | Field norms, specialist scope, and high-volume astronomy workflow |
| RSC Advances | ~60% | Broad chemistry scope and society-publisher scale model |
| Scientific Reports | ~57% | Broad-scope, technically sound, high-volume editorial model |
| Molecules | ~50% | MDPI chemistry title with volume-driven open-access model |
| Sensors | ~50% | Broad engineering scope and high throughput |
| Nutrients | ~50% | Field reputation varies, but throughput is clearly high |
| Applied Sciences | ~50% | Broad, multidisciplinary, high-volume workflow |
| Remote Sensing | ~50% | Large submission base in a defined technical field |
| Materials | ~50% | Broad applied materials scope |
| Frontiers in Plant Science | ~50% | Large open-access specialist platform |
These numbers rarely mean "this journal accepts anything." More often they mean:
- the scope is broad enough that fewer submissions are obviously off-target
- the editorial model is built around throughput
- the journal is willing to review a larger share of technically sound papers
That still does not make all high-acceptance journals interchangeable. Editorial rigor, reviewer depth, and community reputation still vary a lot.
Acceptance rate is usually a desk-rejection story in disguise
This is the most common misunderstanding.
Authors hear "Nature accepts under 8%" and imagine external reviewers mercilessly rejecting 92% of submissions. That is not how it works. Nature's own editorial criteria page says only about 8% of submitted manuscripts are accepted and that most submissions are declined without peer review.
That tells you the true bottleneck is editorial triage, not referee attrition alone.
This is why you should read acceptance rate together with desk rejection rate whenever possible. In the current tracked dataset, some of the harshest triage journals show:
- Nature at roughly 70% desk rejection
- Science around 75%
- The Lancet around 80%
- JAMA around 75% to 80%
That changes how you should plan your submission. If the risk is mostly at the desk, then:
- title and abstract framing matter more
- the cover letter matters more
- scope fit matters more
- waiting for reviewer comments that will never come is a mistake
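To see how much the desk dominates, you can fold the two signals together into a rough conditional estimate. A minimal sketch, assuming the tracked figures above describe the same submission pool (a simplification, since the two rates are often measured over different periods):

```ts
// Rough conditional odds: P(accept | survived desk) ≈ acceptance / (1 - desk rejection).
// The inputs are the tracked estimates quoted above, not official statistics.
function oddsAfterDesk(acceptance: number, deskRejection: number): number {
  return acceptance / (1 - deskRejection);
}

// Nature: ~8% overall acceptance, ~70% desk rejection.
console.log(oddsAfterDesk(0.08, 0.70).toFixed(2)); // "0.27"

// The Lancet: ~5% overall acceptance, ~80% desk rejection.
console.log(oddsAfterDesk(0.05, 0.80).toFixed(2)); // "0.25"
```

Read that way, a paper that survives Nature's desk has roughly one-in-four odds in external review. The headline 8% and the post-desk 27% describe very different planning problems.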
If you want the triage layer directly, read the current desk rejection rates report.
Why many published acceptance rates are fuzzy
There are at least five reasons acceptance-rate numbers get fuzzy fast.
1. Journals often do not publish them
Nature, Science, and many flagship titles publish process information, but not always a standardized current acceptance figure on every public-facing page.
2. The denominator changes
Does the journal count transferred manuscripts? Resubmissions? Invite-only review content? Cascade submissions? Many do not make this transparent.
3. The period is unstable
A rate quoted in an editor interview from two years ago may not describe the current year, especially after a scope shift, a major news cycle, or a new submission surge.
4. Specialty and format effects matter
Reviews, methods papers, letters, and original research often sit in the same brand ecosystem but follow different funnels.
5. Community estimates fill the gaps
Once journals stop publishing numbers clearly, the market fills in the blanks with editor talks, society presentations, forum posts, and publisher marketing. Sometimes that is directionally useful. It is not always clean.
What authors should actually do with an acceptance rate
Use it for calibration, not prophecy.
Good uses:
- choosing a first-choice and second-choice submission sequence
- estimating how much desk-triage risk you are taking
- deciding whether it is worth paying for stronger pre-submission review
- understanding whether the journal is likely to be prestige-curated or volume-oriented
Bad uses:
- estimating your personal probability of acceptance as if the number were individualized
- comparing journals across fields as if acceptance rates were field-neutral
- assuming a higher rate always means an easier or worse journal
The useful question is never "What is the acceptance rate?" by itself.
The useful question is: What kind of journal process produces this rate?
A better framework than acceptance rate alone
Before you submit, score the journal on four dimensions:
| Dimension | What to ask |
|---|---|
| Editorial triage | How many papers die before review? |
| Reviewer depth | If sent out, how demanding are the external reviews likely to be? |
| Scope tightness | Is the journal rejecting on quality, fit, or both? |
| Post-review odds | Once reviewed, is the journal broadly constructive or still highly attritional? |
This is why a 15% acceptance journal can be smarter to target than a 35% acceptance journal. If the 15% journal is tight in scope but fair once the editor believes in the story, and your paper matches perfectly, your real odds may be better than the headline suggests.
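To make that concrete, here is the same conditional arithmetic applied to the comparison, with hypothetical desk-rejection figures chosen purely for illustration:

```ts
// Hypothetical comparison: a tight-scope 15% journal with heavy fit-based triage
// versus a broad 35% journal with light triage. All figures are assumptions.
const tightScope = { acceptance: 0.15, deskRejection: 0.65 };
const broadScope = { acceptance: 0.35, deskRejection: 0.20 };

const postDeskOdds = (j: { acceptance: number; deskRejection: number }) =>
  j.acceptance / (1 - j.deskRejection);

console.log(postDeskOdds(tightScope).toFixed(2)); // "0.43"
console.log(postDeskOdds(broadScope).toFixed(2)); // "0.44"
```

Under those assumptions, the two journals offer nearly identical odds once a paper reaches review. If the tight-scope journal's desk losses are mostly fit failures and your paper clearly fits, the 15% headline understates your real position.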
The number journals often do not want you focusing on
A lot of journals are comfortable letting authors fixate on impact factor because impact factor flatters the brand.
Acceptance rate creates a different kind of transparency. It forces authors to ask:
- how many papers are getting filtered out
- whether the journal is mostly selling selectivity or readership
- how much chance there is that this submission becomes a months-long dead end
That is why acceptance-rate reporting remains inconsistent. The number is useful, but it also changes author behavior in ways publishers do not always love.
Bottom line
Real acceptance rates are messier than authors want and more useful than some publishers let on.
The current tracked dataset shows a 19% median acceptance rate across 131 journals, with a wide spread from ultra-selective prestige brands to high-volume broad-scope outlets. The real lesson is not that some journals are hard and some are easy. It is that acceptance rate only becomes meaningful when paired with desk rejection, scope fit, and post-review behavior.
So use the number, but do not worship it. If you want a better decision, read the editorial process behind the percentage.
For the rest of that process, compare this page with the reports on average review times across 100 journals and how impact factors are calculated, and with Manusights AI Review, before you lock a submission sequence.
Reference library
Use the core publishing datasets alongside this guide
This article answers one part of the publishing decision. The reference library covers the recurring questions that usually come next: how selective journals are, how long review takes, and what the submission requirements look like across journals.
Dataset / reference guide
Peer Review Timelines by Journal
Reference-grade journal timeline data that authors, labs, and writing centers can cite when discussing realistic review timing.
Dataset / benchmark
Biomedical Journal Acceptance Rates
A field-organized acceptance-rate guide that works as a neutral benchmark when authors are deciding how selective to target.
Reference table
Journal Submission Specs
A high-utility submission table covering word limits, figure caps, reference limits, and formatting expectations.