Journal Desk Rejection Rates Report 2026: What the Current Manusights Dataset Shows
Desk rejection is not a side statistic. In many journals it is the main editorial filter. This report looks at the current Manusights journal dataset to show where that filter is harshest and what authors should infer from it.
Author context
Senior Researcher, Oncology & Cell Biology. Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
Desk rejection is often treated like a minor prelude to peer review. In reality, for many journals it is the main decision gate.
That matters because authors still routinely plan submissions around impact factor, scope prestige, or APCs while barely thinking about editorial triage risk. The result is predictable: papers are rejected before reviewers ever see them, and the authors interpret that as bad luck rather than as a visible feature of the journal's operating model.
This report is meant to make that operating model harder to ignore.
What this report covers
This page summarizes the current Manusights tracked-journal dataset as of March 23, 2026.
Methodology:
- start with the active journal dataset in apps/web/src/data/journals.ts
- deduplicate obvious journal aliases by normalized journal name
- keep only entries with a usable desk-rejection-rate field in recentStats
- summarize the resulting 94 unique journals
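The dedupe-and-filter steps above can be sketched in TypeScript. The entry shape and field names here (`name`, `recentStats.deskRejectionRate`) are assumptions about what apps/web/src/data/journals.ts contains, not its actual schema:

```typescript
// Hypothetical shape of an entry in apps/web/src/data/journals.ts;
// the real fields may differ.
type JournalEntry = {
  name: string;
  recentStats?: { deskRejectionRate?: string };
};

// Normalize names so obvious aliases collapse to one key.
const normalize = (name: string): string =>
  name.toLowerCase().replace(/[^a-z0-9]+/g, " ").trim();

function usableJournals(entries: JournalEntry[]): JournalEntry[] {
  const seen = new Map<string, JournalEntry>();
  for (const entry of entries) {
    // keep only entries with a usable desk-rejection-rate field
    if (!entry.recentStats?.deskRejectionRate) continue;
    const key = normalize(entry.name);
    if (!seen.has(key)) seen.set(key, entry); // first occurrence wins
  }
  return [...seen.values()];
}
```

The point of normalizing before deduplicating is that aliases often differ only in punctuation or casing, so a looser key catches them without merging genuinely distinct titles.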
This is not a claim that every number is officially published by the journal itself. Some are official, some are curated internal estimates grounded in author reports and publisher-facing materials already tracked in the repo. That distinction matters, and you should treat this page as a strategic dataset, not a universal census.
Headline findings
The core numbers are simple:
| Metric | Value |
|---|---|
| Unique journals with usable desk-rejection data | 94 |
| Median desk rejection rate | 30% |
| Journals below 20% desk rejection | 11 |
| Journals from 20% to 29% | 30 |
| Journals from 30% to 49% | 22 |
| Journals from 50% to 69% | 18 |
| Journals at 70% or higher | 13 |
The most important number here is the median. A 30% median desk rejection rate means editorial triage is not an edge case. It is a routine part of the submission landscape even before you get to the most selective brands.
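The median and bucket counts can be reproduced from the parsed rates. A minimal sketch, assuming the parsing rule of taking the midpoint of a range string like "~75-85%" (that rule, and the function names, are illustrative, not from the repo):

```typescript
// Parse a rate string like "~75-85%" or "~30%" into a single percent
// by averaging the numbers it contains. This parsing rule is an assumption.
function parseRate(raw: string): number {
  const nums = (raw.match(/\d+(\.\d+)?/g) ?? []).map(Number);
  return nums.reduce((a, b) => a + b, 0) / nums.length;
}

// Standard median: middle value, or mean of the two middle values.
function median(values: number[]): number {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Bucket boundaries used in the headline table above.
function bucket(rate: number): string {
  if (rate < 20) return "below 20%";
  if (rate < 30) return "20% to 29%";
  if (rate < 50) return "30% to 49%";
  if (rate < 70) return "50% to 69%";
  return "70% or higher";
}
```

Midpointing a range is a deliberate simplification; it keeps the summary table stable even when the underlying estimate is fuzzy.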
The journals with the harshest triage
The top end of the distribution is exactly where most authors would expect it to be, but the pattern is still worth seeing clearly.
| Journal | Desk rejection rate in tracked dataset | Tracked acceptance signal |
|---|---|---|
| Nature Reviews Cancer | ~90-95% | ~2-5% |
| Nature Reviews Molecular Cell Biology | ~85-90% | ~5-10% |
| The Lancet | ~80% | <5% |
| New England Journal of Medicine | ~80% | <5% |
| Science | ~75% | <7% |
| JAMA | ~75-80% | <5% |
| Cancer Cell | ~75-85% | ~8-10% |
| Neuron | ~75-80% | ~8% |
| The Lancet Oncology | ~75% | ~8% |
| Nature | ~70% | <8% |
This is the editorial pyramid most researchers already feel intuitively. The report just makes the intuition numeric.
The journals with lighter desk triage
At the lower end, the story changes.
| Journal | Desk rejection rate in tracked dataset | Tracked acceptance signal |
|---|---|---|
| Angewandte Chemie - International Edition | ~5-10% | ~8% |
| Chemical Engineering Journal | ~5-10% | ~30% |
| International Journal of Molecular Sciences | ~5-10% | ~30% |
| Physical Review B | ~10-15% | ~35% |
| Journal of the American Chemical Society | ~15-20% | ~8% |
| RSC Advances | ~15-20% | ~60-70% |
| Sensors | ~15-25% | ~50-60% |
| Nutrients | ~15-25% | ~50-60% |
| Applied Sciences | ~15-25% | ~50-60% |
| Remote Sensing | ~15-25% | ~50-60% |
The key mistake would be to read these as "easy journals." Lower desk rejection does not mean low standards. It often means the journal is using peer review, not editorial gatekeeping, as the main sorting mechanism.
What the distribution really says
This report points to three broad journal behaviors.
1. Prestige triage journals
These journals reject aggressively at the editor stage because reviewer attention is scarce and their scope bar is extremely high.
Typical characteristics:
- broad or flagship identity
- harsh novelty filter
- strong preference for cross-field consequence
- short time to first editorial decision
Nature, Science, NEJM, The Lancet, and the major review brands all fit here.
2. Mixed-model selective journals
These journals still desk reject heavily, but not at the most extreme rates.
Typical characteristics:
- meaningful editorial triage
- realistic path to review for strong-fit work
- reviewer burden still used heavily after screening
This is where many specialty flagships live.
3. Peer-review-forward journals
These journals reject less at the desk because the editorial screen is lighter and more of the sorting happens after reviewer input.
This can be good for authors who want a more review-mediated outcome, but it often comes with slower timelines or broader acceptance distributions.
The strategic mistake authors make
Authors often treat desk rejection as a random insult. It usually is not.
A desk rejection is often the predictable result of one of four things:
- the paper is below the journal's significance bar
- the paper is above the journal's technical baseline but outside its scope logic
- the claims are too ambitious relative to the evidence
- the manuscript does not help the editor see why reviewer time should be spent
That means desk rejection rates are more than statistics. They are clues about what the journal is optimizing for.
What a 30% median actually means for planning
If the median tracked journal desk rejects around 30% of submissions, then journal strategy should include desk-rejection planning by default.
That means:
- have a second-choice journal before you submit the first time
- calibrate the cover letter to the editor's scope logic, not just the science
- decide honestly whether your paper is a prestige-target manuscript or a strong specialist manuscript
- pressure-test the abstract and first two figures before submission
Too many teams still do this planning only after the rejection arrives.
Desk rejection and review speed often travel together
One of the more interesting practical patterns in the tracked dataset is that harsh desk triage often accompanies fast first editorial decisions.
That makes sense operationally. Journals that protect reviewer attention aggressively can often decline quickly. Journals that desk reject less may route more manuscripts deeper into editorial and reviewer workflows.
For the timing side of the story, read average review times across 100 journals in 2026.
What this report does not prove
This page does not prove:
- that high desk rejection always means higher quality
- that low desk rejection means a soft journal
- that the tracked number is official for every journal
It also does not tell you whether your specific paper will be desk rejected.
A 75% desk rejection rate is still compatible with a strong-fit manuscript surviving the screen. A 15% desk rejection rate does not protect a weakly framed paper.
How to use this report well
Use desk rejection rates as a triage-risk signal.
That means:
- if the rate is very high, your framing and journal-fit logic must be unusually sharp
- if the rate is moderate, you still need to clear editorial relevance and readability
- if the rate is low, do not become careless about methods, reporting, or scope
This dataset is most useful when combined with:
- journal fit
- review speed
- acceptance rate
- your own manuscript's true readiness
That is why this report pairs naturally with how to get published in a top journal, major revision vs minor revision, and cost of desk rejection.
Before you place a prestige bet, it is worth running Manusights AI Review so the paper is being judged by more than your internal optimism.
Verdict
The desk rejection rate is not a vanity stat. It is often the clearest sign of how much of a journal's selectivity happens before review.
In the current Manusights dataset, the median tracked desk rejection rate is 30%, and the top of the market is far harsher than that. Authors who ignore that reality are often planning only for peer review when the real first contest is editorial triage.