Why Manuscripts Get Rejected: The Real Reasons by Stage and Discipline (2026)
Most manuscript rejections fall into predictable, fixable categories. This page breaks down why papers fail at desk review versus peer review, what failure patterns look like by discipline, and what the data actually shows about rejection rates by stage.
Author context
Associate Professor, Clinical Medicine & Public Health
Specializes in clinical and epidemiological research publishing, with direct experience preparing manuscripts for NEJM, JAMA, BMJ, and The Lancet.
Next step
Choose the next useful decision step first.
Use the guide or checklist that matches this page's intent before you ask for a manuscript-level diagnostic.
Most papers are not rejected because the research is bad. They are rejected because the paper failed to make its case to the right journal in the right way. That distinction matters: a genuine research failure is hard to fix before your next study, but a presentation or targeting failure is fixable in days.
Quick answer: Manuscripts fail at two distinct stages, for mostly different reasons. Desk rejections (which account for 30-90% of all submissions depending on the journal) are driven by scope mismatch, weak novelty signaling, and formatting failures. Peer review rejections are driven by methodology gaps, overstated conclusions, and statistical problems. Most desk rejections are preventable. Most peer review rejections are diagnosable before you submit. A pre-submission scan catches both classes of problem in about 60 seconds.
Not all rejections are preventable. Some papers are methodologically sound but land at the wrong journal. Some fields are moving fast enough that last year's contribution is this year's incremental step. Some editors make calls that a different editor at the same journal would reverse. Honest friction is part of the publication process. What this page covers is the large, predictable portion that researchers can actually address.
Why the stage of rejection changes everything
The rejection stage tells you more than the rejection reason.
A desk rejection in under a week means the editor decided your paper did not belong in this journal, before any expert read it carefully. The problem is fit, significance signaling, or readiness, not the quality of the underlying science.
A peer review rejection after six weeks means experts found your paper worth reviewing but concluded it had specific weaknesses. The problem is usually diagnosable from the reviewer comments, and often fixable.
A post-revision rejection (after you revised and resubmitted) means either the revisions did not address the core concern or the paper was borderline and another decision-maker moved it out. This is the most painful, and often the most useful, rejection to analyze.
Treating all three the same way is a mistake researchers make constantly.
Desk rejection: the three real causes
1. Scope mismatch
In a study of 898 rejected manuscripts from a psychiatric journal, 17.4% of desk rejections were attributed to being out of scope. At journals with tighter editorial mandates (high-impact generalist journals, subspecialty journals with strict topical focus), the proportion is higher.
Scope mismatch is not always obvious. A paper can appear to be in scope based on subject matter but still fail the fit test because:
- The journal publishes mechanistic work and your study is descriptive epidemiology
- The journal's readership is basic scientists and your clinical application study lacks translational depth
- The journal covers the topic but recently published a competing paper that your manuscript does not address
Elsevier's carbon science journal Carbon offers a documented example: papers that merely contain carbon materials but focus on properties unrelated to carbon science get desk-rejected, even though carbon appears throughout the manuscript. The journal wants carbon-science papers, not papers that use carbon. This editorial philosophy is specific, not obvious, and not detectable by reading the aims-and-scope paragraph alone.
The failure pattern that trips up experienced researchers: Submitting to the journal that "sounds right" rather than reading the last 12 months of published issues to verify editorial direction.
2. Novelty signaling failure
In the same psychiatric journal study, 51.8% of desk rejections cited lack of novelty or originality, making it the single most common cause of desk rejection by a large margin. This is also the most commonly misunderstood rejection reason.
Editors are not always saying the research itself lacks novelty. They are often saying the manuscript did not make the case for its novelty clearly enough.
An editor reading 40 submissions per week spends 2-5 minutes on initial triage. If the abstract, introduction, and conclusion do not make the contribution explicit and compelling in that window, the paper is desk-rejected on perceived novelty, even if the actual finding is genuinely new.
The failure pattern that trips up technically strong papers: A methods-first narrative structure that buries the contribution. The abstract describes the study design in paragraph one, the methods in paragraph two, and the finding in paragraph three. By then, the editor has already formed an impression.
A paper with a genuine finding should lead with that finding in the abstract's first sentence, position it against what was known, and make the gap explicit. "We show that X, overturning the assumption that Y" is a novelty claim. "In this study, we examined the role of X in Y using [method]" is a method description.
3. Formatting and readiness failures
These account for a smaller but entirely avoidable fraction of desk rejections. Common patterns:
- Figures not meeting resolution or formatting requirements
- Word count exceeding journal limits
- Missing sections the journal requires (ethics statement, data availability statement, CONSORT checklist for RCTs, PRISMA checklist for systematic reviews)
- Reference format that does not match the journal's style
- Simultaneous submission to multiple journals (violates most journal policies)
These are not interesting research problems. They are administrative failures that add friction for the editor and signal that the authors have not read the guide for authors carefully.
Peer review rejection: what experts are actually looking for
Once a manuscript clears desk review, the probability of eventual acceptance rises substantially. At journals where 70% of papers are desk-rejected, papers reaching reviewers have roughly 30-40% acceptance odds. But peer review rejection is still common, and the patterns are diagnosable.
Methodology gaps and causal overclaim
The most frequently cited peer review rejection reason in the Indian Journal of Psychological Medicine study was poor methodology elaboration (50.7% of post-peer-review rejections). In biomedical engineering journals, design flaws and unclear methodology are the leading post-review rejection causes.
The specific failure pattern that appears consistently across clinical and biomedical fields: an observational study using causal language when the design only supports correlation. A cross-sectional study concluding that "X causes Y" when the study can at most establish that X and Y are associated. A retrospective cohort study describing its results as "demonstrating" an effect when confounding is inadequately addressed.
This is not a minor wording problem. Peer reviewers in clinical medicine and epidemiology are trained to flag exactly this disconnect. A strong manuscript explicitly acknowledges design limitations, uses language calibrated to what the study can actually support ("our findings are consistent with," "suggest that," "we observed an association between"), and addresses confounding directly rather than in a brief limitations paragraph.
The specific pattern: An RCT-language conclusion ("this intervention improves outcomes") attached to an observational design. Any reviewer who has served on a clinical journal's editorial board flags this immediately.
Weak study rationale and contribution framing
In the same study, 45.2% of peer review rejections involved weak writing quality, and a large subset of those were problems with the study rationale, not just prose quality. The introduction sets up the gap the paper fills. If the gap is not real, or not described compellingly, or not connected to the actual findings, reviewers reject on "insufficient justification for the study."
A complete introduction establishes: what is known, what is not known, why the unknown part matters, and why this study addresses it. Studies that describe a background topic thoroughly without naming a specific gap get rejected at this step.
Statistical problems
Common statistical errors that cause peer review rejection include: inappropriate tests for the data structure (parametric tests applied to non-normally distributed data without justification), underpowered studies without sample-size justification, multiple comparisons without correction, and p-values misinterpreted as effect sizes.
A documented finding from published analysis of high-impact medical journal rejections: manuscripts with statistical co-authors are not measurably less likely to have statistical errors than those without. Having a statistician on the author list does not protect against statistical rejection if the methods and reporting are weak.
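One of the problems listed above, multiple comparisons without correction, is easy to quantify. A minimal Python sketch (illustrative only, not tied to any specific study) of how the familywise false-positive rate grows with the number of uncorrected tests, and how a Bonferroni threshold restores control:

```python
# Familywise error rate (FWER): the chance of at least one false
# positive across m independent tests, each run at threshold alpha.
def familywise_error_rate(alpha: float, m: int) -> float:
    return 1 - (1 - alpha) ** m

# Bonferroni correction: test each hypothesis at alpha / m so the
# familywise rate stays at or below alpha.
def bonferroni_threshold(alpha: float, m: int) -> float:
    return alpha / m

alpha, m = 0.05, 20  # 20 outcome comparisons, a common manuscript pattern
print(f"Uncorrected FWER for {m} tests: {familywise_error_rate(alpha, m):.2f}")
print(f"Bonferroni per-test threshold:  {bonferroni_threshold(alpha, m):.4f}")
```

With 20 uncorrected tests at p < 0.05, the chance of at least one spurious "significant" result is about 64%. That gap between the nominal and the actual error rate is exactly what a biostatistician reviewer writes the two-page report about.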
Rejection rates by stage: the data
| Stage | Typical range | Top-tier journals |
|---|---|---|
| Desk rejection | 30-50% | 60-90% |
| Post-peer-review rejection | 15-30% | 30-50% of papers reaching review |
| Post-revision rejection | 10-20% of revised papers | Higher at top journals |
| Overall rejection rate | 60-80% | 90-95%+ |
What the numbers mean for your strategy:
If most rejections happen at the desk stage, and most desk rejections are about scope and novelty signaling, then the highest-leverage pre-submission activity is verifying journal fit and sharpening the contribution framing, not revising the methods section again.
If you are getting desk rejections consistently, the problem is targeting or signaling. If you are getting peer review rejections consistently, the problem is likely methodology, statistical rigor, or claim strength.
At Nature, desk rejection rates reach 60-75%. Of papers that reach reviewers, roughly 38% are eventually accepted. At NEJM, desk rejection is 80-90%, but post-review acceptance is closer to 50% for papers that get that far. Editorial triage does most of the filtering at flagship journals: getting past the editor is the first and often hardest gate.
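These stage-wise figures combine multiplicatively. A quick sketch of the arithmetic, using the approximate Nature and NEJM numbers above as inputs:

```python
# Overall acceptance = P(pass desk review) * P(accept | reached review).
def overall_acceptance(pass_desk: float, accept_given_review: float) -> float:
    return pass_desk * accept_given_review

# Nature-like profile: ~60-75% desk rejection, ~38% post-review acceptance.
nature_low = overall_acceptance(1 - 0.75, 0.38)   # worst case, ~9.5%
nature_high = overall_acceptance(1 - 0.60, 0.38)  # best case, ~15%

# NEJM-like profile: ~80-90% desk rejection, ~50% post-review acceptance.
nejm_low = overall_acceptance(1 - 0.90, 0.50)     # ~5%
nejm_high = overall_acceptance(1 - 0.80, 0.50)    # ~10%

print(f"Nature overall acceptance: {nature_low:.1%} - {nature_high:.1%}")
print(f"NEJM overall acceptance:   {nejm_low:.1%} - {nejm_high:.1%}")
```

Both ranges land in the 5-15% band, consistent with the 90-95%+ overall rejection rates in the table above, and they show why the desk stage dominates: halving your desk-rejection risk moves the overall odds far more than marginal gains at review.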
Rejection patterns by discipline
Generic rejection advice treats all manuscripts the same. In our pre-submission review work across 200+ journals spanning biomedical, chemistry, physics, and engineering disciplines, we see failure patterns in each field that are genuinely discipline-specific, not just generic "methodology problems."
| Field | Primary desk rejection driver | Primary peer review rejection driver |
|---|---|---|
| Clinical medicine | Wrong journal tier for study design; observational study submitted to RCT-focused journals | Causal overclaim; inadequate confounding control; missing CONSORT/STROBE reporting items |
| Basic biomedical science | Insufficient mechanistic depth; in vitro-only data submitted to journals requiring in vivo validation | Lack of mechanistic novelty; figure panels that suggest cherry-picking |
| Chemistry | Outside journal's chemical scope (e.g., applied materials submitted to synthesis-focused journals) | Incomplete characterization; claims not supported by the analytical data presented |
| Physics | Letter-format work submitted to full-article journals without sufficient methodology detail | Theoretical work without experimental confirmation; or experimental work compared only to theory, not to alternative models |
| Engineering | Scope positioning (applied work submitted to theory-focused venues and vice versa) | Validation gap: lab-scale results generalized to application without appropriate testing; missing comparison to state-of-the-art baselines |
| Social sciences and psychology | Studies that are underpowered by field standards (n < 100 for survey-based studies at major journals) | Lack of pre-registration disclosure; HARKing concerns (hypothesizing after results are known) |
A specific pattern from clinical medicine work: a CONSORT checklist in which half the items say "see Methods" without page numbers or paragraph references. Journals with explicit CONSORT requirements expect the checklist to be completed as a document, not treated as a compliance box. A reviewer who is a clinical trialist will flag this in the first paragraph of their review.
A specific pattern from basic biomedical science: A figure panel with 12 representative images from what was presumably a larger dataset, with no statement of how many independent experiments the figure represents, no statistical test on the representative result, and an n in the methods that does not match the figure legend. Reviewers at journals like Cell, Nature Cell Biology, or Journal of Biological Chemistry are specifically trained to look for this disconnect.
What pre-submission review actually catches
In our pre-submission review work with manuscripts targeting journals across these disciplines, the failure patterns that generate the most consistent preventable rejections are the same ones described above. Not because they are hard to find, but because authors are too close to their own work to see them.
The novelty framing problem is invisible to the authors because they know the contribution is real. The causal language problem is invisible because the authors believe their data supports the conclusion. The scope mismatch is invisible because the journal name sounds right.
A structured pre-submission review, whether automated or expert-led, applies the same lens an editor uses in the first five minutes. It finds the scope fit problem before you wait three days for a desk rejection. It finds the methods-section gap before you wait six weeks for a reviewer to find it. It finds the statistical concern before a biostatistician reviewer writes a two-page report about it.
Run a free readiness scan at Manusights. It takes about 60 seconds and checks your manuscript against the editorial standards of your target journal.
Submit if / Think twice if
Submit if:
- Your abstract's first sentence states the finding, not the topic
- You have read at least 10 recent issues of your target journal to verify editorial direction
- The manuscript explicitly names the gap it fills and why the gap matters
- Every statistical test in your methods matches the data structure
- All required reporting checklists (CONSORT, PRISMA, STROBE, ARRIVE) are completed with page numbers
Think twice if:
- Your rationale for the target journal is "it's a good journal in my field" rather than a specific recent paper you can cite
- Your abstract uses "demonstrates that" or "proves that" for an observational study
- The novelty statement is "this is the first study to examine X" without explaining why X matters
- You have n=43 patients and the journal's recent original articles have n=200-500
- Any figure panel uses representative images from an unspecified total sample size
Action plan: what to do after a rejection
A rejection is data. Use it.
If desk rejected within 3-7 days: This is almost always a fit or framing problem. If the editor named a reason, believe it. If no reason was given, the most probable cause is scope mismatch. Check whether 5 recent papers in that journal resemble your paper in design and audience. If they do not, retarget before resubmitting anywhere.
If desk rejected after 2-3 weeks: The editor read more carefully and found a specific disqualifier: overclaimed conclusions, incomplete ethics documentation, an obvious methods gap, or a scope problem that only became clear on closer reading. Fix the specific issue before resubmitting. Changing the target journal without addressing the underlying problem produces the same rejection.
If rejected after peer review: Separate reviewer comments into three categories: (1) factual errors you can correct, (2) experiments or analysis you can add, and (3) fundamental objections to your study design or conclusions. Category 3 items require a decision: can they be addressed with current data, or does the paper need different framing or a different journal? Resubmitting quickly without addressing category 3 leads to the same rejection at the next journal.
Timeframe guidance: For desk rejections, retarget and resubmit within 48 hours to 2 weeks once you have addressed the disqualifying issue. For peer review rejections where the feedback is addressable, aim to resubmit within 4-6 weeks. For rejections with major methodological concerns, take the time needed rather than submitting a weaker paper to a lower-tier journal.
Readiness check
Run the scan while the topic is in front of you.
See score, top issues, and journal-fit signals before you submit.
Sources
- Manuscript Rejection: Causes and Remedies (PMC)
- Why Do Manuscripts Get Rejected? Content Analysis from the Indian Journal of Psychological Medicine (PMC)
- Eight Reasons I Rejected Your Article, Peter Thrower (Elsevier)
- Paper Rejection: Common Reasons (Elsevier Language Services)
- Rejection of Manuscripts: Problems and Solutions (PMC)
- Common Statistical and Research Design Problems in Manuscripts Submitted to High-Impact Medical Journals (PMC)
- Rejected Papers in Academic Publishing: Turning Negatives into Positives (Wiley/Learned Publishing)
- Reasons for Peer Review Rejection (CW Authors)
Reference library
Use the core publishing datasets alongside this guide
This article answers one part of the publishing decision. The reference library covers the recurring questions that usually come next: whether the package is ready, what drives desk rejection, how journals compare, and what the submission requirements look like across journals.
Checklist system / operational asset
Elite Submission Checklist
A flagship pre-submission checklist that turns journal-fit, desk-reject, and package-quality lessons into one operational final-pass audit.
Flagship report / decision support
Desk Rejection Report
A canonical desk-rejection report that organizes the most common editorial failure modes, what they look like, and how to prevent them.
Dataset / reference hub
Journal Intelligence Dataset
A canonical journal dataset that combines selectivity posture, review timing, submission requirements, and Manusights fit signals in one citeable reference asset.
Dataset / reference guide
Peer Review Timelines by Journal
Reference-grade journal timeline data that authors, labs, and writing centers can cite when discussing realistic review timing.