What a Pre-Submission Peer Review Actually Includes (And What It Doesn't)
Senior Researcher, Oncology & Cell Biology
Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
When researchers ask about pre-submission peer review, they usually have one question: what do I actually get for my money? The answer varies enormously depending on the service. A shallow review gives you general impressions and misses the things that cause rejections. A serious one covers six specific areas and tells you exactly what to fix before you submit.
Here's what each component looks like in practice, how to tell the difference between thorough and shallow feedback, how to use what you receive, and when the whole thing is worth it.
Pre-Submission Review vs. Editorial Peer Review: What's Different
These two processes are often confused, and the confusion matters.
Editorial peer review happens after you submit to a journal. The editors manage the process. You don't choose the reviewers. The reviewers don't know you'll see their comments (in single-blind review). The outcome is a formal decision: accept, revise, or reject. You can't argue with the process, only respond to the comments if you get the chance.
Pre-submission peer review happens before you submit. You choose the reviewer or service. The feedback is confidential between you and the reviewer. There's no formal decision; it's information, not a verdict. You decide what to do with it.
The key difference isn't just timing. It's that pre-submission review is advisory while editorial review is decisive. A pre-submission reviewer can say "this conclusion is overclaimed and an editor at your target journal will flag it" without rejecting your paper. You still get to fix it.
That's the whole value proposition: turning an eventual rejection into a revision before the journal ever sees it.
The 6 Components of a Serious Pre-Submission Review
1. Novelty Assessment
The reviewer reads your introduction and results, then answers: is this genuinely new, or does it extend existing work in a way that might read as incremental?
This isn't a literature review exercise. A good novelty assessment identifies the 2-3 papers that are closest to your work and explains specifically how your findings differ from them. It tells you whether your framing of novelty will hold up to scrutiny, and whether editors at your target journal are likely to find the advance compelling.
A shallow novelty assessment says "this work appears novel." A serious one says "Figure 3 in Smith et al. (2023, Nature Communications) shows a similar phenotype in a different cell type; you'll need to explain why your finding in this system adds something conceptually new, not just confirmatory."
2. Methodology Critique
This is where most papers have fixable problems. The reviewer goes through your methods section systematically and flags issues that peer reviewers commonly catch.
What a serious methodology critique covers:
- Sample sizes and statistical power (was this study adequately powered for the primary endpoint?)
- Controls (are there appropriate positive and negative controls for each key experiment?)
- Reproducibility (are methods described in enough detail to replicate, or are critical parameters missing?)
- Appropriate assays (is the technique you used the right one for the claim you're making?)
- Missing orthogonal validation (did you show the same result with two independent approaches, or rely on a single method?)
A surface-level critique says "methods appear appropriate." A serious critique says "the rescue experiment in Figure 4 uses only one concentration; reviewers at your target journal typically require a dose-response to support a mechanistic claim."
3. Figure-by-Figure Comments
Figures carry the scientific argument. Editors look at them before they finish the abstract. Reviewers build their assessment around them. And yet most pre-submission reviews skip them entirely.
A proper figure-by-figure review covers:
- Is the figure legible at the resolution submitted?
- Does each panel say what the caption claims it says?
- Are statistical comparisons correctly presented (error bars labeled, n values shown, appropriate test used)?
- Does the figure sequence tell a logical story, or does it only make sense after reading the text?
- Are there obvious missing panels that reviewers will request?
The last point is often the most valuable. If Figure 3 shows a knockdown effect and any reasonable reviewer is going to ask "what happens when you rescue expression?", you're better off knowing before submission than after.
4. Statistical Analysis Check
This is a dedicated component, separate from the methodology critique. The statistics check goes through your reported analyses specifically and evaluates:
- Are the statistical tests appropriate for the data type and distribution?
- Are p-values accompanied by effect sizes and confidence intervals?
- Is the correction for multiple comparisons applied correctly?
- Are the error bars reporting standard deviation or standard error (and is it stated clearly)?
- Are sample sizes (n) defined consistently throughout (biological vs. technical replicates)?
Statistical errors are among the most common reasons papers get flagged in peer review at rigorous journals. They're also among the most fixable before submission. A dedicated statistics check catches them before they appear in a rejection letter.
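The checklist above is easy to operationalize. Here's a minimal sketch in Python (NumPy and SciPy) of the kind of reporting a statistics check expects to see: a test that matches the data's assumptions, an effect size alongside the p-value, a multiple-comparison correction, and an explicit choice between SD and SEM. All data values and the three-comparison scenario are hypothetical, purely for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate values for a control and treated group
rng = np.random.default_rng(0)
control = rng.normal(10.0, 2.0, size=8)
treated = rng.normal(13.0, 2.0, size=8)

# Welch's t-test: does not assume equal variances between groups
t_stat, p = stats.ttest_ind(treated, control, equal_var=False)

# Cohen's d (pooled SD): an effect size to report alongside the p-value
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

# Bonferroni correction for three comparisons made in the same figure
# (two of the p-values here are hypothetical placeholders)
pvals = np.array([p, 0.03, 0.20])
p_adj = np.minimum(pvals * len(pvals), 1.0)

# SD vs. SEM: compute both, and state in the legend which the bars show
sd = treated.std(ddof=1)
sem = stats.sem(treated)  # SEM = SD / sqrt(n)
```

None of this is exotic; the point of the statistics check is that these pieces are so often missing or mislabeled, not that they are hard to compute.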
5. Journal Fit Analysis
This component is specific to pre-submission review; it has no equivalent in editorial peer review. The reviewer evaluates whether your paper is aimed at the right journal and explains why.
A serious journal fit analysis covers:
- Does this paper's scope match what this specific journal publishes?
- Is the novelty threshold appropriate for this journal's impact factor and editorial standards?
- Would this paper be a competitive submission at this journal, or would it be a long shot?
- If the fit is poor, what's the right alternative target?
This is the component that prevents the most frustrating kind of rejection: the desk rejection where you waited 9 days to find out your paper was never suited for the journal in the first place.
6. Revision Roadmap
The revision roadmap turns all of the above into an action list. It prioritizes the changes by importance: what's essential before submission, what's strongly recommended, and what's optional.
A revision roadmap isn't a summary of the critique. It's a prioritized list in plain language:
- Critical (do this before submitting): "Add rescue experiment to Figure 4"
- Recommended (strengthens the submission): "Shorten Introduction by one paragraph; the third paragraph is redundant"
- Optional (minor improvements): "Figure 2A legend is unclear; specify what the dashed line represents"
This is the format that makes feedback actionable. Without it, authors often implement the minor suggestions and miss the critical ones.
What a 10-Page Review Looks Like vs. a Shallow One
Length isn't the only indicator of quality, but it's a proxy. A thorough review of a typical 5,000-word manuscript with 6-8 figures will be 8-12 pages of structured feedback. Here's the difference in practice:
Shallow (2-3 pages):
- General comments about introduction length
- One or two typos flagged
- A note that "statistical methods appear appropriate"
- Summary: "This is a solid study and should be well-received at your target journal"
Thorough (8-12 pages):
- Specific claim in the introduction identified as overclaimed relative to existing literature, with citation
- Methodology: 4 specific experiments that need additional controls, each explained
- Figure 3: panel B needs quantification, n values are inconsistent across panels
- Statistics: two tests used incorrectly for the data type, one missing multiple comparisons correction
- Journal fit: "the IF 15.7 bar at Nature Communications requires a broader-interest framing; your current discussion is too specialized for NC and would fit better at Cell Reports or a specialist journal"
- Revision roadmap: 7 critical items, 4 recommended items
The shallow review makes you feel good. The thorough one makes your paper better.
How to Use the Feedback
A pre-submission review isn't a pass/fail. It's information. Here's how to approach it:
Start with the critical items. These are the things the reviewer believes would trigger rejection or a major revision request. Don't argue with them in your head; ask whether an independent expert at your target journal would say the same thing.
Evaluate everything, implement selectively. You're not obligated to do everything a pre-submission reviewer suggests. If the reviewer's recommended revision would take six months and you don't think it changes the conclusion meaningfully, you can skip it and be prepared to defend that decision in a response letter.
Use the journal fit assessment honestly. If the reviewer says your paper isn't competitive for your first-choice journal, take that seriously. A rejected paper costs you 4-7 months if you're submitting to Nature Communications, or 3-5 months if you're submitting to Cell Reports. An honest read before submission is worth the time saved.
What It Costs and When It's Worth It
Professional scientific pre-submission review ranges from $500 to $2,000 depending on manuscript complexity, field, and turnaround time. That's a real cost. Here's when it's worth it.
Worth it when:
- You're targeting a journal with an acceptance rate under 30%, where a desk rejection comes with no feedback
- You're an early-career researcher who hasn't yet developed a peer reviewer's eye for your own work
- You've already revised once and you're not sure what's still not working
- Your paper is on the borderline between two journals and you need an honest assessment of fit
Not worth it when:
- Your paper has fundamental scientific problems. Fix the science before worrying about positioning.
- You're submitting to a journal with an acceptance rate over 50%
- You've already received detailed peer review comments from another journal that you've addressed
See a sample review to evaluate the depth of feedback before committing. And if you're ready to talk through whether your manuscript is a good candidate for pre-submission review, get in touch.
Sources and Further Reading
- COPE guidelines on the responsibilities of peer reviewers: publicationethics.org
- Nature Portfolio reviewer guide: nature.com/nature/for-referees
The Bottom Line
Pre-submission peer review is most useful when it covers the specific issues editors and reviewers screen for at your target journal, not generic writing feedback. Our diagnostic is built around the criteria that editors at high-IF journals use at the desk stage, not just readability.