What Do Peer Reviewers Look For? The Actual Criteria by Journal Tier (2026)
Peer review criteria aren't the same across journals. At Nature and Cell, reviewers are gatekeeping significance. At PLOS ONE, they're checking soundness only. Here's what your target journal's reviewers are actually evaluating.
Associate Professor, Clinical Medicine & Public Health
Author context
Specializes in clinical and epidemiological research publishing, with direct experience preparing manuscripts for NEJM, JAMA, BMJ, and The Lancet.
Readiness scan
Find out if this manuscript is ready to submit.
Run the Free Readiness Scan before you submit. Catch the issues editors reject on first read.
How to use this page well
These pages work best when they behave like tools, not essays. Use the quick structure first, then apply it to your exact journal and manuscript situation.
| Question | What to do |
|---|---|
| Use this page for | Matching your manuscript's framing to the criteria your target journal's reviewers actually apply. |
| Start with | Identify your target journal's tier, then read that tier's section first. |
| Common mistake | Applying one generic criteria list to every journal instead of the tier-specific criteria. |
| Best next step | Do a tier-calibrated read of your abstract and Methods section before you finalize the submission. |
Quick answer: Peer reviewers at top-tier journals (Nature, Cell, NEJM) are primarily evaluating significance and novelty. Reviewers at mid-tier journals (IF 5-15) are mainly evaluating methodology and data quality. Reviewers at megajournals like PLOS ONE are explicitly instructed to evaluate soundness only, not novelty. These are three different jobs. Most advice about peer review treats them as one.
The standard advice on what peer reviewers look for goes like this: originality, methodology, clarity, significance to the field. You can find this list on Enago, Editage, and most journal editorial pages.
That list isn't wrong. It's just incomplete in a way that trips up most researchers. The same paper, evaluated by reviewers at two different journals, will be rejected for entirely different reasons. The question "what do peer reviewers look for?" only makes sense if you specify which journal tier you're asking about.
Before you submit, run your manuscript through a pre-submission review at Manusights to get a readiness score and a journal-fit verdict that accounts for tier-specific criteria, not just a generic checklist.
Reviewer criteria at a glance
| Journal tier | Examples | Primary review question | Secondary question |
|---|---|---|---|
| IF 30+ (elite) | Nature, Cell, NEJM, Lancet | Does this change how the field thinks? | Is the evidence strong enough to support it? |
| IF 10-30 (high-impact) | Nature Communications, JACS, PNAS | Is this a meaningful advance in the field? | Are methods and controls adequate? |
| IF 5-10 (mid-tier) | PLOS Biology, Genome Biology | Are the methods rigorous and conclusions supported? | Is the literature complete? |
| IF 2-5 (selective broad) | Scientific Reports, BMC series | Is the methodology sound? | Are reporting checklists complete? |
| Megajournals (IF 1-3) | PLOS ONE, Frontiers series | Is the science technically sound? | Novelty is explicitly not evaluated |
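If it helps to keep the table handy while drafting, it can be treated as a simple lookup: given a target tier, which question should your abstract answer first? A minimal sketch, assuming nothing beyond the table above (the dictionary and function names are illustrative, not part of any journal's actual guidelines):

```python
# Illustrative sketch: the tier table above as a lookup structure.
# Tier labels and questions come from the table; the names
# REVIEW_QUESTIONS and primary_question are assumptions for this sketch.
REVIEW_QUESTIONS = {
    "elite":       ("Does this change how the field thinks?",
                    "Is the evidence strong enough to support it?"),
    "high-impact": ("Is this a meaningful advance in the field?",
                    "Are methods and controls adequate?"),
    "mid-tier":    ("Are the methods rigorous and conclusions supported?",
                    "Is the literature complete?"),
    "selective":   ("Is the methodology sound?",
                    "Are reporting checklists complete?"),
    "megajournal": ("Is the science technically sound?",
                    "Novelty is explicitly not evaluated"),
}

def primary_question(tier: str) -> str:
    """Return the first question a reviewer at this tier asks."""
    return REVIEW_QUESTIONS[tier][0]
```

Reading your own abstract against `primary_question` for your target tier is the fastest version of the tier-calibrated check this page recommends.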
The tier-stratified view no one explains
Peer reviewer criteria differ meaningfully based on where a journal sits in the tier hierarchy. Here is what each tier actually evaluates, based on official reviewer guidelines and patterns from 750+ journal-specific desk-rejection and reviewer feedback records.
Tier 1: High-impact journals (Nature, Cell, Science, NEJM, Lancet, IF 30+)
Reviewers here are gatekeeping one thing above all else: is this paper going to change how people think about the field?
At Nature, manuscripts are rejected outright "on grounds of specialist interest, lack of novelty, or insufficient conceptual advance" before reviewers even assess the methodology in detail. If the paper passes that significance threshold, reviewers then verify that the evidence is strong enough to support the claims. Technical rigor is assumed, not the primary question.
Cell Press reviewers are asked explicitly whether the paper "establishes a new principle or describes a variation on known biology." The distinction matters. Variation on known biology gets rejected. New principle gets reviewed seriously.
At NEJM, the operating bar is clinical immediacy: the paper must present evidence that could change what a practicing clinician does tomorrow. Not eventually. Not in principle. Tomorrow.
What this means in practice: At this tier, underselling your contribution is a common and expensive mistake. A hedged "may contribute to" abstract at a Nature-tier journal reads as insufficient advance. The reviewers are asking themselves a single question: why does this matter now? Your paper needs to answer it in the first paragraph.
Named failure pattern: A paper submitted to Nature that presents a mechanistic finding as "building toward" a therapeutic application rather than as a complete mechanistic insight will struggle. The reviewers don't want the building-toward version. They want the complete mechanistic insight, even if therapeutic application is years away.
Tier 2: Mid-tier disciplinary journals (IF 5-15)
At journals in this range, the significance question is still relevant, but it no longer automatically dominates the review. Reviewers spend most of their report on methodology.
PMC research on rejection reasons at mid-tier journals documents that methodological concerns (inadequate controls, weak statistical analysis, conclusions that outrun the data) account for the majority of post-peer-review rejections in this tier. Novelty concerns are more common at desk rejection but less dominant in reviewer reports.
Elsevier's structured reviewer guidelines for this tier explicitly ask reviewers to evaluate whether statistical methods are appropriate and well described, whether conclusions are appropriately supported, and whether the literature section identifies key citations or leaves them missing.
What this means in practice: Your paper can be well-motivated and genuinely novel and still get rejected because a reviewer's report focuses on your sample size justification, your choice of statistical test, or the absence of a control condition. These aren't gatekeeping questions at Nature. At IF 10, they are.
Named failure pattern: In our pre-submission review work with manuscripts targeting mid-tier clinical and biomedical journals, the most consistent post-review rejection trigger is conclusions that outrun the data. The abstract claims X. The results show a trend toward X with a borderline p-value. A reviewer with a methods lens catches that immediately. The fix isn't to downgrade the conclusion after rejection. It's to align the abstract and results section before submission.
Tier 3: Megajournals and open-access broad journals (PLOS ONE, Scientific Reports, IF 2-5)
This tier has the clearest documented standard, and it's also the one most misunderstood by researchers submitting for the first time.
PLOS ONE explicitly instructs reviewers that "the journal does not use peer review to determine whether research reaches a threshold of importance" and that "ground-breaking significance is not required for publication." Reviewers are asked to evaluate seven criteria, all of which are soundness-based: original results, not published elsewhere, technically performed to a high standard, conclusions supported by data, written in standard English, meeting ethics standards, adhering to reporting guidelines.
Novelty is not on the list. Trying to impress PLOS ONE reviewers with an impact argument wastes space and can read as trying to paper over methodological gaps.
Named failure pattern: Papers submitted to PLOS ONE that spend two paragraphs of the introduction establishing novelty, then arrive at a Methods section that doesn't fully describe the statistical approach or document reporting-guidelines compliance, are rejected on exactly the criteria one extra pass would have fixed. The reviewer doesn't care about the impact argument. They do care that your CONSORT checklist is complete.
What all reviewers evaluate, regardless of tier
Across all tiers, three things matter regardless of journal:
1. Conclusions match evidence. Every tier, every reviewer. The abstract's claims have to be supported by the actual data. This is the single most consistent rejection reason documented in the peer review literature, from PMC studies of rejection reports to Elsevier's structured reviewer forms.
2. Methods are reproducible. Can another lab repeat this? Reviewers at every tier check whether the methodology section contains enough detail to replicate the work. This is a baseline expectation, not an advanced criterion.
3. The paper is in scope. Scope mismatch at the desk is so common it barely qualifies as peer review feedback, but scope mismatches still surface in reviewer reports. A reviewer who has to spend time explaining that your work belongs in a different journal is not giving you useful revision guidance.
What the big editing services get wrong about peer review
Sites like Enago and AJE list the same five criteria: significance, methodology, novelty, ethics, writing quality. That list is accurate as an average across all journals. It is not useful for any specific journal.
The gap: if you're submitting to a Nature-tier journal, "methodology" is table stakes. If you're submitting to PLOS ONE, "significance" is irrelevant. Treating the list as universal leads researchers to either over-invest in significance arguments where reviewers don't care, or under-invest in methodological detail where reviewers scrutinize it hardest.
Submit if / Think twice if
Submit to a high-impact journal (IF 30+) if:
- Your paper establishes a principle, not a variation on one
- The advance is comprehensible to researchers outside your immediate subfield
- The evidence is complete enough to support strong mechanistic or clinical claims
Think twice about a high-impact journal if:
- The paper is a solid extension of recent work with a known mechanism
- The study is well-executed but limited to one model system or one patient population
- Your abstract hedges with "may suggest" or "contributes toward"
Submit to a mid-tier journal (IF 5-15) if:
- Your methodology is airtight and defensible in detail
- Your conclusions are exactly calibrated to your data
- The novelty is clear within the field, even if it won't reorganize it
Think twice about a mid-tier journal if:
- Your Methods section is thin or leaves out sample size justification
- Your statistical analysis would require significant revision under scrutiny
- Your paper was recently rejected from a high-tier journal with mixed reviewer feedback (the methodology concerns will resurface)
Submit to PLOS ONE-tier if:
- Your work is technically sound and adds to the scientific record, even without high impact
- You want reproducibility and completeness rewarded over novelty
Think twice about PLOS ONE if:
- Your reporting guidelines compliance (CONSORT, PRISMA, ARRIVE) isn't complete
- Your data availability statement is vague or missing
- You're counting on impact framing to carry a paper with methodological gaps
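The Submit if / Think twice if lists above can also be sketched as a pre-submission checklist. This is an illustrative sketch under stated assumptions: the field names, the `Manuscript` class, and the `tier_warnings` function are hypothetical, not any journal's actual criteria; only the warning logic mirrors the lists above.

```python
from dataclasses import dataclass

@dataclass
class Manuscript:
    # Hypothetical fields mirroring the checklist above; the names are
    # assumptions for this sketch, not real journal criteria.
    establishes_new_principle: bool
    methods_fully_described: bool
    conclusions_match_data: bool
    reporting_checklist_complete: bool

def tier_warnings(ms: Manuscript, target: str) -> list[str]:
    """Return 'think twice' flags for a target tier, per the lists above."""
    flags = []
    if target == "elite" and not ms.establishes_new_principle:
        flags.append("Variation on known work, not a new principle")
    if target == "mid-tier" and not ms.methods_fully_described:
        flags.append("Thin Methods section will draw reviewer scrutiny")
    if target == "megajournal" and not ms.reporting_checklist_complete:
        flags.append("Incomplete reporting checklist (CONSORT/PRISMA/ARRIVE)")
    if not ms.conclusions_match_data:
        flags.append("Conclusions outrun the data (flagged at every tier)")
    return flags
```

Note that the conclusions-match-data check runs regardless of target tier, which mirrors the point above: that criterion is universal, while the others are tier-specific.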
Readiness check
Run the scan to see how your manuscript scores on these criteria.
See score, top issues, and what to fix before you submit.
How to use this before you submit
In our pre-submission review work with manuscripts across clinical medicine, biomedical research, and public health, the most time-efficient intervention is a tier-calibrated read of your abstract and results section before submission. Specifically:
For high-tier targets: read your abstract as if you were a Nature editor seeing 50 papers this week. Does it answer "why does this matter to the field, now" in the first two sentences?
For mid-tier targets: read your Methods section as if you were a skeptical reviewer. Can you reproduce the study from what's written? Is every statistical choice justified?
For PLOS ONE-tier: check your reporting guidelines compliance against the relevant checklist (CONSORT, PRISMA, etc.) and your data availability section against the journal's requirements.
A pre-submission manuscript scan flags which of these tier-specific criteria your paper is likely to struggle with before reviewers see it.
FAQ
What do peer reviewers look for in a manuscript?
It depends on the journal tier. At Nature, Cell, and NEJM, reviewers evaluate significance and conceptual advance first. At journals with IF 5-10, methodology and rigor take center stage. At PLOS ONE, reviewers evaluate technical soundness only and are explicitly instructed not to consider novelty. Most generic advice conflates these into one list, which leads researchers to oversell novelty to methodological journals or undersell it to high-impact ones.
What is the most common reason peer reviewers reject manuscripts?
At high-impact journals, insufficient novelty or conceptual advance is the leading rejection reason. At mid-tier journals, methodological concerns dominate: inadequate controls, weak statistical analysis, or conclusions that outrun the data. At megajournals like PLOS ONE, scope mismatch and ethics-compliance gaps are more common rejection grounds than lack of novelty.
Do peer reviewers check for plagiarism?
Not directly. Plagiarism screening is automated by journal submission systems before papers reach reviewers. Reviewers do flag self-plagiarism and text recycling when they recognize it, and they may raise concerns about undisclosed overlap with prior work. But the primary plagiarism check is a software gate, not a reviewer responsibility.
How long does peer review take?
Median peer review times range from 4 weeks at fast-turnaround journals to 3-4 months at major disciplinary journals. Nature and Cell tend to move faster than their prestige might suggest (8-12 weeks from submission to first decision). Specialty journals are often slower. SciRev community data shows wide variation even within the same journal.
Can I find out what reviewers said before submitting?
Indirectly, yes. SciRev.sc collects author-reported review experiences including common feedback themes. PubPeer hosts post-publication reviewer-style comments. For journals that use open review (eLife, some PLOS titles), published reviewer reports are publicly accessible. Manusights synthesizes patterns from 750+ journal-specific desk-rejection and reviewer feedback records to flag likely reviewer concerns before you submit.
For journal-specific desk rejection patterns, see desk rejection rates by journal and the how to avoid desk rejection guide.
Sources
- PLOS ONE Reviewer Guidelines - PLOS ONE's official reviewer criteria, including explicit exclusion of novelty from evaluation
- Nature Portfolio Peer Review Policies - Nature's editorial criteria and grounds for rejection
- Elsevier Structured Peer Review - Elsevier's structured reviewer question banks
- Manuscript Rejection: Causes and Remedies (PMC) - Analysis of rejection reasons across journal types
- Cell Press Peer Review - Cell Press reviewer criteria and conceptual advance standards
- Enago Peer Review Checklist - Enago's guide to evaluating manuscripts as a peer reviewer
- Top 10 reasons for desk rejection without review (PMC) - PMC analysis of pre-review rejection patterns
Reference library
Use the core publishing datasets alongside this guide
This article answers one part of the publishing decision. The reference library covers the recurring questions that usually come next: whether the package is ready, what drives desk rejection, how journals compare, and what the submission requirements look like across journals.
Checklist system / operational asset
Elite Submission Checklist
A flagship pre-submission checklist that turns journal-fit, desk-reject, and package-quality lessons into one operational final-pass audit.
Flagship report / decision support
Desk Rejection Report
A canonical desk-rejection report that organizes the most common editorial failure modes, what they look like, and how to prevent them.
Dataset / reference hub
Journal Intelligence Dataset
A canonical journal dataset that combines selectivity posture, review timing, submission requirements, and Manusights fit signals in one citeable reference asset.
Dataset / reference guide
Peer Review Timelines by Journal
Reference-grade journal timeline data that authors, labs, and writing centers can cite when discussing realistic review timing.
Final step
Find out if this manuscript is ready to submit.
Run the Free Readiness Scan. See score, top issues, and journal-fit signals before you submit.
Anthropic Privacy Partner. Zero-retention manuscript processing.
Not ready to upload yet? See sample report