Manuscript Preparation · 9 min read · Updated Apr 15, 2026

What Peer Reviewers Do in the First 10 Minutes: A Behavioral Guide (2026)

Peer reviewers don't read your manuscript cover to cover. They form a provisional accept-or-reject judgment in the first 10 minutes, and the rest of the review largely confirms that initial read. The sequence differs by journal tier, and understanding it changes how you should structure your manuscript.

Author context

Associate Professor, Clinical Medicine & Public Health. Specializes in clinical and epidemiological research publishing, with direct experience preparing manuscripts for NEJM, JAMA, BMJ, and The Lancet.

Readiness scan

Find out if this manuscript is ready to submit.

Run the Free Readiness Scan before you submit. Catch the issues editors reject on first read.

Working map

How to use this page well

These pages work best when they behave like tools, not essays. Use the quick structure first, then apply it to the exact journal and manuscript situation.

  • Use this page for: building a point-by-point response that is easy for reviewers and editors to trust.
  • Start with: state the reviewer concern clearly, then pair each response with the exact evidence or revision.
  • Common mistake: sounding defensive or abstract instead of specific about what changed.
  • Best next step: turn the response into a visible checklist or matrix before you finalize the letter.

Quick answer: Reviewers form a provisional accept-or-reject judgment in the first 10 minutes. At Nature-tier journals, they read the abstract and then look at the last figure before the methods. At methods-focused mid-tier journals, they read the abstract and go straight to the methods. At PLOS ONE, it is the abstract for scope fit, then methods for completeness. The anchoring effect means the rest of the review largely confirms that first read.

The standard advice tells you to write a strong abstract and clear figures. That is correct, but it misses the mechanism. The reason those elements matter so much is not just that they set context. It is that reviewers at high-volume journals have learned to use them as a fast filter. The first 10 minutes of a review is a sorting decision, not a reading decision. Understanding what a reviewer is sorting for, at your target journal's tier, changes which parts of your manuscript need the most work.

Before you submit, run your manuscript through Manusights pre-submission review to get a readiness verdict that applies the same first-impression logic: abstract clarity, figure strength, and methods-design adequacy checked before peer review does it.

The reading sequence by journal tier

The pattern below is drawn from published reviewer guidelines and observations from our reviewer network across 750+ manuscript evaluations. Individual reviewers vary, but the tier-level pattern is stable.

  • IF 30+ (elite): Nature, Cell, Science, NEJM. Reviewers read the abstract first, then the last figure or conclusion figure. First-pass question: does the conclusion justify the hype?
  • IF 10-30 (high-impact): Nature Communications, PNAS, JACS. Reviewers read the abstract first, then Figure 1 and the overall figure set. First-pass question: is the advance real and clearly shown?
  • IF 5-10 (mid-tier): PLOS Biology, Genome Biology, J Clin Invest. Reviewers read the abstract first, then the methods section. First-pass question: can this design support the claim?
  • IF 2-5 (selective broad): Scientific Reports, BMC series. Reviewers read the abstract first, then the methods and reporting checklists. First-pass question: is this scientifically sound?
  • Megajournals (IF 1-3): PLOS ONE, Frontiers series. Reviewers read the abstract for scope fit, then the methods for completeness. First-pass question: does it meet the seven criteria?

This is not the sequence reviewer guidelines prescribe. It is what reviewers actually do, inferred from the official guidance each publisher provides and from the reading patterns that expert judgment under time pressure produces.

What a Nature-tier reviewer does in the first 10 minutes

Nature's editors assign manuscripts to specialist reviewers with deep field knowledge. Those reviewers are not reading your paper the way a student reads a paper. They are using a trained pattern-recognition scan.

Here is the actual sequence.

They read the abstract in full, usually in under two minutes. They are not verifying anything yet. They are forming a question: "Is this claim interesting enough to spend serious time on?" If the abstract does not articulate a clean, specific finding, the review effectively begins with a negative lean. Not a rejection, but a posture of skepticism that the rest of the read will have to overcome.

Then they skip to the last figure. Not Figure 1. The last substantive figure, or the data figure that most directly supports the abstract's conclusion. The reason is calibration: they want to know whether the conclusion is earned before they invest in the methods. If the final figure looks weak, preliminary, or insufficient relative to the abstract's claim, the provisional judgment moves toward rejection immediately.

Then they may glance at Figure 1, which at high-impact journals should establish the biological or conceptual question in a single visual unit. If Figure 1 is a schematic or a data-light orientation figure, it loses much of its value. A reviewer who has seen 200 papers from this journal wants Figure 1 to show the first piece of real evidence.

Methods come later in this tier. The reviewer at Nature knows the field well enough to spot a design problem from the figure data even before reading the methods text. The methods are where they go to confirm concerns, not to develop them.

The entire first-pass read at this tier takes roughly 20 to 30 minutes.

What a mid-tier methods-focused reviewer does first

At journals with an IF between 5 and 15, the reviewer's job is fundamentally different. These journals are not deciding whether the finding is important enough to change the field. They are deciding whether the evidence is strong enough to support the claim.

Elsevier's reviewer guidance tells reviewers to prioritize the methods section for experimental manuscripts, because certain flaws here are considered critical and warrant rejection. This is official guidance, not a behavioral quirk.

In practice, the sequence looks like this. Abstract first, same as above. But then, straight to the methods. A reviewer at a Journal of Clinical Investigation-tier journal is asking: what model system, what controls, what comparator, what sample size? If the experimental design cannot support the stated conclusion, the reviewer knows this before looking at any figure. The figures can only be evaluated in light of the design.

This tier produces the rejection letters that feel most arbitrary to authors: "The study design is insufficient to support the authors' conclusions." It does not feel arbitrary to the reviewer. They spotted the design problem in the first paragraph of the methods section, and the rest of the review confirmed it.

The first-pass time at this tier is typically 30 to 45 minutes, with most of the early read concentrated in the abstract and methods.

What a PLOS ONE reviewer checks first

PLOS ONE instructs reviewers to evaluate manuscripts against seven editorial criteria, and explicitly tells reviewers that novelty is not one of them. This is one of the clearest cases where official reviewer guidance reveals the actual first-pass question.

The first pass at PLOS ONE goes like this. Abstract for scope fit: is this within the journal's scope, and does it address a real scientific question? Then methods for completeness and compliance: are reporting checklists satisfied, are ethical approvals in place, and are the methods described in enough detail for the work to be replicated? Reviewers at PLOS ONE are checking a checklist before they are evaluating a story.

This has a specific implication for manuscripts targeting PLOS ONE. A manuscript that reads like a Nature submission, with its significance and conceptual advance foregrounded, is not necessarily better positioned here. The reviewer is not looking for that framing. A reviewer trained on PLOS ONE criteria will move through the methods section carefully looking for reporting compliance items that a Nature-oriented manuscript might have buried in supplementary material.

The provisional judgment and the anchoring effect

In our reviewer network, the pattern that comes up most consistently is this: reviewers form a provisional lean within the first read of the abstract and Figure 1, and the rest of the review is largely confirmatory.

This is not unique to peer review. It is well-documented in expert judgment across domains. In decision-making research, this is called anchoring: a first impression sets a reference point, and subsequent information is interpreted relative to it, not from a blank slate. The implication for manuscript structure is direct. The abstract and the opening figure are not just context-setters. They are the provisional judgment itself.

When I have seen reviews end in rejection, the reason was almost always visible in the reviewer's first 10 minutes of reading, even if the formal report cited a methodological concern in section 3 or a figure panel issue in the supplementary material. The framing of the abstract had already shaped how those sections were read.

The honest caveat: this is based on network observations and behavioral research applied to expert judgment, not a controlled study of reviewer eye-tracking data. Individual reviewers vary significantly. Some reviewers are linear readers who go section by section. Some reviewers read figures only. The anchoring pattern describes a central tendency, not a law.

What this means for your abstract and Figure 1

For high-impact journals, the abstract has one job: state a clean, specific finding, not a hypothesis confirmed, not an approach validated, a finding. The format that works is: problem (one sentence), what you did (one sentence), what you found (two sentences), why it matters (one sentence). Everything else is decoration.

"We found that X inhibits Y through Z, explaining why [phenotype] occurs in [context]" is a finding. "We investigated the role of X in Y and characterized its mechanism" is a description of what you did, not what you found. Reviewers at Nature-tier journals have read both versions ten thousand times. The second version starts with a negative lean.

Figure 1 at elite journals should show the first piece of real evidence for your claim, not a cartoon of your model or a diagram of your experimental pipeline. The reviewer is calibrating against your abstract. If Figure 1 shows a schematic, they have nothing to calibrate against yet and their attention starts to drift.

For methods-focused journals, the abstract still needs a clear finding. But the methods section's first paragraph needs to be tight: the model system, the core experimental approach, and the comparison that powers the main claim should all be legible within the first 200 words of the methods. If the reviewer has to read four paragraphs before understanding what you actually measured, you have already created a friction point that shapes the rest of the read.

What reviewers check at each subsequent stage

After the first-pass read, the review moves into a more systematic evaluation. The sequence varies by tier, but the general structure is consistent across Wiley's step-by-step reviewing guidance and comparable publisher guidance from Springer and Taylor & Francis.

Methods (in detail): Reviewers check experimental design adequacy, controls, statistical approach, and whether the conclusions are supported by the evidence. At mid-tier journals, this is the section that generates the most rejection-driving concerns.

Figures and data: Each figure is evaluated for whether it does what the caption claims and whether the statistical presentation is appropriate. Reviewers note panel quality, label completeness, and whether key controls are shown or merely referenced.

Results narrative: Reviewers check whether the results text describes what the figures show, or whether it overclaims relative to the data. A single sentence that says "these data demonstrate..." when the figure shows a trend with wide error bars is a common friction point.

Discussion: At high-impact journals, the discussion is read for whether the authors correctly place their finding in the field, without either underselling or overclaiming. At methods-focused journals, reviewers check whether the limitations section is honest about the study's design constraints.

References: Reviewers do not read the reference list systematically, but they do notice missing key citations, especially papers from their own lab or from the field's defining studies. A missing reference to a directly relevant prior finding can signal to a reviewer that the authors have an incomplete command of the literature.

What authors optimize vs. what reviewers actually check

Understanding where authors spend revision energy versus where reviewers spend evaluation time helps explain why some rejections feel inexplicable. Authors polish discussion prose; reviewers often make up their minds before reading it.

  • Abstract: written last and often rushed by authors; it is the first thing reviewers read, and the provisional judgment forms here.
  • Figure 1: treated by authors as an intro or orientation schematic; reviewers use it to calibrate against the abstract's claim.
  • Methods: detailed but dense; reviewers read the first paragraph for design legibility, and mid-tier reviewers read the whole section carefully.
  • Results: carefully narrated by authors; reviewers cross-check the text against figure panels for overclaiming.
  • Discussion: heavily polished by authors; reviewers scan it for proportionality and for honestly stated limitations.
  • Supplementary: used as overflow for everything; rarely read on first pass, since major concerns are spotted in the main text.

The mismatch in attention is the structural reason that manuscripts with excellent discussions still get rejected on methods grounds, and manuscripts with careful prose still get desk-rejected on abstract grounds.

For a direct read on how your manuscript handles this attention split, see the peer review process guide and the what peer reviewers look for page, which covers tier-stratified criteria in more detail.

Named failure patterns from the first 10 minutes

In our pre-submission review work across 750+ manuscripts, three patterns generate the most consistent negative provisional judgments in the first-pass read across all tiers.

An abstract that reports an approach, not a finding. "We developed a method to assess X in Y, which provides a new tool for the field" is not a finding. Reviewers at every tier form a provisional lean based on whether the abstract states something that was learned, not something that was built. If your abstract can be summarized as "we did X," it will be read differently than if it can be summarized as "we showed Y."

A Figure 1 that does not support the abstract's central claim. The mismatch between what the abstract promises and what the first data figure delivers is the most consistent early rejection trigger we see. An abstract that claims a mechanism, paired with a Figure 1 that shows only a correlation or a colocalization, leaves the reviewer with an immediate credibility gap. They will read the rest of the paper looking for where the mechanism evidence actually lives, and if it sits in Figure 4 with no setup, the structure itself becomes a rejection reason.

A methods section that buries the key experimental comparison. Reviewers at methods-focused journals are looking for the core comparison in the first substantive paragraph of the methods. If the primary comparison (drug vs. vehicle, mutant vs. wild type, intervention vs. control) is left unclear until page 8 of the methods, the reviewer has to work to understand what was actually done. That work creates a friction that shapes everything that follows.

Readiness check

Run the scan to see how your manuscript scores on these criteria.

See score, top issues, and what to fix before you submit.


Submit if / Think twice if

Submit when:

  • Your abstract states a specific finding, not a process description
  • Figure 1 shows real data directly supporting your central claim
  • The methods' first paragraph makes the core comparison legible in under 30 seconds
  • The abstract's conclusion and the paper's final figure are in calibrated proportion to each other

Think twice if:

  • Your abstract ends with "these findings suggest" rather than stating what was found
  • Figure 1 is a schematic, a workflow diagram, or an overview figure with no quantitative data
  • The manuscript's main advance is buried in Figure 4, with Figure 1 as scene-setting
  • The methods section opens with four paragraphs of reagent sourcing before describing what was done
  • The paper's claim and the figure resolution are mismatched (big claim, small n, no replication)

A pre-submission review at Manusights applies this same first-impression logic before your manuscript reaches a reviewer. You get a readiness score and a flag on the specific mismatch patterns that create negative provisional judgments.

References

  1. PLOS ONE Reviewer Guidelines: https://journals.plos.org/plosone/s/reviewer-guidelines
  2. Elsevier Reviewer Guidance, How to Review: https://www.elsevier.com/reviewers/how-to-review
  3. Wiley Step-by-Step Guide to Reviewing a Manuscript: https://authors.wiley.com/Reviewers/journal-reviewers/how-to-perform-a-peer-review/step-by-step-guide-to-reviewing-a-manuscript.html
  4. Nature Portfolio Editorial Policies, Peer Review: https://www.nature.com/nature-portfolio/editorial-policies/peer-review

Reference library

Use the core publishing datasets alongside this guide

This article answers one part of the publishing decision. The reference library covers the recurring questions that usually come next: whether the package is ready, what drives desk rejection, how journals compare, and what the submission requirements look like across journals.

Open the reference library

Final step

Find out if this manuscript is ready to submit.

Run the Free Readiness Scan. See score, top issues, and journal-fit signals before you submit.

