Publishing Strategy · 7 min read · Updated Apr 21, 2026

Average Review Times Across 100 Journals in 2026: What the Tracked Data Shows

Review speed is one of the most misread journal signals. Fast decisions can mean efficient editorial systems, harsh desk triage, or both.

Author context: Senior Researcher, Oncology & Cell Biology. Experience with Nature Medicine, Cancer Cell, Journal of Clinical Oncology.

What to do next

Already submitted? Use this page to interpret the status and choose the next step.

The useful next step is understanding what the status usually means, how long the wait normally runs, and when a follow-up is actually reasonable.


Quick answer: The average review times across 100 journals in 2026 do not point to one clean global norm. In the tracked benchmark, the median normalized first-decision time is 37.5 days, but the distribution is lopsided. A fast journal can be fast because editors screen aggressively. A slow journal can be slow because reviewers are scarce, the field expects long reports, or the queue is simply heavy. The useful number is not the headline average. It is the editorial model hiding underneath it.

What this benchmark actually covers

In our analysis of the tracked 100-journal set, we did five things:

  1. started from the active Manusights journal dataset
  2. removed obvious aliases
  3. ranked the cleaned set by current tracked journal prominence
  4. kept the top 100 unique journals
  5. normalized each journal's first-decision wording into a day-based benchmark

That gives a comparison layer, not a perfect reconstruction of every editorial workflow. "First decision" is not standardized across publishers.
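The normalization step above can be sketched in a few lines. This is a hypothetical illustration, not the actual Manusights pipeline: the function name and parsing rules are assumptions, and real tracked wording is messier than these examples.

```python
import re

def wording_to_days(wording: str) -> float:
    """Convert first-decision wording into a comparable day count.

    Ranges like "1 to 4 weeks" collapse to their midpoint; weeks and
    months are converted to days before averaging. Illustrative only.
    """
    units = {"day": 1, "week": 7, "month": 30}
    numbers = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", wording)]
    # Pick the first unit word that appears; default to days.
    factor = next((f for u, f in units.items() if u in wording.lower()), 1)
    midpoint = sum(numbers) / len(numbers)  # single value or range midpoint
    return midpoint * factor

print(wording_to_days("4 days median to first editorial decision"))  # 4.0
print(wording_to_days("1 to 4 weeks to first editorial decision"))   # 17.5
print(wording_to_days("about 150 to 200 days median"))               # 175.0
```

The midpoint convention is one defensible choice among several; a benchmark could equally report the range endpoints, which is worth remembering when comparing normalized numbers across sources.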

Headline benchmark numbers

Metric                                | Value
Journals analyzed                     | 100
Median normalized first-decision time | 37.5 days
Mean normalized first-decision time   | 52.4 days
Journals under 14 days                | 8
Journals from 14 to 29 days           | 24
Journals from 30 to 59 days           | 27
Journals at 60 days or longer         | 41

The median and mean tell different stories. The center sits around five to six weeks, but the slow tail is real enough to pull the mean upward.

Fast, middle, and slow bands

Speed band        | What the data usually means                               | What authors should infer
Under 14 days     | Professional editors, heavy desk triage, or both          | Good for fast answers, not always good for peer review odds
14 to 59 days     | Mixed editorial handling plus real reviewer recruitment   | Often the most normal operating range
60 days or longer | Reviewer scarcity, academic-editor drag, or deep queueing | High opportunity cost if your timeline is tight

That is the central reading mistake authors make: they treat all first-decision clocks as if they describe the same process.

The fastest journals in the tracked set

Journal              | Tracked first-decision wording
Nature Biotechnology | 4 days median to first editorial decision
Neuron               | 4 days to first decision
Nature Immunology    | 5 days median to first editorial decision
Cell Reports         | 5 days median to first editorial decision
Nature               | 7 days median to first decision
Nature Methods       | 7 days median to first editorial decision
Science Advances     | 1 to 4 weeks to first editorial decision
Nature Communications | about 9 days to first editorial decision

These are not just "efficient journals." They are mostly journals with strong in-house editorial systems and high willingness to decline quickly.

The slowest journals in the tracked set

Journal                                     | Tracked first-decision wording
Chemical Society Reviews                    | about 150 to 200 days median
Chemical Reviews                            | about 120 days to first decision
Renewable and Sustainable Energy Reviews    | about 120 to 180 days median
Astronomy and Astrophysics                  | about 120 to 150 days median
Advanced Energy Materials                   | about 100 to 140 days median
Applied Catalysis B: Environment and Energy | about 100 to 140 days median
Cancer Research                             | about 100 to 130 days median
Diabetes Care                               | about 100 to 130 days median

At the slow end, months matter. If you are applying for a job, closing a thesis chapter, or working against a grant deadline, that delay can be more important than the small prestige difference between two otherwise reasonable targets.

What we see when authors misuse review-time data

In our analysis of journal-timing pages and related submission strategy work, review-time data is most often misused in three ways.

Authors confuse fast triage with fast peer review. Editors actually screen very quickly at many prestige journals. A 4-day or 7-day first decision often says more about triage than about reviewer turnaround.

Authors compare one fast journal against one slow journal without comparing fit. We see this constantly. A faster first decision is not useful if the faster journal is simply a worse home for the manuscript.

Authors optimize for speed in isolation. We use this benchmark most effectively when it is paired with fit, desk rejection behavior, and the likely revision burden.

That is why this page should be read together with journal-specific guidance, not as a one-number ranking.

What to pair with review-time data

Metric to pair with review time | Why it matters
Acceptance rate                 | Separates fast answers from realistic publication odds
Desk rejection behavior         | Explains whether the fast clock is mostly editor triage
CiteScore                       | Helps place the journal's broader citation reach
SJR                             | Adds a prestige-weighted citation signal
H-index                         | Shows whether the journal has long-run field footprint

This is the easiest way to avoid overreading speed. A journal with a fast first decision and a harsh editorial filter behaves very differently from a journal with a similar clock and a more review-heavy workflow.

Why the mean is slower than the median

The benchmark mean is higher because the slow end stretches far out. In plain terms, the right tail is doing a lot of work.

A good way to read that:

  • many journals cluster around one month to six weeks
  • a meaningful minority rise from 60 days into the 100 to 150 day range
  • the slow journals are numerous enough to make the average feel worse than the midpoint

That is why authors should not plan from the mean alone.

Readiness check

While you wait, scan your next manuscript.

The scan takes about 1-2 minutes. Use the result to decide whether to revise before the decision comes back.


Why year-specific context still matters

One reason authors overread a single review-time number is that they forget journals change from year to year as submission volume, reviewer supply, and editorial staffing shift.

Public annual-report data from one journal does not replace a 100-journal benchmark, but it does show how unstable a single title's clock can be over time.

Year | Social Problems first-decision report | What it illustrates
2015 | 29 days                               | a sub-month baseline is possible in one editorial setup
2016 | 43 days                               | the clock can move materially in one year
2017 | 51 days                               | a journal can rise from the prior year without changing identity
2018 | 36 days                               | editorial systems can recover quickly
2019 | 28 days                               | the same journal can fall back under one month
2020 | 20 days                               | heavy operational shifts can compress first decisions
2021 | 22 days                               | faster handling can persist for more than one cycle
2022 | 19 days                               | one title can end up far below its own slower years

That single-journal example is not meant to define the whole market. It is here to make one planning point clear: a journal's timing in one year should never be treated as a permanent property. In that annual-report series, the first-decision clock fell from 51 days in 2017 to 19 days in 2022. That is a larger swing than the gap between many journals authors treat as decisively different.

What fast review times do and do not tell you

Fast first decisions often tell you:

  • the editorial office is organized
  • the journal is comfortable making early no decisions
  • the scope filter is tight

Fast first decisions do not automatically tell you:

  • that external reviewers were quick
  • that the journal is easy to publish in
  • that the full path to publication will be short

Nature and Cell family journals are the clearest example. Their clocks are fast because editorial screening is fast.

Submit if / Think twice if

Use review-time data heavily if:

  • you have a real deadline
  • you are choosing between two journals with similar readership fit
  • you need quick feedback more than maximum prestige

Think twice if:

  • the faster journal is only faster because it desk rejects aggressively
  • the slower journal is meaningfully better aligned with the manuscript
  • you are using speed as a substitute for honest fit assessment
  • the paper still needs structural work that will hurt it anywhere

What this page does not claim

This benchmark is useful, but it is still a simplification.

It does not claim that:

  • every journal reports first decision in the same way
  • every 7-day first decision means the same editorial event
  • every 120-day first decision means deeper or better review

It does claim that the tracked distribution is wide enough that review time should be part of submission strategy rather than an afterthought.

The practical lesson

The number you should care about is not just "how long until I hear back?"

The better question is:

What kind of decision is this journal usually fast at?

If the answer is "fast at screening out weak or misfit submissions," that can still be useful. It is just a different kind of usefulness than fast external review.

If you need the broad benchmark, this page gives it. If you need the manuscript-level decision, the smarter move is to pair it with desk-rejection risk, fit, and the likely reviewer burden before submitting.

Before you enter a long queue, a manuscript readiness check can catch fit and framing problems that no fast journal will rescue.

Frequently asked questions

How many journals does this benchmark cover?
This report analyzes 100 unique journals selected from the active Manusights tracked set after alias cleanup and normalization of each journal's first-decision wording into a comparable day-based benchmark.

What were the median and mean first-decision times?
The median normalized first-decision time across the 100-journal benchmark set was 37.5 days, while the mean was slower because the long-delay tail is substantial.

Which journals had the fastest first decisions?
The fastest first-decision signals in the tracked set include Nature Biotechnology and Neuron at 4 days, then Nature Immunology and Cell Reports at 5 days, with Nature and Nature Methods near 7 days.

Which journals had the slowest first decisions?
The slowest tracked journals in this benchmark include Chemical Society Reviews at roughly 150 to 200 days, then Chemical Reviews and Renewable and Sustainable Energy Reviews at around 120 days or more.

Does a fast first decision mean fast peer review?
Not necessarily. Fast timelines often reflect strong editorial triage rather than unusually fast external peer review. Speed becomes useful only when read together with desk rejection behavior, fit, and the likely review depth.

Sources

  1. Manusights peer review timelines resource
  2. Nature journal information
  3. Nature Communications journal metrics
  4. Science information for authors
  5. Social Problems 2022 annual report
  6. SciRev
