Average Review Times Across 100 Journals in 2026: What the Tracked Data Shows
Review speed is one of the most misread signals in journal strategy. Fast decisions can mean efficient editorial systems, harsh desk triage, or both. Slow decisions can reflect reviewer scarcity, field norms, or simply queue design.
Author context
Senior Researcher, Oncology & Cell Biology. Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
What to do next
Already submitted? Use this page to interpret what the timeline usually means, how long the wait normally runs, and when a follow-up is actually reasonable.
Authors ask about review time as if it were a single number with a single meaning.
It is not.
A fast first decision can mean:
- a strong professional-editor system
- a journal that desk rejects quickly
- a venue with tight scope and fast triage
A slow first decision can mean:
- reviewer scarcity
- academic-editor bottlenecks
- field norms with long referee cycles
- a journal that sends more papers deeper into review before deciding
That is why review time matters, but only if you interpret it properly.
What this report covers
This report summarizes a 2026 benchmark built from the active Manusights journal dataset in apps/web/src/data/journals.ts.
Methodology:
- start with the tracked journal set in the repo
- deduplicate obvious journal aliases by normalized name
- rank the deduplicated set by tracked impact factor
- take the top 100 unique journals
- normalize the timeToDecision string by extracting the first usable time value in days
This normalization is intentionally simple. It gives a cross-journal benchmark for first-decision speed, not a perfect reconstruction of every journal's editorial workflow.
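A minimal TypeScript sketch of that normalization step, assuming a "first number wins" rule and simple unit conversion. The function name and parsing details here are illustrative, not the repo's actual code:

```typescript
// Hypothetical sketch: pull the first usable time value out of a free-text
// timeToDecision string and convert it to days. Range strings like
// "1-4 weeks" collapse to their first value, mirroring the "first usable
// time value" rule described above. Unit handling is an assumption.
function normalizeToDays(timeToDecision: string): number | null {
  // First number, an optional "-N" range tail, then a day/week/month unit.
  const match = timeToDecision.match(
    /(\d+(?:\.\d+)?)\s*(?:-\s*\d+(?:\.\d+)?\s*)?(day|week|month)/i
  );
  if (!match) return null;
  const value = parseFloat(match[1]);
  const unit = match[2].toLowerCase();
  if (unit === "week") return value * 7;
  if (unit === "month") return value * 30;
  return value; // already in days
}

normalizeToDays("4 days median to first editorial decision"); // 4
normalizeToDays("1-4 weeks to first editorial decision");     // 7
normalizeToDays("~9 days to first editorial decision");       // 9
```

Collapsing a range to its first value is the simplest possible rule; it biases ranged entries toward their optimistic end, which is one more reason to treat the benchmark as comparative rather than exact.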
Headline findings
Across the 100-journal benchmark set:
| Metric | Value |
|---|---|
| Journals analyzed | 100 |
| Median normalized first-decision time | 37.5 days |
| Mean normalized first-decision time | 52.4 days |
| Journals under 14 days | 8 |
| Journals from 14 to 29 days | 24 |
| Journals from 30 to 59 days | 27 |
| Journals at 60 days or longer | 41 |
The biggest strategic takeaway is simple: the distribution is not centered on "a few weeks." It is centered closer to five to six weeks, with a long slow tail.
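As an illustration, the median and mean above reduce to a simple summary pass over the normalized day values. This TypeScript sketch uses a made-up six-value array, not the real 100-journal set:

```typescript
// Illustrative summary pass over normalized first-decision times (in days).
// The sample array below is invented for demonstration.
function summarize(days: number[]): { median: number; mean: number } {
  const sorted = [...days].sort((a, b) => a - b);
  const mid = sorted.length / 2;
  const median =
    sorted.length % 2 === 0
      ? (sorted[mid - 1] + sorted[mid]) / 2 // even count: average the middle pair
      : sorted[Math.floor(mid)];            // odd count: take the middle value
  const mean = days.reduce((sum, d) => sum + d, 0) / days.length;
  return { median, mean };
}

summarize([4, 9, 30, 45, 60, 150]); // median 37.5, mean ≈ 49.7
```

Even in this toy array, one 150-day journal is enough to push the mean well above the median, which is exactly the shape the full benchmark shows.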
The fastest journals in the current benchmark
| Journal | Tracked first-decision field |
|---|---|
| Nature Biotechnology | 4 days median to first editorial decision |
| Neuron | 4 days to first decision |
| Nature Immunology | 5 days median to first editorial decision |
| Cell Reports | 5 days median to first editorial decision |
| Nature | 7 days median to first decision |
| Nature Methods | 7 days median to first editorial decision |
| Science Advances | 1-4 weeks to first editorial decision |
| Nature Communications | ~9 days to first editorial decision |
This list is not random. Most of these journals are either:
- professional-editor venues
- heavily triaged journals
- both
That is why fast time-to-first-decision should not automatically be read as "reviewers moved fast." Often the editor moved fast.
The slowest journals in the current benchmark
| Journal | Tracked first-decision field |
|---|---|
| Chemical Society Reviews | ~150-200 days median |
| Chemical Reviews | ~120 days to first decision |
| Renewable & Sustainable Energy Reviews | ~120-180 days median |
| Astronomy & Astrophysics | ~120-150 days median |
| Advanced Energy Materials | ~100-140 days median |
| Applied Catalysis B: Environment and Energy | ~100-140 days median |
| Cancer Research | ~100-130 days median |
| Diabetes Care | ~100-130 days median |
These journals differ widely in prestige and field, but they share one trait: a first decision can take months rather than weeks.
What the distribution suggests
The benchmark breaks naturally into three broad speed bands.
1. Rapid triage and fast editorial systems
Under about two weeks.
These journals often combine:
- strong in-house editorial operations
- tight scope filters
- fast desk decisions
2. Moderate first-decision environments
About two to eight weeks.
This is where many journals live. It is often long enough to include meaningful editor handling and reviewer recruitment, but not so long that the paper disappears into a deep queue immediately.
3. Slow-queue journals
Sixty days and up.
These journals often rely on:
- harder reviewer recruitment
- longer referee norms
- heavy volume
- academic-editor workflows
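A minimal classifier for the three bands, assuming cutoffs at 14 and 60 days (approximations of the "about two weeks", "two to eight weeks", and "sixty days and up" ranges described above):

```typescript
// Assumed banding rule for the three speed bands described above.
// The exact thresholds (14 and 60 days) approximate the report's cuts.
type SpeedBand = "rapid-triage" | "moderate" | "slow-queue";

function speedBand(days: number): SpeedBand {
  if (days < 14) return "rapid-triage"; // fast editorial systems, heavy desk triage
  if (days < 60) return "moderate";     // editor handling plus reviewer recruitment
  return "slow-queue";                  // multi-month referee cycles
}

speedBand(7);   // "rapid-triage"
speedBand(37);  // "moderate"
speedBand(120); // "slow-queue"
```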
Why the mean is much slower than the median
The median in this benchmark is 37.5 days.
The mean is 52.4 days.
That gap matters because it shows a long right tail. A meaningful share of journals are much slower than the center of the distribution, and those slow journals drag the average upward.
In practical terms: many authors will see a first decision within roughly four to six weeks, but a large minority of journals still behave more like two- to four-month systems.
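A toy TypeScript illustration of that tail effect, with made-up values: dropping the single slowest journal barely changes the center of the distribution but pulls the mean down sharply.

```typescript
// Toy illustration of the right tail: one very slow journal drags the
// mean upward far more than it moves the middle. Values are invented.
const mean = (xs: number[]): number =>
  xs.reduce((sum, x) => sum + x, 0) / xs.length;

const allJournals = [20, 25, 30, 40, 45, 180]; // one 180-day outlier
const withoutTail = [20, 25, 30, 40, 45];

mean(allJournals); // ≈ 56.7, pulled up by the 180-day outlier
mean(withoutTail); // 32, much closer to the center
```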
Why fast journals are not automatically author-friendly
This is the main interpretive trap.
A seven-day decision at Nature is not equivalent to a seven-day reviewer turnaround at a mid-tier specialty journal. It is often a fast editorial screen.
That is why you need to read speed together with desk rejection behavior.
For example:
- Nature in the tracked dataset is at 7 days median to first decision and around 70% desk rejection
- Nature Communications is around 9 days to first editorial decision and is also strongly triaged
- Neuron is fast as well, but that speed sits inside a high-selectivity environment
Fast can mean efficient. It can also mean you are being declined faster.
What authors should actually do with review-time data
Use it for planning, not wishful thinking.
If speed matters because of deadlines
Prefer journals where:
- the first-decision timeline is genuinely shorter
- the field norm supports faster reviewer recruitment
- the journal does not have an obviously slow queue history
If quality of fit matters more than speed
A slower but better-fit journal may still be the rational choice.
If the paper is fragile or borderline
A fast desk-reject environment may punish overreach quickly. Sometimes that is useful. Sometimes it is an expensive prestige reflex.
What this dataset does and does not mean
It means:
- first-decision time varies much more than authors often assume
- a large share of journals still run on multi-month clocks
- professional-editor systems dominate the fast end
It does not mean:
- every journal reports time the same way
- every first decision reflects the same editorial stage
- a fast journal gives a better overall author experience
The BMJ and several other journals in the tracked set illustrate this ambiguity well. Some first-decision strings clearly blend desk-decision speed with peer-reviewed median timing. That is why the normalization here is useful for comparison but not the final word on any one venue.
The practical submission lesson
If speed genuinely matters, ask two questions before you submit:
- how fast is the journal's first decision really
- how much of that speed is just desk triage
Those are not the same question.
This report is most useful when paired with the 2026 desk rejection rates report, the complete peer review process guide, and the guide to what happens after your paper is accepted.
And if the paper is still structurally weak, a faster queue will not rescue it. In that situation, the higher-value move is usually a pre-submission Manusights AI Review.
Verdict
Across this 100-journal benchmark, the median first-decision timeline is 37.5 days, but the average is much slower because the long-delay tail is real.
The fastest journals are usually fast because they run aggressive editorial systems, not because peer review magically became frictionless. The slowest journals reflect the opposite problem: reviewer scarcity, field norms, and heavier queue drag. Use the number, but read the system behind it.
Reference library
Use the core publishing datasets alongside this guide
This article answers one part of the publishing decision. The reference library covers the recurring questions that usually come next: how selective journals are, how long review takes, and what the submission requirements look like across journals.
Dataset / reference guide
Peer Review Timelines by Journal
Reference-grade journal timeline data that authors, labs, and writing centers can cite when discussing realistic review timing.
Dataset / benchmark
Biomedical Journal Acceptance Rates
A field-organized acceptance-rate guide that works as a neutral benchmark when authors are deciding how selective to target.
Reference table
Journal Submission Specs
A high-utility submission table covering word limits, figure caps, reference limits, and formatting expectations.
Best next step
Already submitted? Use this page to interpret the timeline and choose the next sensible move: guidance on follow-up timing and on what to do while the manuscript is still in the system. Save the Free Readiness Scan for the next paper you have not yet submitted.