
Pre-Submission Review for Neuroscience Journals: What Reviewers Actually Scrutinize

Neuroscience manuscripts face heightened scrutiny on reproducibility, statistical methods, and sample sizes. Here is what editors and reviewers at top neuroscience journals actually look for.

Research Scientist, Neuroscience & Cell Biology

Author context

Works across neuroscience and cell biology, with direct expertise in preparing manuscripts for PNAS, Nature Neuroscience, Neuron, eLife, and Nature Communications.

Readiness scan

Find out if this manuscript is ready to submit.

Run the Free Readiness Scan before you submit. Catch the issues editors reject on first read.

Run Free Readiness Scan
Anthropic Privacy Partner. Zero-retention manuscript processing.
Open Journal Fit Checklist
Working map

How to use this page well

These pages work best when they behave like tools, not essays. Use the quick structure first, then apply it to the exact journal and manuscript situation.

Use this page for: Building a point-by-point response that is easy for reviewers and editors to trust.
Start with: State the reviewer concern clearly, then pair each response with the exact evidence or revision.
Common mistake: Sounding defensive or abstract instead of specific about what changed.
Best next step: Turn the response into a visible checklist or matrix before you finalize the letter.

Decision cue: Neuroscience is in the middle of a reproducibility reckoning. A landmark 2022 Nature study showed that brain-wide association studies require thousands of participants to produce reproducible findings, while the median neuroimaging study has a sample size of about 25. Editors at Nature Neuroscience, Neuron, and the Journal of Neuroscience are screening for these issues. If your manuscript has the statistical vulnerabilities that have plagued the field, a reviewer will find them.

Check your neuroscience manuscript readiness in 60 seconds with the free scan.

Why neuroscience manuscripts face unique scrutiny

Three field-specific issues make neuroscience manuscripts harder to get past editorial review than work in many other disciplines:

The reproducibility problem is well-documented

When 70 research teams independently analyzed the same neuroimaging dataset, no two used identical analysis pipelines, and their results varied substantially. This finding, published in Nature in 2020, changed how editors evaluate neuroscience methodology. Reviewers now ask not just "did you get a result?" but "would another group get the same result with the same data?"

The practical implication: if your methods section does not describe your analytical pipeline in enough detail for another lab to reproduce the analysis, reviewers will flag it. "We used SPM12 for fMRI analysis" is no longer sufficient. Which preprocessing steps? Which statistical model? Which correction for multiple comparisons?
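
One concrete way to meet that bar is to serialize the choices themselves alongside the shared code, so the methods section and the repository describe the same pipeline. A minimal sketch follows; it assumes a Python workflow, and every value in it (software release, preprocessing steps, thresholds) is an illustrative placeholder rather than a recommendation.

```python
import json

# Hypothetical example: record the exact analysis choices in a machine-readable
# file that travels with the deposited code. All values below are placeholders.
pipeline = {
    "software": {"name": "SPM12", "release": "r7771"},
    "preprocessing": [
        {"step": "realignment", "reference": "first volume"},
        {"step": "normalization", "template": "MNI152", "voxel_size_mm": [2, 2, 2]},
        {"step": "smoothing", "fwhm_mm": 6},
    ],
    "statistical_model": "mass-univariate GLM, event-related design",
    "multiple_comparisons": "cluster-level FWE, cluster-forming threshold p < 0.001",
}

with open("analysis_pipeline.json", "w") as handle:
    json.dump(pipeline, handle, indent=2)
```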

Sample sizes are under pressure

The median neuroimaging study sample size is about 25 participants. For simple sensory or motor tasks, this can be adequate. For complex brain-behavior associations (personality traits, psychiatric symptoms, cognitive abilities), recent evidence suggests that thousands of participants may be needed for reproducible results.

Editors at top journals now check sample size justification more carefully than they did even 5 years ago. A study with n=20 claiming brain-behavior associations will face immediate skepticism. Either justify the sample size with a power analysis, or acknowledge the limitation honestly and frame the findings appropriately.
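
If a power analysis is the justification, report it with the exact inputs used. The sketch below assumes a simple two-group comparison and uses statsmodels; the assumed effect size is something you would defend from pilot data or prior literature, not a default to copy.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical power analysis for a two-sample comparison: participants per
# group needed to detect a medium effect (Cohen's d = 0.5) at alpha = 0.05
# with 80% power. The effect size is an assumption to justify, not a default.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 64
```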

Multiple comparisons are aggressively scrutinized

A 2016 PNAS paper demonstrated that standard fMRI analysis pipelines could produce statistical artifacts without appropriate correction for multiple comparisons. The rate of false positive results was much higher than the nominal 5% in some widely used analysis packages.

Since then, reviewers have scrutinized multiple comparisons correction closely. Uncorrected results, liberal cluster-forming thresholds, and unreported comparison counts are red flags that experienced neuroscience reviewers catch immediately.
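
Stating the correction explicitly is easier when it is a visible step in the analysis code. A minimal sketch, assuming mass-univariate tests summarized as a vector of p-values; the values below are simulated placeholders.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Simulated placeholder p-values standing in for many independent tests
# (e.g., ROIs, electrodes, or behavioral measures).
rng = np.random.default_rng(0)
p_uncorrected = np.concatenate([rng.uniform(0.0001, 0.01, size=10),
                                rng.uniform(0.0, 1.0, size=190)])

# Benjamini-Hochberg FDR correction; report the method, the threshold, and the
# number of comparisons, not just which tests survived.
reject, p_corrected, _, _ = multipletests(p_uncorrected, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} of {len(reject)} tests survive FDR correction at q < 0.05")
```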

What editors at top neuroscience journals check first

Nature Neuroscience

Nature Neuroscience screens for: conceptual advance in understanding the brain (not just a new dataset), multi-level evidence (from molecules to circuits to behavior), and methodological rigor that withstands reproducibility concerns. The desk rejection rate is roughly 70 to 80%. The editorial question is: "Does this change how we think about the brain?"

Neuron

Neuron wants mechanistic insight into neural function. A descriptive finding without mechanistic explanation is weaker than one that explains why the brain does what it does. Electrophysiology, optogenetics, and computational modeling that explain circuit function are strong. Correlational observations without causal manipulation are weaker.

Journal of Neuroscience

JNeurosci has a broader scope than Nature Neuroscience or Neuron but still screens for technical rigor. The journal has been at the forefront of requiring transparent statistical reporting and has published guidelines on sample size, effect size reporting, and appropriate use of statistics.

The neuroscience pre-submission checklist

Methodology and reproducibility

  • the analytical pipeline is described in enough detail for another lab to reproduce the analysis
  • software packages are named with version numbers (a version-reporting sketch follows this list)
  • preprocessing steps are listed sequentially with parameters
  • statistical models are specified (not just "we used an ANOVA")
  • multiple comparisons correction is explicitly stated and justified
  • raw data or preprocessed data are available in a public repository (OpenNeuro, NITRC, Figshare)
  • analysis code is deposited in a public repository (GitHub with Zenodo DOI)
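
For the version-numbers item above, a small script can pull the installed versions directly from the environment so the methods section and the deposited code never drift apart. A minimal sketch, assuming a Python-based pipeline; the package names are placeholders for whatever your analysis actually imports.

```python
from importlib import metadata

# Report installed versions of the packages the pipeline uses (names are
# placeholders); paste the output into the methods section and the repository.
packages = ["numpy", "scipy", "nibabel", "nilearn"]
for name in packages:
    try:
        print(f"{name} {metadata.version(name)}")
    except metadata.PackageNotFoundError:
        print(f"{name} not installed")
```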

Sample size and power

  • sample size is justified (power analysis, practical constraints, or pilot data)
  • if the sample is small, limitations are acknowledged honestly
  • if claiming brain-behavior associations, the sample size issue is addressed directly
  • effect sizes are reported alongside p-values
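
For the last item, the effect size can be computed in the same step that produces the p-value. A hypothetical sketch with simulated data, using Cohen's d for a two-group comparison.

```python
import numpy as np
from scipy import stats

# Simulated placeholder data for two groups of 25 participants each.
rng = np.random.default_rng(1)
group_a = rng.normal(loc=0.0, scale=1.0, size=25)
group_b = rng.normal(loc=0.6, scale=1.0, size=25)

# Report the effect size (Cohen's d, pooled SD) alongside the test statistic
# and p-value rather than the p-value alone.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```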

For neuroimaging studies specifically

  • the MRI acquisition parameters are fully reported (field strength, coil, sequence parameters, spatial resolution)
  • the preprocessing pipeline is described step by step
  • the correction for multiple comparisons is appropriate (no uncorrected thresholds without justification)
  • region of interest (ROI) analyses are pre-specified or identified as exploratory
  • the statistical thresholding approach is justified
  • unthresholded statistical maps are available or can be made available
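
Making unthresholded maps available is usually a one-line save at the end of the analysis. A minimal sketch, assuming a Python/NiBabel workflow; the array and affine below are random placeholders standing in for a real statistical map and its template space.

```python
import numpy as np
import nibabel as nib

# Placeholder statistical map (MNI 2 mm grid dimensions) and identity affine;
# in a real pipeline these come from the fitted model and the template space.
stat_map = np.random.default_rng(2).normal(size=(91, 109, 91)).astype(np.float32)
affine = np.eye(4)

# Save the unthresholded map so it can be deposited (e.g., on NeuroVault)
# alongside the thresholded results reported in the paper.
nib.save(nib.Nifti1Image(stat_map, affine), "tstat_unthresholded.nii.gz")
```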

For electrophysiology studies

  • recording parameters are specified (electrode type, sampling rate, filtering; see the filtering sketch after this list)
  • spike sorting or signal processing methods are described with enough detail for reproduction
  • trial numbers are reported
  • statistical tests are appropriate for the data type and recording protocol
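
For the filtering item above, describing the filter in the same terms the code uses removes ambiguity. A hypothetical sketch with SciPy; the sampling rate, band edges, and filter order are placeholders to replace with whatever your recording and spike-sorting pipeline actually used.

```python
import numpy as np
from scipy import signal

# Placeholder recording parameters: 30 kHz sampling, 4th-order Butterworth
# band-pass from 300 to 3000 Hz, applied forward and backward (zero phase).
fs = 30000.0
sos = signal.butter(4, [300.0, 3000.0], btype="bandpass", fs=fs, output="sos")

def bandpass(trace):
    """Zero-phase band-pass filter for a single-channel voltage trace."""
    return signal.sosfiltfilt(sos, trace)

# One second of simulated noise standing in for a raw extracellular trace.
raw = np.random.default_rng(3).standard_normal(int(fs))
filtered = bandpass(raw)
```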

For animal studies

  • ARRIVE 2.0 guidelines are followed
  • sample sizes are justified
  • blinding and randomization procedures are described (a seeded allocation sketch follows this list)
  • exclusion criteria are pre-specified
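
For the randomization item above, a seeded allocation script documents exactly how animals were assigned to groups and lets anyone regenerate the assignment. A hypothetical sketch; the animal IDs, group labels, and seed are placeholders.

```python
import numpy as np

# Seeded, reproducible allocation of 20 hypothetical animals to two groups.
rng = np.random.default_rng(seed=20260317)
animal_ids = [f"mouse_{i:02d}" for i in range(1, 21)]
shuffled = rng.permutation(animal_ids)
allocation = {aid: ("treatment" if i < len(shuffled) // 2 else "control")
              for i, aid in enumerate(shuffled)}

# The allocation table can be archived with the raw data; the experimenter
# scoring outcomes sees only coded IDs (blinding handled separately).
for aid, group in sorted(allocation.items()):
    print(aid, group)
```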

Where pre-submission review helps most in neuroscience

The issues that cause neuroscience manuscript rejections are often not about the science being wrong. They are about methods that are insufficiently described, statistics that are inappropriately applied, or claims that outrun what the sample size can support.

A free readiness scan catches the most visible issues: claim strength, methodology gaps, citation problems, and journal fit. For neuroscience manuscripts specifically, citation verification is valuable because the field moves fast and citing superseded methods or retracted findings is a real risk.

The $29 AI Diagnostic provides the full assessment with verified citations from 500M+ live papers, figure-level feedback, and journal-specific scoring. For a manuscript targeting Nature Neuroscience or Neuron, the diagnostic evaluates readiness against the specific editorial standards of those journals.

For the highest-stakes submissions, Manusights Expert Review ($1,000 to $1,800) connects you with a reviewer who has published in and reviewed for your target neuroscience journal. A reviewer who knows what Nature Neuroscience editors screen for in the first read can identify framing and positioning issues that no automated tool can catch.

References

  1. Reproducible brain-wide association studies require thousands of individuals (Nature, 2022)
  2. Revisiting doubt in neuroimaging research (Nature Neuroscience, 2022)
  3. Controversy in statistical analysis of fMRI data (PMC, 2017)
  4. Reproducibility in neuroimaging analysis: challenges and solutions

Reference library

Use the core publishing datasets alongside this guide

This article answers one part of the publishing decision. The reference library covers the recurring questions that usually come next: how selective journals are, how long review takes, and what the submission requirements look like across journals.

Open the reference library

Final step

Find out if this manuscript is ready to submit.

Run the Free Readiness Scan. See score, top issues, and journal-fit signals before you submit.

Anthropic Privacy Partner. Zero-retention manuscript processing.

Run Free Readiness Scan

Need deeper scientific feedback? See Expert Review Options
