
How to Write a Methods Section That Survives Review and Supports Reproducibility

A methods section does not exist to prove that you did something complicated. It exists to let a skeptical reader understand exactly what you did, why you did it, and whether the results can be trusted.

How to use this page well

This page works best when it behaves like a tool, not an essay. Use the quick structure first, then apply it to your exact journal and manuscript situation.

  • Use this page for: getting the structure, tone, and decision logic right before you send anything out.
  • Most important move: make the reviewer-facing or editor-facing ask obvious early rather than burying it in prose.
  • Common mistake: turning a practical page into a long explanation instead of a working template or checklist.
  • Next step: use the page as a tool, then adjust it to the exact manuscript and journal situation.

Quick answer: The methods section is where authors most often mistake familiarity for clarity. Because you ran the assay, built the pipeline, or designed the cohort, the procedure feels obvious to you. Reviewers do not have that privilege.

That means the methods section is not housekeeping. It is one of the core trust-building sections of the paper.

Short answer

A strong methods section answers six questions fast:

  • What was studied? Participants, materials, datasets, organisms, or samples.
  • How was it designed? Study design, controls, sampling, randomization, blinding.
  • What exactly was done? Procedures, instruments, software, parameters, timing.
  • What was measured? Outcomes, variables, endpoints, definitions.
  • How was it analyzed? Statistical methods, assumptions, missing data, software.
  • What oversight applied? Ethics approval, consent, preregistration, data or code availability.

If your methods section cannot answer those cleanly, reviewers will start worrying long before they get to the Discussion.

What current publisher guidance emphasizes

Recent Nature Portfolio editorials and journal instructions are unusually clear about what goes wrong in methods reporting.

Nature Climate Change wrote in 2025 that clear methods reporting is essential for reliable and reproducible science and can also prevent an extended review process. It also noted that its Methods section appears at the end of the manuscript, is published online only, and allows roughly 3,000 words, while still expecting a brief methods description and discussion of caveats in the main text.

Scientific Reports gives concrete article-level limits that show how journals think structurally:

  1. main text ideally no more than 4,500 words, excluding Abstract, Methods, References, and figure legends
  2. abstract no more than 200 words
  3. title no more than 20 words
  4. ethics and informed-consent statements should appear in the Methods section where relevant

Nature Cell Biology and other Nature titles go further by emphasizing:

  • a mandatory Data Availability statement
  • a Statistics and Reproducibility section
  • reporting summaries for many article types
  • code details sufficient for others to follow the conclusions

Nature Methods has also argued recently for logically organized methods, no artificial fear of detail, and step-by-step protocol support when needed.

These are not cosmetic preferences. They show what reviewers and editors increasingly treat as baseline.

The job of the methods section

The methods section has three jobs at once.

1. Let reviewers assess validity

Reviewers are testing whether the result can be believed.

2. Let readers understand what the result means

Interpretation depends on design choices, exclusions, preprocessing, and endpoint definitions.

3. Support reproducibility or at least transparent reusability

Even when exact replication is unrealistic, the logic and procedure must still be inspectable.

When authors treat methods as a compressed ritual paragraph, all three jobs suffer.

A structure that works for most empirical papers

Use the order below unless the target journal requires something else.

1. Study design

Start with the design logic:

  • randomized trial, cohort, case-control, cross-sectional, qualitative, computational benchmark, etc.
  • prospective or retrospective
  • primary and secondary endpoints where relevant

If you bury the design until late in the section, reviewers have to reverse-engineer the paper.

2. Participants, samples, datasets, or materials

Be concrete about:

  • who or what entered the study
  • inclusion and exclusion criteria
  • recruitment or sampling source
  • dates or collection period when relevant
  • final analytic sample size and attrition

3. Experimental or analytic procedures

This is where underreporting often becomes fatal.

State:

  • what was done
  • in what order
  • using what platform, instrument, reagent, model, or software
  • under what settings or thresholds

If a method is standard, you may cite the original source, but do not assume citation replaces explanation.

4. Outcome definitions

Many manuscripts fail because the methods never define the exact outcome the Results later analyze.

Specify:

  • how variables were operationalized
  • any transformations or derived measures
  • primary versus exploratory outcomes

5. Sample size justification

This is one of the most common reasons for methods-section rejection: no power analysis or sample size justification. A peer-reviewed guide in the Indian Journal of Anaesthesia states that "not performing power analysis for sample size calculation is usually considered a good reason for article rejection."

For clinical trials: report the power calculation, the assumed effect size, the alpha level, and the resulting sample size requirement. For observational studies: justify why the available sample size is sufficient. For computational studies: explain why the dataset size supports the claims. Even when exact power analysis isn't applicable (qualitative research, case studies), state why the sample or dataset is appropriate for the conclusions drawn.
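
As a minimal sketch of what such a justification rests on, here is a power calculation in Python using statsmodels, assuming a two-arm comparison of means; the effect size, alpha, and power are illustrative placeholders, not recommendations:

```python
# Hypothetical power calculation for a two-arm comparison of means.
# Assumed inputs (illustrative only): Cohen's d = 0.5, alpha = 0.05, power = 0.80.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Required sample size per group: {n_per_group:.0f}")  # prints 64
```

Whatever tool you use, the Methods text should report the same four ingredients the code makes explicit: the assumed effect size, the alpha level, the target power, and the resulting sample size.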

6. Statistical analysis

This should not be a boilerplate afterthought.

At minimum, spell out:

  • which tests or models were used
  • what assumptions mattered
  • whether tests were one- or two-sided
  • how multiple comparisons were handled
  • how missing data were handled
  • what software and version were used

If the paper uses modeling, machine learning, or complex preprocessing, the statistical and computational workflow may need its own subheadings.
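
One way to keep these details accurate is to let the analysis script record them, so the Methods text can be checked against the code. A minimal sketch in Python; the data, group sizes, and seed are placeholders, not from any real study:

```python
# Minimal sketch: an analysis script that states its own test choice
# and software versions, so the Methods section can quote them directly.
import sys
import numpy as np
import scipy
from scipy import stats

rng = np.random.default_rng(seed=42)  # fixed seed for reproducibility
control = rng.normal(loc=0.0, scale=1.0, size=30)  # placeholder data
treated = rng.normal(loc=0.5, scale=1.0, size=30)  # placeholder data

# Welch's t-test (two-sided), chosen because equal variances are not assumed.
result = stats.ttest_ind(treated, control, equal_var=False)

print(f"Python {sys.version.split()[0]}, NumPy {np.__version__}, SciPy {scipy.__version__}")
print(f"Welch t = {result.statistic:.3f}, two-sided p = {result.pvalue:.4f}")
```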

7. Ethics, data, and code statements

Where relevant, include:

  • IRB or ethics approval
  • informed consent
  • preregistration
  • data availability
  • code availability

Reviewers increasingly see these as part of methods quality, not administrative extras.
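
A hypothetical example of the shape these statements can take (all wording invented for illustration): "The study was approved by the institutional ethics committee of [institution]. All participants provided written informed consent. The analysis plan was preregistered before data collection. De-identified data and analysis code are available from the corresponding author on reasonable request."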

What reviewers often flag

Use this as a pre-submission checklist.

  • "Methods are insufficiently described": missing procedural details, thresholds, or software versions.
  • "Sample selection is unclear": inclusion and exclusion logic not specified cleanly.
  • "Statistics are underdescribed": tests named without model logic, assumptions, or data handling.
  • "Reproducibility is hard to assess": code, data, and parameter reporting too thin.
  • "It is unclear what changed from prior work": prior methods cited, but current modifications not stated.

Common mistakes behind these flags

1. Hiding critical details in the supplement by default

Supplementary methods can be useful, but if the main logic of the study depends on details that appear nowhere prominent, reviewers may suspect the manuscript is trying to protect a weak design behind opacity.

2. Writing as if all readers already know the field's conventions

Nature explicitly asks authors to write so that significance and background are understandable outside the narrow specialty. The same principle applies to methods. Reviewers from adjacent areas still need a workable explanation.

3. Reporting statistics too vaguely

Saying "data were analyzed using standard methods" is not reporting. It is evasion.

4. Omitting negative procedural detail

If participants were excluded, data were filtered, outliers were removed, or preprocessing thresholds were applied, say so clearly. Hidden curation choices are exactly what reviewers worry about.

5. Reusing old methods text without adapting it to the current paper

This creates both clarity and ethics problems. If the workflow changed, the old text misleads. If it did not change, you still need to make the present paper self-contained enough to read.

How much detail is enough?

Enough detail means a knowledgeable reviewer could answer these questions:

  • could this study be repeated in principle?
  • could I explain the pipeline to a colleague without guessing missing steps?
  • do I understand the inferential boundaries of the result?

That is the standard. Not "did we mention the platform name."

How to write the statistics subsection well

This deserves special emphasis because many methods sections collapse here.

Use a template like:

  1. define the analysis population
  2. define the primary outcome
  3. state the main model or test
  4. specify adjustments, covariates, or stratification
  5. describe missing-data handling
  6. describe multiplicity handling if relevant
  7. give software and version
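
Applied to a hypothetical two-arm study, the template yields a paragraph along these lines (every value here is invented for illustration): "Analyses included all randomized participants with at least one post-baseline measurement. The primary outcome, change in symptom score from baseline to week 12, was compared between arms using a linear mixed model with treatment, visit, and their interaction as fixed effects, adjusted for baseline score and site. Missing outcomes were handled under a missing-at-random assumption within the mixed model; no imputation was performed. Secondary outcomes are reported as exploratory without multiplicity adjustment. Analyses used R version 4.3." Each sentence maps to one step of the template.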

If the paper involves machine learning, also report:

  • train/validation/test logic
  • tuning procedure
  • evaluation metrics
  • leakage prevention
  • computational environment when material
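
These items translate directly into code structure. A minimal sketch assuming scikit-learn and a synthetic dataset; the estimator, tuning grid, and split sizes are illustrative choices, not recommendations:

```python
# Minimal sketch of a leakage-safe train/validate/test workflow.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=500, random_state=0)  # synthetic data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Scaling lives inside the pipeline, so it is re-fit on training folds only
# during cross-validation; this is the leakage-prevention detail to report.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
search = GridSearchCV(pipe, {"logisticregression__C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)

# Evaluate once on the held-out test set, after tuning is finished.
auc = roc_auc_score(y_test, search.predict_proba(X_test)[:, 1])
print(f"Held-out test AUC: {auc:.3f}, best params: {search.best_params_}")
```

The Methods text should then state the same facts in prose: how the splits were made, what was tuned and how, which metric was used, and where leakage was prevented.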

A practical writing sequence

Do not write the methods section from memory at the last minute.

A better workflow:

  1. export the actual protocol, notebook, script comments, or lab record
  2. build the section from the real workflow
  3. remove nonessential clutter without removing decision-relevant detail
  4. ask a colleague outside the immediate project to read only the Methods and explain the study back to you

If they cannot, the section is still too implicit.

The methods section and desk rejection

Authors often assume methods only matter after review. That is wrong.

A weak methods section can trigger desk rejection because it makes the manuscript feel immature, undercontrolled, or difficult to assess. Even before full review, editors use the section as a proxy for rigor and seriousness.

That is why this section pairs well with the related guides on how to get published in a top journal, how to write an academic abstract, and common statistical review red flags.

Before submission, a final manuscript readiness check is a good way to catch unclear procedures, unsupported claims, and figure-method mismatches that the authors no longer see.

Verdict

A methods section survives review when it makes the study legible, not when it sounds sophisticated.

Give reviewers the design, the procedures, the definitions, the analysis logic, and the oversight details they need to test trust. Anything less is an invitation to revision.

Frequently asked questions

What should a methods section include?

A strong methods section usually includes study design, materials or participants, procedures, outcome definitions, statistical analysis, ethics or approvals where relevant, and enough operational detail for readers and reviewers to understand and assess the work.

How much detail is enough?

Enough for a knowledgeable reader to understand what was done and assess reproducibility. The exact level depends on the field and journal, but vague placeholders and hidden procedural steps are common reasons for reviewer pushback.

Can I cite prior methods instead of redescribing them?

You can cite prior methods, but the current paper still needs enough detail to be interpretable and reproducible on its own. Citation is not a substitute for clarity.

How detailed should the statistics be?

Statistical details should be explicit, not implied. Reviewers usually want to see how variables were defined, what models or tests were used, what assumptions mattered, how missing data were handled, and what software or code environment was used when relevant.

What is the most common methods-section mistake?

The biggest mistake is underreporting. Authors often assume that common techniques or standard workflows do not need full explanation, but reviewers often interpret that omission as a threat to rigor or reproducibility.

References

  1. Scientific Reports submission guidelines
  2. Nature Climate Change: Making the most of the Methods
  3. Nature Cell Biology: Methodical about Methods
  4. Nature Methods: Reporting methods for reusability
