
How to Prepare a Manuscript for Journal Submission: The 6-Dimension Checklist (2026)

Formatting checklists won't get your paper through triage. Editors screen for six things: journal fit, claim calibration, methods completeness, figure quality, citation integrity, and reporting compliance. Here is how to check each one before you submit.

Author context

Associate Professor, Clinical Medicine & Public Health. Specializes in clinical and epidemiological research publishing, with direct experience preparing manuscripts for NEJM, JAMA, BMJ, and The Lancet.

Working map

How to use this page well

These pages work best when they behave like tools, not essays. Use the quick structure first, then apply it to the exact journal and manuscript situation.

  • Use this page for: a working artifact you can actually apply to the manuscript or response package.
  • Start with: filling the template with real manuscript-specific details instead of leaving it generic.
  • Common mistake: copying the structure without tailoring the logic to the actual submission.
  • Best next step: use the artifact once, then cut anything that does not affect the decision.

Quick answer: Preparing a manuscript for journal submission is not primarily a formatting exercise. Editors screen for six dimensions: (1) journal fit, (2) claims calibrated to your study design, (3) methods complete enough to replicate, (4) figures that support the text with defensible statistics, (5) citations that are current and include no retracted papers, and (6) compliance with the reporting checklist for your study type. Formatting matters, but failing any of these six dimensions is more likely to end your submission before peer review starts.

In our pre-submission review work with manuscripts across clinical medicine, public health, and the life sciences, the most common failure is not poor formatting. It is a mismatch between what the study can actually show and what the abstract claims. Authors spend days reformatting references and hours on figure resolution while the conclusions section makes a causal claim that a cross-sectional design cannot support. The editor reading at triage sees the claim problem in the abstract. The reference style does not matter at that point.

The six dimensions below are what preparation actually means. Work through them in order. The Manusights free scan runs through similar checks automatically, but the manual version below will teach you what to look for even in journals or study types outside our coverage.

Dimension 1: Journal Fit

What to check: Does the journal publish work with your study design, in your field, at your level of novelty? Read the aims and scope on the journal's website, not the description in an aggregator. Then read the last three issues to see what they actually published versus what the stated scope claims.

What failure looks like: A clinical epidemiology study submitted to a basic science journal because the impact factor is higher. A retrospective cohort study submitted to a journal that explicitly publishes prospective cohorts only. A single-center case series submitted to a journal that states "multicenter studies preferred" in its scope notes.

How to fix it: Write one sentence describing your study: design type, population, main finding, clinical or scientific significance. Hold that sentence against the journal's aims. If you cannot name a paper from the last two volumes that resembles yours in design and scope, you are probably in the wrong place.

Scope mismatch is the most common desk-rejection trigger across scholarly publishing; Taylor & Francis Author Services and Elsevier both list it as the top reason. Editors know within 60 seconds of reading the abstract whether the paper is in the right place.

Dimension 2: Claim Calibration

What to check: Does every conclusion in your abstract and discussion match what your study design can actually support? This is not a writing exercise. It is a logic check.

Randomized controlled trials can support causal claims with appropriate caveats about generalizability. Cross-sectional studies can support associations, not causation. Retrospective cohort studies can support direction of association, not mechanism. Systematic reviews can synthesize evidence, not generate it.

What failure looks like: An abstract that uses "demonstrates" or "proves" for a retrospective study. A discussion that extrapolates a mouse model finding to human clinical practice with one sentence of hedging. A single-institution audit presented as generalizable evidence.

How to fix it: Read your abstract and underline every active verb in the conclusions. For each verb, ask: does the study design allow this claim? If the design is observational, replace "demonstrates" with "suggests" and "shows that X causes Y" with "X is associated with Y". Then check whether the discussion adds new claims not in the abstract. A discussion that escalates beyond the abstract is a specific reviewer frustration pattern.
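The verb check also lends itself to a quick automated pass before the manual read. Below is a minimal Python sketch that flags causal phrases in an abstract when the stated design is observational. The phrase list and the set of designs treated as supporting causal claims are illustrative assumptions, not a validated instrument, so treat any output as a prompt for the manual check above, not a verdict.

```python
# Illustrative sketch: flag causal phrasing in an abstract for a given study design.
# The phrase list and design set below are assumptions to calibrate for your field.

import re

CAUSAL_PHRASES = {"demonstrates", "proves", "establishes", "causes", "shows that"}
CAUSAL_DESIGNS = {"randomized controlled trial"}  # designs that can support causal claims

def flag_overclaims(abstract: str, study_design: str) -> list[str]:
    """Return causal phrases that the stated design cannot support."""
    if study_design.lower() in CAUSAL_DESIGNS:
        return []
    text = abstract.lower()
    return [p for p in sorted(CAUSAL_PHRASES)
            if re.search(r"\b" + re.escape(p) + r"\b", text)]

for phrase in flag_overclaims(
    "This cross-sectional survey demonstrates that X causes improved outcomes.",
    "cross-sectional study",
):
    print(f'"{phrase}": consider "suggests" or "is associated with" instead')
```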

The Springer Nature common rejection reasons page lists overstated conclusions as a top reason papers are returned before external review. Editors trained to catch this do so fast.

Dimension 3: Methods Completeness

What to check: Can a researcher in your field replicate your study from the methods section alone, without contacting you? That is the threshold.

For clinical studies: IRB or ethics committee approval number, trial registration number (ClinicalTrials.gov or equivalent), participant inclusion and exclusion criteria, outcome definitions, statistical analysis plan including how missing data were handled.

For laboratory studies: reagent sources and catalog numbers (or a statement that they are available on request), equipment model numbers, software version and source code availability.

For computational or data science studies: data availability statement, code repository link, software versions, random seed if applicable.

What failure looks like: A methods section that describes what was done but not how. "Statistical analysis was performed using SPSS" with no version number, no test selection rationale, and no description of how missing data were handled. An animal study with no ethical approval statement. A clinical trial that appears to be registered after enrollment started.

How to fix it: Treat the methods section as a protocol that a graduate student in your field with no knowledge of your project could follow. If it requires you to fill in gaps verbally, it is not complete. For clinical research, late or missing trial registration is not fixable at the revision stage. Ethics approval must cover the study as conducted. These are preparation items that cannot be added retroactively.

Dimension 4: Figure Quality

What to check: Do your figures (a) support the specific claims made in the text, (b) include properly defined error bars and sample sizes in the legend, (c) use the right statistical tests for the data type, and (d) meet minimum resolution requirements?

What failure looks like: A figure showing error bars with no legend definition of whether they represent standard deviation, standard error, or 95% confidence intervals. These three communicate fundamentally different things and reviewers will flag any ambiguity immediately. A bar chart used for data that should be a box-and-whisker plot because the distribution is skewed. A figure that shows a clear trend in one direction while the results text reports a non-significant p-value: this is not a problem reviewers overlook.

Resolution requirements vary: most print journals require 300 DPI for photographs and 600 DPI minimum for line art. TIFF or EPS formats are generally preferred over JPEG for line art because JPEG compression introduces artifacts that worsen at journal production sizes.
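If your figures are exported as TIFFs, the embedded resolution can be checked programmatically with Pillow. This is a minimal sketch, assuming figures live in a figures/ directory and that line-art files carry "lineart" in the filename; both are illustrative conventions, and some export pipelines omit the DPI metadata entirely.

```python
# Minimal sketch: check embedded DPI of TIFF figures against the print norms above.
# The directory name and the "lineart" filename convention are assumptions.

from pathlib import Path
from PIL import Image

MIN_DPI_PHOTO = 300
MIN_DPI_LINE_ART = 600

for path in sorted(Path("figures").glob("*.tif*")):
    with Image.open(path) as img:
        dpi = img.info.get("dpi")  # (x, y) tuple, or None if not embedded
    required = MIN_DPI_LINE_ART if "lineart" in path.stem else MIN_DPI_PHOTO
    if dpi is None:
        print(f"{path.name}: no DPI metadata embedded -- check export settings")
    elif min(dpi) < required:
        print(f"{path.name}: {dpi} DPI, below the {required} DPI minimum")
```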

How to fix it: For every figure panel, write one sentence describing what it shows and check it against the results paragraph that cites it. If you cannot match them cleanly, the figure is not doing its job. Have someone not on the paper read the legends alone without the text. If they cannot understand what is being shown and what the error bars mean, the legend is incomplete.

Dimension 5: Citation Integrity

What to check: Is your reference list current, relevant, and free of retracted papers?

Currency matters: a literature review that treats papers from 2019 as "recent" in a fast-moving field will draw reviewer comments about awareness of current work. A 2024 meta-analysis that does not cite a major 2023 systematic review in the same space is a problem the editor can spot without deep knowledge of the field.

Retracted papers are a specific credibility risk. The Retraction Watch database has over 50,000 entries. Citing a paper that was retracted before your submission date suggests the literature review was not thorough. Most major reference managers now flag retracted papers, but the flag is not always visible in the default workflow.

What failure looks like: A reference list that stops at 2021 in a field with active recent publications. Citing the original version of a paper that has since been substantially corrected. Citing a paper as supporting your claim when the abstract of that paper does not support it (reviewers check).

How to fix it: Run your reference list through Retraction Watch or enable the Zotero retraction check before exporting. For clinical fields, also verify that any guideline citations are the current version: clinical guidelines are updated on irregular cycles and an outdated guideline citation is a specific problem for practice-relevant research.
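If you maintain DOIs for your references, the Retraction Watch cross-check can also be scripted against a local export of the database, which is distributed as a CSV via Crossref. A hedged sketch follows; the file name and the OriginalPaperDOI column name are assumptions to confirm against the export you actually download.

```python
# Hedged sketch: cross-check reference DOIs against a local Retraction Watch export.
# Confirm the CSV file name and column names against your downloaded copy.

import csv

def load_retracted_dois(csv_path: str) -> set[str]:
    with open(csv_path, newline="", encoding="utf-8") as fh:
        return {row["OriginalPaperDOI"].strip().lower()
                for row in csv.DictReader(fh)
                if row.get("OriginalPaperDOI")}

def check_references(reference_dois: list[str], retracted: set[str]) -> list[str]:
    return [doi for doi in reference_dois if doi.strip().lower() in retracted]

retracted = load_retracted_dois("retraction_watch.csv")
for doi in check_references(["10.1000/example.doi"], retracted):  # your DOIs here
    print(f"Reference {doi} appears in the Retraction Watch database -- verify before citing")
```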

Dimension 6: Reporting Checklist Compliance

What to check: Does your study type require a reporting checklist? If yes, have you completed it and attached it to your submission?

This is not optional at most major journals. Missing or incomplete reporting checklists are a common reason for desk returns, not rejections, but a return-to-author adds weeks to your timeline.

Required checklist by study type (all available at equator-network.org):

  • Randomized controlled trial: CONSORT + flow diagram
  • Observational study (cohort, case-control, cross-sectional): STROBE
  • Systematic review or meta-analysis: PRISMA
  • Animal research: ARRIVE 2.0
  • Diagnostic accuracy study: STARD
  • Clinical trial protocol: SPIRIT

What failure looks like: A completed CONSORT checklist where roughly half the items read "see Methods section" without a page or line number. A PRISMA checklist with entire sections left empty. A CONSORT flow diagram that accounts for a different number of participants than the results text. Editors at journals that enforce these guidelines review them specifically before sending to peer review.

How to fix it: Download the checklist for your study type from the source URL above. Complete it item by item, adding page and line numbers. If an item is not applicable, write N/A and a one-line explanation. A checklist that shows genuine engagement with each item is faster to review than one that defers to the manuscript body for every response.
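If you keep the completed checklist as a simple two-column spreadsheet (item, response) while drafting, a short script can flag responses that defer to the manuscript without a page or line number. The CSV layout here is a hypothetical working convention, not part of CONSORT or PRISMA.

```python
# Illustrative sketch: audit a checklist kept as a two-column CSV (item, response)
# for responses that lack a page or line number. The file layout is an assumption.

import csv
import re

HAS_LOCATION = re.compile(r"\b(?:page|p\.?|lines?)\s*\d+", re.IGNORECASE)

with open("consort_checklist.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):  # expects columns: item, response
        response = row["response"].strip()
        if response.upper().startswith("N/A"):
            continue  # N/A with a one-line explanation is acceptable
        if not HAS_LOCATION.search(response):
            print(f'Item "{row["item"]}": no page/line number in "{response}"')
```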

Reporting checklist compliance is one area where the COPE guidelines are explicit: journals are expected to enforce relevant standards, and authors are expected to know what applies to their study type.

Pre-Submission Checklist

Work through this before uploading. Each item maps to a rejection risk.

Journal Fit

  • [ ] Confirmed journal scope covers your study design and field
  • [ ] Read at least 3 recent issues; identified 2+ published papers with similar design
  • [ ] Word count, abstract structure, and article type match the journal's instructions

Claim Calibration

  • [ ] Abstract conclusions use language consistent with the study design (causal language only for RCTs)
  • [ ] Discussion does not escalate claims beyond what the abstract states
  • [ ] All findings are presented in relation to pre-specified outcomes (for clinical research)

Methods

  • [ ] Ethics or IRB approval number included; approval predates data collection
  • [ ] Clinical trial registration number included; registration predates enrollment
  • [ ] Missing data handling described
  • [ ] Software versions, reagent sources, or code availability stated as appropriate

Figures

  • [ ] Error bars defined in every legend (SD, SEM, or 95% CI stated explicitly)
  • [ ] Statistical test in legend consistent with results text
  • [ ] Figures meet minimum resolution requirements for the journal
  • [ ] Every figure is cited in the results text in the order it appears

Citations

  • [ ] Reference list checked against Retraction Watch
  • [ ] All cited guideline documents are current versions
  • [ ] No papers cited as supporting a claim where the cited paper does not actually support it

Reporting Compliance

  • [ ] Correct reporting checklist completed with page/line numbers
  • [ ] Flow diagram included for clinical trials (CONSORT)
  • [ ] Ethics statement present for human and animal research
  • [ ] Data availability statement included

After working through this checklist, run the free Manusights scan to catch anything you may have missed. The scan checks claim calibration, citation flags, and figure-text consistency automatically.

Submit If / Think Twice If

Submit if:

  • The study design supports the conclusions in the abstract without additional hedging
  • Ethics approval and trial registration (if applicable) were obtained before data collection
  • The reporting checklist is complete with specific page/line numbers throughout
  • You can name two recently published papers in the target journal that share your design and scope
  • Every figure legend defines its error bars and the statistics are consistent with the results text

Think twice if:

  • Your IRB approval was obtained after data collection started: this is not fixable with a note in the methods
  • Your trial was registered after enrollment began: prospective registration is a requirement, not a preference, at most clinical journals
  • The discussion adds causal claims that the abstract does not make: this is a reviewers-will-notice problem, not a reviewers-might-notice problem
  • Your reference list has not been updated since you started writing the first draft: if the paper took 12 months to write, that literature review may now be incomplete
  • You completed the CONSORT or PRISMA checklist by writing "see Methods" for more than three items: this is the pattern that triggers a return-to-author


Frequently Asked Questions

What is the most common reason manuscripts are rejected at the desk?

Scope mismatch is the most common trigger, but it is usually the most preventable. The second most common is overclaiming: conclusions that the study design cannot support. Both are fixable before submission if you check them deliberately.

Do I need a reporting checklist like CONSORT or PRISMA?

If you ran a randomized trial, you need CONSORT. If you ran an observational study, you need STROBE. If you did a systematic review or meta-analysis, you need PRISMA. If you used animal models, you need ARRIVE. Most journals in these areas now require the completed checklist at submission, not just compliance with the guidelines. Missing it can trigger a desk return.

How far in advance do I need to prepare a manuscript?

Most preparation steps can be done in a week, but two things cannot: IRB or ethics approval and clinical trial registration. Both must happen before data collection begins. If either is missing at submission, no amount of revision will fix it.

What should I check in my figures before submission?

Three things: resolution (300 DPI minimum for print, 600 DPI for line art), that error bars are defined in the legend (standard deviation vs. standard error vs. 95% CI), and that all statistical tests reported in the text are consistent with the figure data. A figure that shows a trend but reports a non-significant p-value in the legend is a reviewer red flag.

How do I check for retracted papers in my reference list?

Search each reference in the Retraction Watch database or use the Retraction Watch integration in reference managers like Zotero. Citing a retracted paper is not automatically disqualifying if the citation predates the retraction, but including one that was retracted before your submission date is a credibility problem editors notice.

References

  1. CONSORT Statement: reporting standard for randomized controlled trials
  2. STROBE Statement: reporting standard for observational studies
  3. PRISMA Statement: reporting standard for systematic reviews and meta-analyses
  4. ARRIVE Guidelines: reporting standard for animal research
  5. COPE (Committee on Publication Ethics): ethical submission and authorship standards
  6. Retraction Watch: database of retracted papers
  7. Springer Nature: Common Rejection Reasons
  8. Elsevier: Paper Rejection Common Reasons
  9. Taylor & Francis: 5 Top Reasons for Desk Rejection
  10. NIH PMC: Manuscript Rejection Causes and Remedies
  11. Editage: Manuscript Submission Checklist


