
How to Improve a Manuscript Before Submission: The 6-Dimension Method (2026)

Most manuscript improvement advice is too generic to act on. This guide maps improvement to the six dimensions editors actually use during triage, with named failure patterns and a one-pass fix protocol for each.

Author context

Associate Professor, Clinical Medicine & Public Health. Specializes in clinical and epidemiological research publishing, with direct experience preparing manuscripts for NEJM, JAMA, BMJ, and The Lancet.

Working map

How to use this page well

These pages work best when they behave like tools, not essays. Use the quick structure first, then apply it to the exact journal and manuscript situation.

  • Use this page for: getting the structure, tone, and decision logic right before you send anything out.
  • Most important move: make the reviewer-facing or editor-facing ask obvious early rather than burying it in prose.
  • Common mistake: turning a practical page into a long explanation instead of a working template or checklist.
  • Next step: use the page as a tool, then adjust it to the exact manuscript and journal situation.

Quick answer: Improving a manuscript before submission means working through six specific dimensions: journal fit, claim calibration, methods completeness, figure quality, citation integrity, and reporting compliance. Generic advice like "tighten your writing" or "check your references" does not prevent rejection because it does not address the actual triage criteria editors use. This guide maps each dimension to what strong and weak look like, with a named failure pattern and a one-pass fix for each.

Run a free readiness scan at Manusights to get a rapid assessment of where your manuscript stands across all six dimensions before you invest hours in revisions.

In our pre-submission review work with clinical and epidemiological manuscripts targeting NEJM, JAMA, BMJ, and The Lancet, the dimension that fails most consistently is not writing quality. It is claim calibration: conclusions that overshoot what the study design can actually support. We see this in roughly 60% of the manuscripts we review, and it is the most common trigger for both desk rejection and reviewer requests for major revision at top clinical journals. The good news is that it is also the most straightforward to fix once you know what to look for.

The six-dimension method below addresses each failure mode in sequence, starting with the one that eliminates the most manuscripts before anyone reads the methods section.

The six dimensions at a glance

Dimension            | What editors look for                      | Most common failure
Journal fit          | Does this paper belong here?               | Wrong tier or wrong scope
Claim calibration    | Do conclusions match the data?             | Conclusions outrun study design
Methods completeness | Can this be replicated?                    | Missing parameters, no sample size rationale
Figure quality       | Do figures support the claims?             | Panels misaligned with abstract claims
Citation integrity   | Are references real and relevant?          | Stale, fabricated, or misused citations
Compliance           | Does the paper follow reporting standards? | Missing CONSORT/STROBE/PRISMA checklist

Dimension 1: Journal fit

What strong looks like

The target journal has published at least three papers in the last two years with a similar population, study design, and question type. The scope statement on the journal website uses language that maps directly to your research question. If the journal uses tiered impact criteria (Nature, Cell, The Lancet), the paper's finding is not just incremental: it opens a new mechanistic question or changes current clinical practice.

The failure pattern: "adjacent fit"

Adjacent fit is the most common journal targeting error. The paper is clearly scientific, clearly competent, and clearly wrong for this journal. The research question falls within the broad subject area (oncology, cardiology, immunology) but not within the editorial priorities. Editors describe this as "the paper is fine, but it's not for us." It triggers desk rejection within 48 hours at most top journals.

The specific version I see in clinical manuscripts: submitting a single-center retrospective cohort study to a journal that only publishes multi-center prospective trials in that clinical area, or sending a disease mechanism paper to a journal that publishes exclusively intervention trials.

How to fix it in one pass

Before revising a word of the manuscript, spend 20 minutes on this check: go to the journal website and skim the tables of contents for the last 12 months. Filter to your study type. Count how many papers match your design and question type. If you find fewer than two, the journal is probably the wrong target regardless of its impact factor. Redirecting to the right journal requires no manuscript changes and saves weeks.
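If you want a rough pre-filter before the manual read, publication counts can be pulled from the Europe PMC REST API. The sketch below is a heuristic only: the field names and range syntax should be verified against the Europe PMC search documentation, and the design terms are placeholders for your own study type. It narrows the candidate list; it does not replace reading the matching papers.

```python
# Rough journal-fit pre-filter using the Europe PMC REST API (free, no key).
# Assumptions: the JOURNAL/PUB_YEAR field names and range syntax should be
# checked against the Europe PMC search docs; design terms are placeholders.
import requests

def count_matching_papers(journal: str, design_terms: str, year_from: int) -> int:
    """Count indexed papers in `journal` matching `design_terms` since `year_from`."""
    query = f'JOURNAL:"{journal}" AND ({design_terms}) AND PUB_YEAR:[{year_from} TO 3000]'
    resp = requests.get(
        "https://www.ebi.ac.uk/europepmc/webservices/rest/search",
        params={"query": query, "format": "json", "pageSize": 1},
        timeout=30,
    )
    resp.raise_for_status()
    return int(resp.json()["hitCount"])

# Example: prospective cohort papers in a candidate journal since 2025.
n = count_matching_papers("BMJ", '"prospective cohort"', 2025)
print(f"{n} matching papers (fewer than 2 suggests a scope mismatch)")
```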

Dimension 2: Claim calibration

What strong looks like

Every conclusion sentence is traceable to a specific data element in the results. The language of the conclusion accurately reflects the evidence level: observational studies use "associated with," not "causes"; pilot studies use "warrants further investigation," not "demonstrates efficacy"; single-center studies qualify their generalizability. The abstract, discussion, and conclusion sections make the same claim in consistent language.

The failure pattern: "design-conclusion gap"

The design-conclusion gap is the most consistently cited problem in peer reviewer comments at high-impact journals. The study uses an observational design, a convenience sample, or a small pilot, and the conclusion reads as if the findings generalize to all patients, all contexts, or all mechanisms. The specific trigger for editors: a prospective cohort study concluding that an intervention "reduces mortality" when no experimental assignment was present. A causal verb in an observational conclusion is a fast path to desk rejection at JAMA, NEJM, and BMJ.

We have found this error in manuscripts from otherwise rigorous research groups. The problem is usually not carelessness. Authors have been working on the study for years and the causal interpretation feels obvious. But the evidence hierarchy does not care about what feels obvious.

How to fix it in one pass

Read every sentence in the Discussion and Conclusion that makes a claim about what the findings mean. For each sentence, ask: what study design would I need to justify that verb? If the verb is "demonstrates," "proves," "shows that X causes Y," or "is effective," you need an RCT with adequate power and pre-registration. If you have an observational study, replace those verbs with "is associated with," "is consistent with," or "suggests." This is not hedging for politeness; it is accurate scientific communication.
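The verb audit can be partly mechanized. Below is a minimal sketch that flags sentences containing common causal verbs in an exported Discussion or Conclusion section; the pattern list and the file name are assumptions, and a flagged sentence is not automatically wrong — it just needs a design check by a human.

```python
# Flag sentences with causal verbs for manual review. A crude lexical pass:
# the pattern list is an assumption to extend for your field.
import re

CAUSAL_PATTERNS = [
    r"\bdemonstrates?\b", r"\bproves?\b", r"\bcauses?\b",
    r"\bis effective\b", r"\breduces?\b", r"\bleads? to\b",
]

def flag_causal_language(text: str) -> list[str]:
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):  # rough sentence split
        if any(re.search(p, sentence, re.IGNORECASE) for p in CAUSAL_PATTERNS):
            flagged.append(sentence.strip())
    return flagged

# "discussion.txt" is a hypothetical plain-text export of the Discussion section.
for s in flag_causal_language(open("discussion.txt", encoding="utf-8").read()):
    print("REVIEW:", s)
```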

Dimension 3: Methods completeness

What strong looks like

An independent researcher can reproduce your study using only what you wrote, without contacting you. The methods section specifies: the study population and all inclusion/exclusion criteria, the primary outcome and how it was measured, the statistical analysis plan including software version and the covariates in each model, the sample size calculation with the assumed effect size and power, and the randomization method if the study is an RCT.

The failure pattern: "soft methods"

Soft methods means the procedures are described in general terms without enough detail for replication. Common specific gaps: "statistical analysis was performed using SPSS" (which version?), "patients with significant comorbidities were excluded" (which ones? how defined?), "we used logistic regression" (what were the covariates? how did you handle missing data?), "a sample of 120 patients was enrolled" (what power did that give you? was this a convenience sample?).

Soft methods manuscripts arrive at peer review with a predictable fate: reviewers request a major revision asking for exactly these details. The revision costs two to four months. The details were there all along; the authors just did not write them down.

How to fix it in one pass

Run the replication test on your own methods section. Give it to a colleague who works in a related area but was not involved in the study. Ask them to highlight every place they would need to contact you or make an assumption to reproduce the study. Every highlighted sentence is a gap. For clinical trials, download the CONSORT checklist and verify your methods address every required item.
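A keyword scan can catch some soft-methods gaps before you hand the section to a colleague. The sketch below is a blunt heuristic, assuming a plain-text export of the methods section; the element-to-pattern mapping is a guess at typical phrasings and will miss paraphrases, so treat a clean pass as necessary, not sufficient.

```python
# Heuristic "soft methods" pre-check: flags required reporting elements whose
# typical phrasings never appear in the methods text. The keyword patterns
# are assumptions -- the colleague replication test remains the real check.
import re

REQUIRED_ELEMENTS = {
    "software version": [r"version\s+\d", r"\bv\d+\.\d+"],
    "sample size rationale": [r"\bpower\b", r"sample size (?:was )?calculat"],
    "missing data handling": [r"missing data", r"imputation", r"complete.case"],
    "model covariates": [r"adjusted for", r"covariates"],
}

def find_gaps(methods_text: str) -> list[str]:
    return [
        element
        for element, patterns in REQUIRED_ELEMENTS.items()
        if not any(re.search(p, methods_text, re.IGNORECASE) for p in patterns)
    ]

# "methods.txt" is a hypothetical plain-text export of the Methods section.
for gap in find_gaps(open("methods.txt", encoding="utf-8").read()):
    print("Possibly missing:", gap)
```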

Dimension 4: Figure quality

What strong looks like

Each figure panel directly shows the data that supports a claim made in the abstract or a major result statement. Axes are labeled with units. For experiments with biological replicates, n is specified per group and a measure of dispersion (SD, SEM, or 95% CI) is shown. For clinical outcome data, Kaplan-Meier curves show the number at risk at each time point. The figure resolution meets the journal's technical specifications (usually 300 DPI minimum for print).

The failure pattern: "panels without anchors"

Panels without anchors means the figure shows data but no sentence in the text directly interprets it. This happens most in multi-panel figures where panels C and D were added during revision but the results section was never updated to describe them. Editors and reviewers notice when figures and text do not align. The result is either a desk rejection ("the paper is not ready") or a reviewer comment like: "Figures 3C and 3D are not mentioned in the results. What do these show?"

A related failure: the figure shows a statistically significant difference, but the effect size is too small to be clinically meaningful, and there is no mention of this limitation in the discussion. Reviewers at clinical journals increasingly call this out directly.

How to fix it in one pass

For every figure panel, find the sentence in the results section that directly interprets it. If you cannot find one, either write the sentence or remove the panel. For every claim in the abstract about experimental results, find the figure that shows the data. If the abstract claims a difference that no figure demonstrates, add the figure or revise the abstract.
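Part of this cross-check is scriptable. The sketch below compares panel labels mentioned in the figure legends against those mentioned in the Results text; the regex assumes "Figure 3C"-style references and the file names are hypothetical exports, so compound mentions like "Figures 3C and 3D" will need adapting.

```python
# Find figure panels that appear in the legends but are never interpreted in
# the Results text. Assumes "Figure 3C" / "Fig. 3c" style references; compound
# mentions ("Figures 3C and 3D") need a smarter pattern. File names are
# hypothetical plain-text exports.
import re

LABEL = re.compile(r"fig(?:ure)?s?\.?\s*(\d+[A-Za-z]?)", re.IGNORECASE)

def panel_labels(text: str) -> set[str]:
    return {m.group(1).upper() for m in LABEL.finditer(text)}

legends = open("figure_legends.txt", encoding="utf-8").read()
results = open("results.txt", encoding="utf-8").read()

for panel in sorted(panel_labels(legends) - panel_labels(results)):
    print(f"Figure {panel} is in the legends but never interpreted in Results")
```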

Dimension 5: Citation integrity

What strong looks like

Every citation in the methods section cites the original validation paper for that method, not a paper that merely used it. Every claim in the introduction about background epidemiology or prior findings cites a primary source, not a review that cited a primary source. The reference list is current: for rapidly evolving fields, reviews older than five years in the introduction are a signal that the authors have not engaged with recent literature.

The failure pattern: "citation drift"

Citation drift means the manuscript cites papers that do not actually support the claim they are cited for. This happens in three ways. First, telephone citations: paper A cites paper B, which cites paper C, which actually made the original claim, but the authors cited paper A because it was the most recent. The original evidence is often weaker than the chain of citations implies. Second, misattributed statistics: "30% of patients with X experience Y (Ref. 12)" where reference 12 is a different population, a different definition of X, or a different time period. Third, fabricated citations: this is now detectable. Tools including Retraction Watch and citation databases flag papers that cite retracted work or use references that do not exist.

Journals increasingly use automated citation verification at submission. A manuscript with even one unfindable reference triggers manual review and delays processing.

How to fix it in one pass

For every claim that uses a specific statistic, date, or finding, verify that the cited paper actually reports that finding. Open the cited paper, search for the number or claim, and confirm it. This takes longer than any other step in this guide but it is not optional. For clinical manuscripts, also verify that no cited paper has been retracted since you wrote the introduction.
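One part of this step can be automated: checking that the references exist at all. The sketch below resolves each DOI through the public Crossref API and prints the registered title for manual comparison; the DOI list file is a hypothetical export from a reference manager, and retraction status still needs a separate check against Retraction Watch.

```python
# Resolve each cited DOI via the public Crossref API and print the registered
# title for manual comparison. An unresolved DOI is a red flag for a typo or
# a fabricated reference. "cited_dois.txt" (one DOI per line) is a
# hypothetical export from your reference manager.
import requests

def check_doi(doi: str) -> None:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    if resp.status_code == 404:
        print(f"UNRESOLVED: {doi}")
        return
    resp.raise_for_status()
    work = resp.json()["message"]
    title = (work.get("title") or ["<no title>"])[0]
    print(f"OK: {doi} -> {title}")

with open("cited_dois.txt", encoding="utf-8") as f:
    for line in f:
        if line.strip():
            check_doi(line.strip())
```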

Dimension 6: Compliance with reporting guidelines

What strong looks like

The manuscript includes a completed reporting checklist from the appropriate guideline as supplementary material. Each item on the checklist maps to a page and line number in the manuscript. The authors' statement in the cover letter notes which guideline was followed.

The main guidelines by study type:

  • Clinical trials: CONSORT
  • Observational studies (cohort, case-control, cross-sectional): STROBE
  • Systematic reviews and meta-analyses: PRISMA
  • Animal studies: ARRIVE

COPE guidelines apply to authorship declarations and conflict of interest reporting for all study types.

The failure pattern: "checklist compliance without checklist depth"

Many authors include a compliance checklist but complete it superficially. The CONSORT checklist item for "allocation concealment" gets "yes" checked even though the paper says only "patients were randomly assigned" without specifying whether allocation was concealed from the clinician or the outcome assessor. The PRISMA checklist item for "search strategy" gets "yes" checked even though only the final search string is reported, without the databases searched or the date the search was run.

Reviewers and editors with methodology expertise check the checklist items against the manuscript. A checklist that says "yes" to items the manuscript does not actually address is worse than no checklist, because it signals that the authors are not familiar with what the items require.

How to fix it in one pass

Download the checklist for your study type and fill it out by finding the specific page and line number in your manuscript that addresses each item. If you cannot find a page and line number, the information is missing from the manuscript. Add it. When every item has a real location in the manuscript, submit the completed checklist as supplementary material.
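If you keep the checklist as a simple spreadsheet, the real-location rule is easy to enforce mechanically. The sketch below assumes a CSV with "item" and "location" columns, which is an invented convention, not part of any guideline; it simply refuses to accept "yes" where a page and line number should be.

```python
# Enforce the page-and-line rule on a reporting checklist kept as a CSV with
# "item" and "location" columns (an assumed layout, not part of CONSORT or
# any other guideline). A bare "yes" is exactly the failure pattern above.
import csv

with open("reporting_checklist.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        location = (row.get("location") or "").strip()
        if not location or location.lower() in {"yes", "y", "n/a", "tbd"}:
            print(f"No manuscript location for item: {row['item']}")
```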

Before-you-submit checklist

Run the Manusights readiness scan to get an automated assessment across all six dimensions. Then verify these manually:

Journal fit

  • [ ] Found 3+ papers published in this journal in the last 24 months with a similar design and question type
  • [ ] Scope statement on journal website directly matches my research question
  • [ ] If a high-impact journal: confirmed the finding opens a new question or changes practice

Claim calibration

  • [ ] Every conclusion sentence uses language appropriate to the study design
  • [ ] No causal verbs in observational studies
  • [ ] Abstract, discussion, and conclusion make the same claim in consistent language

Methods completeness

  • [ ] A colleague not involved in the study can read the methods section and reproduce the study
  • [ ] Sample size calculation is present with effect size and power assumptions
  • [ ] Software, version, and statistical model covariates are specified

Figure quality

  • [ ] Every figure panel has a corresponding sentence in the results section
  • [ ] Every abstract claim has a supporting figure
  • [ ] n per group and dispersion measure are shown for all experimental data

Citation integrity

  • [ ] Key statistics and findings verified against the text of the cited papers
  • [ ] No cited paper has been retracted (checked via Retraction Watch)
  • [ ] Methods citations point to the original validation papers, not papers that merely used the method

Compliance

  • [ ] Reporting guideline identified (CONSORT, STROBE, PRISMA, ARRIVE as applicable)
  • [ ] Completed checklist with page and line numbers ready for supplementary submission
  • [ ] Authorship and COI statements complete per COPE guidelines

Submit if / Think twice if

Submit if:

  • All six dimensions pass the checks above
  • The free readiness scan shows low desk-reject risk
  • A knowledgeable colleague outside the study team has read the methods and found no gaps

Think twice if:

  • You have not confirmed at least three papers with a similar design published in the target journal recently (strong signal of scope mismatch, not just bad luck)
  • The conclusion section still uses causal language and the study is observational (reviewers at top journals will not overlook this)
  • The methods section has items that require explanation beyond what is written (the explanation belongs in the manuscript, not in a cover letter or author response)
  • You have not checked citation accuracy for at least the key statistics driving your main claims (citation errors in high-visibility spots are detectable and damage editorial trust)
  • The reporting checklist has items marked "yes" that you cannot immediately find in the manuscript


FAQ

What is the most common reason manuscripts are rejected before peer review?

Targeting the wrong journal tier is the single most preventable cause of desk rejection. Editors estimate that 30-50% of what they desk-reject could have been peer-reviewed at a different journal. The second most common cause is scope mismatch: the paper answers a question the journal does not prioritize, even if the science is sound.

How do I know if my manuscript conclusions are too strong?

Compare each conclusion sentence against your study design. If the conclusion uses language suggesting general applicability ("X causes Y," "X is effective") but the study is a single-center observational study or a small RCT, the language is outrunning the design. Reviewers flag this consistently. Rephrase to reflect the actual evidence level: "These findings suggest" rather than "This demonstrates."

What makes a methods section adequate for replication?

A methods section passes the replication test if an independent researcher could reproduce your study using only what you wrote, without contacting you. Common gaps: software version numbers missing, randomization method not specified, inclusion/exclusion criteria incomplete, statistical model covariates not listed, or sample size justification absent.

How do I check if my figures are publication-ready?

Run three checks: (1) Do the figure panels directly show the data that supports the main claim in the abstract? (2) Are the axes labeled with units? (3) For biological experiments, is n specified per group and is a measure of dispersion shown? Many journals now require individual data points for small-sample experiments, not just summary bars.

Which reporting guidelines apply to my paper?

Clinical trials: CONSORT. Observational studies (cohort, case-control, cross-sectional): STROBE. Systematic reviews and meta-analyses: PRISMA. Animal studies: ARRIVE. Diagnostic accuracy studies: STARD. Qualitative research: COREQ. Check the EQUATOR Network (equator-network.org) for the full list and downloadable checklists.

Ready to check your manuscript now? Upload at Manusights for a free readiness scan. You will get a score, a desk-reject risk signal, and the top issues in your manuscript in about 60 seconds.

References

  1. CONSORT Statement: http://www.consort-statement.org
  2. STROBE Statement: https://www.strobe-statement.org
  3. PRISMA Statement: https://www.prisma-statement.org
  4. ARRIVE Guidelines: https://arriveguidelines.org
  5. COPE Guidelines: https://publicationethics.org
  6. Retraction Watch: https://retractionwatch.com
  7. EQUATOR Network (reporting guidelines hub): https://www.equator-network.org
