The Manusights Method

Presubmission is a discipline, not a category of help.

Eight jobs sit between draft-done and submit-clicked. Most authors do them in the wrong order, under deadline, and learn the right order only after a desk rejection. This page is the order, the standards, and the opinions, written down. The unit of work is the submission, not the document. This framework was co-developed by the 35+ active peer reviewers whose real peer reviews trained the Manusights engine.

The method in seven lines

  1. Pick the right journal, then re-pick after the first peer review.
  2. Anticipate the peer review before you submit, not after.
  3. Verify every reference yourself. Hallucinations are everyone’s problem now.
  4. Make methods defensible before you make sentences pretty.
  5. Format to the journal you are sending to, not the one you sent to last time.
  6. Polish after the science is settled, not before.
  7. Rewrite the cover letter for every target journal. Treat it as a separate manuscript.

Why this is a method

Most AI tools for academic authors are point solutions: a grammar checker, a citation finder, a lit search engine, a plagiarism scanner. Each is useful. None of them tells you what to do, or in what order, or what good looks like. The presubmission stage of manuscript preparation is treated as a category of help, not as a job with a sequence.

Elsevier’s 2025 researcher survey (3,234 researchers, 113 countries) found that 58 percent of researchers now use AI in their research, up from 37 percent the year before; 58 percent also say AI is saving them time today. The adoption is here. The discipline is not. Only 22 percent of those same researchers report that they trust the AI tools they are already using.

This is the gap. Authors arrive at submission with a polished manuscript that does not fit the journal, or a strong manuscript with three hallucinated citations, or a perfectly formatted manuscript that has not anticipated the obvious reviewer objection. The work is real, the tools exist, but the order is wrong.

The method below is the order. The 35+ peer reviewers who co-developed the Manusights engine read manuscripts at top-tier journals every week. The patterns repeat. The patterns are listed below. Where Manusights does one of these jobs directly, the link is inline. Where another tool does it well, we will say so.

The eight jobs

01

Pick the right journal

What good looks like.
Three named target journals, ranked. Each has a one-sentence reason ("the editorial scope statement explicitly names this method", "the IF + acceptance rate matches the field", "the cascade option is realistic on rejection"). One desk-rejection counterfactual planned for each candidate.
What fails.
Submitting to the highest-IF journal in the field because it is the highest-IF journal in the field. Most desk rejections are scope or fit mismatches, not bad science.
Why this matters.
A survey of pediatric academic faculty (PMC9256847, 2020) found 80 percent submit a paper to three or more journals before acceptance, with a mean of 3.7 submissions per accepted manuscript. 30 to 85 percent of those submissions are desk-rejected without peer review; the top journals desk-reject 75 to 92 percent. Each rejection cycle costs one to two weeks of decision time and a full reformatting pass before resubmission. Picking the wrong venue first is the most expensive mistake in presubmission.

Run the Journal Fit check

02

Anticipate the peer review

What good looks like.
For every section, three things written down: the strongest claim you make, the specific evidence that supports it, the most likely reviewer objection. Then evidence patched into the manuscript before submission, not after Reviewer 2.
What fails.
Treating peer review as something that happens to you. Reviewer comments for a given journal in a given field are predictable. You can write them yourself, six weeks earlier.
Why this matters.
91 percent of authors report their last paper was improved by peer review. The improvement comes from reviewers asking questions the author had not thought through. Asking those questions before submission is the same work, on a different timeline, with the reject-resubmit cycle removed.

Run the presubmission scan

03

Verify every reference exists

What good looks like.
Every DOI in the bibliography resolves. No retractions in the cited literature. Self-citation rate matches what is normal for the field. AI-suggested references manually checked against PubMed or the journal source.
What fails.
Trusting an LLM to find your citations. ChatGPT-4o fabricates roughly one in five citations on academic tasks. In 2026, The Lancet flagged a rise in fraudulent or fabricated citations in published manuscripts, reaching 1 in 277 papers.
Why this matters.
A single hallucinated DOI in a manuscript is the kind of thing that destroys a reviewer's patience in the first ten minutes. Reference integrity is no longer a copyediting concern. It is the first proxy a reviewer uses for whether to trust the rest of the work.

Run the Reference Integrity check
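The DOI-resolution check above is automatable with nothing beyond the standard library. A minimal sketch, assuming a plain-text bibliography; the `extract_dois` and `doi_resolves` helpers and the regex are illustrative, not the Manusights implementation, and the regex is a pragmatic approximation of DOI syntax rather than the full grammar:

```python
import re
import urllib.error
import urllib.request

# DOIs start with "10.", a registrant code, a slash, and a suffix.
# This pattern is a pragmatic approximation, not the full DOI grammar.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def extract_dois(bibliography: str) -> list[str]:
    """Pull every DOI-shaped string out of a plain-text bibliography."""
    return [m.rstrip(".,;") for m in DOI_PATTERN.findall(bibliography)]

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """True if https://doi.org/<doi> answers with a redirect or a page.

    A 404 from the resolver is the classic hallucination signature.
    """
    req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except urllib.error.HTTPError as err:
        return err.code < 400
    except urllib.error.URLError:
        return False  # network failure: retry later, conclude nothing

bib = "Smith et al. 2021. J Exampl Res. doi:10.1000/xyz123."
print(extract_dois(bib))  # ['10.1000/xyz123']
```

Every extracted DOI that fails to resolve is a manual-check item, not an automatic deletion: the resolver can be down, and a real paper can carry a typo'd DOI.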

04

Make the methods defensible

What good looks like.
Sample size justified. Statistical test choices stated and matched to the data type. Controls reported per claim. Reporting guideline followed for the study type (CONSORT for trials, PRISMA for systematic reviews, STROBE for observational, ARRIVE for animal work, MIBBI for the rest).
What fails.
Methods sections that say "as previously described" and stop there. Stats sections that say "p < 0.05" without specifying what test, what comparison, or what correction.
Why this matters.
The EQUATOR Network catalogues 500+ reporting guidelines. Most journals require them. Reviewers reach for them in the first read. A manuscript that fails its reporting guideline at the methods section is rarely saved by clarity in the results.

Run the Stats Audit
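For the common two-group comparison, the sample-size justification above is a one-formula computation. A minimal sketch using the standard normal-approximation formula for two proportions; the 30-percent-versus-50-percent effect size is illustrative, and `two_proportion_n` is a hypothetical helper, not a substitute for the power analysis your study design actually requires:

```python
import math
from statistics import NormalDist

def two_proportion_n(p1: float, p2: float,
                     alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size to detect p1 vs p2 (normal approximation)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_b = z.inv_cdf(power)           # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Illustrative effect: 30 percent vs 50 percent response rate,
# alpha = 0.05 two-sided, 80 percent power.
print(two_proportion_n(0.30, 0.50))
```

The point is not this particular formula; it is that "sample size justified" means the alpha, power, and expected effect size are stated and reproducible, so a reviewer can recompute the number.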

05

Format to the journal

What good looks like.
The journal's instructions to authors followed line by line. Word count under the cap. Figure captions in the journal's format. References in the journal's style. Title page in the journal's layout.
What fails.
Submitting in the previous target journal's format. Word counts that ignore the figure-and-table limits. Reference style off by a punctuation choice the editorial-office staff will fix or flag.
Why this matters.
This is the largest documented time leak in the field. A PLOS One survey of 372 researchers across 41 countries (Hartley et al. 2019) found a median of 14 hours per manuscript spent on formatting, 52 hours per researcher per year, and 88 percent of authors dissatisfied with the process. A separate Stanford survey of 203 authors (Sinha et al. 2019) found reformatting delays submission by more than two weeks for most manuscripts, and up to three months for 20 percent. Treat formatting as a trust signal you give the editor before sentence one; treat it also as the single largest hour-cost in your year.

06

Polish the language

What good looks like.
Abstract structure follows Problem → Gap → Method → Finding → Impact. Sentence length varies. The first sentence of each paragraph carries the paragraph's argument. Acronyms defined once.
What fails.
Treating language polish as the first job instead of the fifth. Submitting an unedited draft to a copyediting service before the science is reviewed.
Why this matters.
Most AI tools in this category are language polishers. The work is real, but it is the cheapest of the eight jobs. Polish before fit, and you have polished a manuscript that the journal will not read. Polish after the science is settled, and the polish is an investment that compounds.

07

Write the cover letter

What good looks like.
Three to five sentences. Names the editor or section editor when known. States the contribution in the journal's own framing. Explains why this journal specifically. Lists any companion submissions, ethics declarations, or prior reviewer comments transparently. Rewritten for every target journal.
What fails.
A cover letter that re-states the abstract. A cover letter that begins "We are pleased to submit". A cover letter that fights the journal scope statement rather than aligning with it. A single cover letter sent unedited to three journals in a cascade.
Why this matters.
The cover letter is the first thing the editor reads. Editorial guidance from journal staff and editing services consistently identifies the cover letter as one of the most under-invested presubmission jobs, especially in cascade resubmissions where the manuscript travels journal-to-journal with the original letter unedited. We could not find a quantitative survey of cover-letter rewrite behavior; the strongest claim we will defend is the qualitative one: the editor sees this before the abstract, and an editor reading "we are pleased to submit" reads the manuscript with a sharper eye.

08

Carry context across drafts

What good looks like.
One canonical record of the manuscript across rounds. Reviewer comments from R1 mapped to the issues they correspond to. What changed between drafts visible at a glance. Response-to-reviewer text grounded in the actual edits.
What fails.
Treating R1 as a new project. Rewriting the manuscript without tracking what changed against R0. Writing the response letter from scratch six months after the original review.
Why this matters.
Nature's and Wellcome's researcher-culture surveys flag the revision round as one of the highest-stress moments in research. The continuity work is the work nobody currently helps you do. It is also the job Manusights' project portal was designed for from the start.

Open your manuscript projects
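The what-changed-between-drafts record above can be kept with nothing more than a line diff. A minimal sketch using Python's standard difflib; the R0/R1 labels and the sample draft text are illustrative, and the output is raw material for the response-to-reviewers letter, not the letter itself:

```python
import difflib

def draft_delta(r0_text: str, r1_text: str) -> list[str]:
    """Unified diff of two manuscript drafts, line by line.

    Every changed line in the output should map back to a reviewer
    comment; unmapped changes are the ones that surprise Reviewer 2.
    """
    return list(difflib.unified_diff(
        r0_text.splitlines(),
        r1_text.splitlines(),
        fromfile="R0",   # draft as originally submitted
        tofile="R1",     # draft after the first review round
        lineterm="",
    ))

r0 = "We measured X.\nResults were significant."
r1 = "We measured X with three controls.\nResults were significant."
for line in draft_delta(r0, r1):
    print(line)
```

Keeping the diff alongside the reviewer comments is the cheap version of the canonical record: six months later, the response letter writes itself from the mapping instead of from memory.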

Opinions we hold

A method has opinions. These are ours. We will defend them in print.

Fit before polish.

Language polish on a manuscript heading to the wrong journal is wasted work. Pick the venue, then polish to its register.

The cover letter is a separate manuscript.

Rewrite it for every target journal. Editor World's editorial team calls this the single highest-return presubmission hour. We agree.

Verify references before you submit, not after.

A hallucinated DOI in a reviewer's hands is fatal. Verification is automatable. Manuscripts that arrive verified read as more careful, period.

Treat AI critique as a peer reviewer, not a copyeditor.

A copyeditor smooths sentences. A peer reviewer asks whether the claim is defensible. Anchor on the second job; the first is downstream.

Run the desk-rejection counterfactual.

If this manuscript is desk-rejected, what is the next journal, and what changes? Decide before submission. The cascade strategy without a plan is the cascade strategy with a delay.

Where the method breaks

The method assumes a manuscript that is approximately ready, in approximately the right shape, for approximately the right journal. It does not save a study with a fundamental design flaw, and it does not invent a contribution that is not there. A presubmission framework is downstream of the science.

The method also assumes the author has time. Presubmission compresses badly under deadline. The reason fit-before-polish is the first rule is exactly that authors under deadline reverse the order and polish a manuscript that is heading to the wrong place. If you have less than 48 hours to submission, run the journal-fit check and the reference-integrity check and skip the rest until the second round.

Finally, the method assumes AI critique is treated as a peer reviewer, not as a copyeditor or as a ghostwriter. The output is most useful as a question generator: every flagged issue is a question to answer in the final draft, not a sentence to copy in.

Who co-developed this

The Manusights engine was trained during the early-marketplace phase on real peer reviews written by a network of 35+ active reviewers, including current reviewers for Nature, Cell, and Science. The patterns above are theirs, distilled and put in order. We do not list individual names on this page because reviewer rosters change and a named claim must hold. The 35+ figure is verified; named reviewers participate by invitation in the closed program.

The framework is versioned. This is the May 2026 version. We will publish updates when the patterns change.

Apply it

The method is free to use and the page above is the canonical reference. If you want the work done in the Manusights flow, start with the presubmission scan; the iterative project portal will carry context across drafts.

Last reviewed: May 2026. Cite as: Manusights. The Presubmission Method. https://manusights.com/presubmission.