The Manuscript Submission Crisis: Why Getting Published Is Harder in 2026
Journal submissions surged dramatically in late 2025. Desk rejection rates are rising. Review times are stretching. Here is what is happening, why, and how to adapt your submission strategy.
Author context
Associate Professor, Clinical Medicine & Public Health
Specializes in clinical and epidemiological research publishing, with direct experience preparing manuscripts for NEJM, JAMA, BMJ, and The Lancet.
Readiness scan
Find out if this manuscript is ready to submit.
Run the Free Readiness Scan before you submit. Catch the issues editors reject on first read.
How to use this page well
These pages work best when they behave like tools, not essays. Use the quick structure first, then apply it to the exact journal and manuscript situation.
| Question | What to do |
|---|---|
| Use this page for | Getting the structure, tone, and decision logic right before you send anything out. |
| Most important move | Make the reviewer-facing or editor-facing ask obvious early rather than burying it in prose. |
| Common mistake | Turning a practical page into a long explanation instead of a working template or checklist. |
| Next step | Use the page as a tool, then adjust it to the exact manuscript and journal situation. |
Decision cue: Manuscript submissions rose sharply at many journals in the second half of 2025. The primary driver is that large language models make drafting manuscripts faster and easier, so more papers are being submitted, including many that would not have been written without AI assistance. This surge has real consequences for researchers submitting legitimate work: more competition for the same number of review slots, longer wait times, and editors who filter more aggressively at the desk.
The researchers who will succeed in this environment are the ones who submit better-prepared manuscripts, not more manuscripts. Check whether your paper is ready before adding to the queue.
What is happening
Submission volume is up sharply
Multiple journals reported significant increases in submissions during 2025. The trend accelerated in the second half of the year as AI writing tools became more sophisticated and more widely adopted. The exact numbers vary by journal, but the pattern is consistent across fields.
This is not just more researchers writing more papers. It is also AI making it easier to produce manuscripts that look publishable from the outside, even when the underlying science is thin, the methods are weak, or the citations are fabricated.
Desk rejection rates are rising in response
Editors cannot review more papers without more reviewers, and reviewer availability has not kept pace with submission volume. The response is predictable: more aggressive desk rejection. Papers that might have been sent for review two years ago are now being triaged out.
This does not mean the editorial standard has changed. It means the editorial filter is being applied more strictly because the volume demands it. A paper that is "probably good enough for review" no longer makes the cut when there are 30 other papers competing for the same reviewer's time.
Reviewer fatigue is real
Peer reviewers are volunteers. The same pool of qualified reviewers is being asked to evaluate a larger number of manuscripts. Many are declining more review invitations. This stretches review times and reduces the quality of feedback when reviewers do accept.
For authors, this means:
- longer wait times from submission to first decision
- reviews that are sometimes less thorough than expected
- more difficulty finding reviewers for specialized topics
AI-generated content is eroding trust
Editors and reviewers are increasingly skeptical of manuscripts that show signs of AI generation: unusually smooth prose, generic introductions, fabricated citations, and claims that sound confident but lack specificity. The 2025 finding of 100+ hallucinated citations in NeurIPS-accepted papers raised alarm across academic publishing.
This skepticism affects all authors, not just those who use AI irresponsibly. A manuscript with perfect English and smooth transitions may trigger closer scrutiny simply because it matches the pattern of AI-generated text.
What this means for your next submission
The bar for desk clearance is higher
With more papers competing for the same editorial bandwidth, the first read matters more than ever. Your abstract, first figure, and cover letter need to communicate significance immediately. A paper that requires a slow read to appreciate will be triaged out in favor of one that communicates its value in 5 minutes.
Citation integrity matters more
Editors are more suspicious of references because they know AI tools fabricate them. Having verifiable, accurate citations is no longer just good practice. It is a trust signal. A manuscript with 15+ verified references sends a different message than one with references that might or might not be real.
The Manusights AI Diagnostic ($29) verifies every citation against CrossRef, PubMed, OpenAlex, Semantic Scholar, bioRxiv, and medRxiv (500M+ papers). In the current environment, this is not just useful. It is a competitive advantage over manuscripts that have not been verified.
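For illustration only, here is a minimal sketch of what "verified citations" means in practice: extract DOI-like strings from a reference list and check each one against a registry resolver. In real use the resolver would be an HTTP lookup, for example against CrossRef's `https://api.crossref.org/works/<doi>` endpoint, treating a 200 response as registered and a 404 as unknown; here it is a stub. This is not the Manusights implementation — `verify_references`, the simplified DOI pattern, and the stub registry are all hypothetical.

```python
import re

# Simplified DOI pattern; real DOI suffixes allow a broader character set.
DOI_RE = re.compile(r"10\.\d{4,9}/[^\s\"<>]+")

def extract_dois(references):
    """Pull the first DOI-like string out of each free-text reference entry."""
    found = []
    for ref in references:
        match = DOI_RE.search(ref)
        # Strip trailing punctuation that often clings to DOIs in prose.
        found.append(match.group(0).rstrip(".,;") if match else None)
    return found

def verify_references(references, resolver):
    """Return (reference, doi, status) triples.

    `resolver(doi)` should return True if the DOI exists in a registry,
    e.g. an HTTP GET against https://api.crossref.org/works/<doi>.
    """
    report = []
    for ref, doi in zip(references, extract_dois(references)):
        if doi is None:
            report.append((ref, None, "no-doi"))
        elif resolver(doi):
            report.append((ref, doi, "registered"))
        else:
            report.append((ref, doi, "not-found"))
    return report

# Stubbed registry for demonstration; swap in a real CrossRef/PubMed lookup.
KNOWN = {"10.1000/demo.1"}
refs = [
    "Smith J. Real paper. J Example. 2024. doi:10.1000/demo.1",
    "Doe A. Hallucinated paper. 2025. doi:10.9999/fake.42",
]
print(verify_references(refs, lambda d: d in KNOWN))
```

Even this toy version catches the failure mode editors now screen for: a reference that reads plausibly but whose DOI resolves nowhere.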
Journal fit is not optional
When editors are triaging more aggressively, scope mismatch is the fastest path to desk rejection. Submitting to a journal that does not publish your type of work wastes the editor's time and yours. The free readiness scan includes a journal-fit verdict that checks scope alignment in 60 seconds.
Quality beats quantity
The old strategy of submitting to many journals in sequence (starting high, working down after rejections) is more costly in 2026 than it was in 2023. Each rejection cycle takes 3 to 6 months. With longer review times, the cost per rejection is increasing.
The new strategy: invest in preparation before the first submission. A paper that is thoroughly prepared and correctly targeted has a higher probability of acceptance on the first attempt, which saves months of rejection-resubmission cycling.
How to adapt
Before submission
- Run the free readiness scan to check journal fit, methodology, citation integrity, and overall readiness. This takes 60 seconds and catches the issues that drive desk rejection.
- If the scan surfaces concerns, use the $29 diagnostic for a full assessment with verified citations and figure-level feedback. In an environment where editors are more aggressive at the desk, the $29 investment in preparation is worth more than ever.
- For career-critical submissions to selective journals, expert review ($1,000 to $1,800) from a reviewer who knows what those editors are looking for can make the difference between desk rejection and peer review.
During submission
- disclose AI use per your target journal's policy (see Journal AI Policies 2026)
- ensure every citation is verifiable
- submit to the right journal the first time (retargeting after rejection costs more months)
After rejection
- treat each rejection as diagnostic information, not just bad luck
- fix the actual problems before resubmitting (see Manuscript Review After Rejection)
- do not assume a lower-tier journal will accept the same paper unchanged
The researcher who succeeds in this environment
In a market flooded with AI-assisted manuscripts of variable quality, the researchers who succeed are the ones whose papers are:
- obviously well-prepared (not AI-generated boilerplate)
- correctly targeted to the right journal
- methodologically sound with verifiable claims
- clearly significant in the first 5 minutes of reading
This has always been true. What has changed is that it matters more now because the competition for editorial attention is fiercer.
Pre-submission review does not guarantee acceptance. But in an environment where editors are triaging more aggressively and trust in submitted manuscripts is lower, a paper that has been verified, calibrated, and prepared stands out from one that has not.
Check your paper now. 60 seconds. Free.
Reference library
Use the core publishing datasets alongside this guide
This article answers one part of the publishing decision. The reference library covers the recurring questions that usually come next: how selective journals are, how long review takes, and what the submission requirements look like across journals.
Dataset / reference guide
Peer Review Timelines by Journal
Reference-grade journal timeline data that authors, labs, and writing centers can cite when discussing realistic review timing.
Dataset / benchmark
Biomedical Journal Acceptance Rates
A field-organized acceptance-rate guide that works as a neutral benchmark when authors are deciding how selective to target.
Reference table
Journal Submission Specs
A high-utility submission table covering word limits, figure caps, reference limits, and formatting expectations.
Final step
Find out if this manuscript is ready to submit.
Run the Free Readiness Scan. See score, top issues, and journal-fit signals before you submit.
Anthropic Privacy Partner. Zero-retention manuscript processing.
Need deeper scientific feedback? See Expert Review Options