Manuscript Preparation · 8 min read · Updated Apr 21, 2026

Journal AI Policies in 2026: What Authors Need to Know Before Submission

83% of high-impact journals now have AI policies. Here is what you must disclose, what is prohibited, and how to stay compliant across different journals.

Senior Researcher, Oncology & Cell Biology

Author context

Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.

Readiness scan

Find out if this manuscript is ready to submit.

Run the Free Readiness Scan before you submit. Catch the issues editors reject on first read.

Check my manuscript · See sample report · Or find your best-fit journal
Anthropic Privacy Partner. Zero-retention manuscript processing.
Working map

How to use this page well

These pages work best when they behave like tools, not essays. Use the quick structure first, then apply it to the exact journal and manuscript situation.

| Question | What to do |
| --- | --- |
| Use this page for | Getting the structure, tone, and decision logic right before you send anything out. |
| Most important move | Make the reviewer-facing or editor-facing ask obvious early rather than burying it in prose. |
| Common mistake | Turning a practical page into a long explanation instead of a working template or checklist. |
| Next step | Use the page as a tool, then adjust it to the exact manuscript and journal situation. |

Quick answer: journal AI policies in 2026 are not standardized. If you used ChatGPT, Claude, Gemini, Copilot, or any other generative AI tool during manuscript preparation, assume you may need to disclose it. Policies differ by publisher: some direct authors to the Methods or Acknowledgments section, some require workflow disclosure at submission, and some add figure-specific rules. Getting this wrong can delay processing or create an integrity problem later.

If AI touched your manuscript, assume disclosure is required unless the journal explicitly says otherwise. The safe author posture in 2026 is to document the tool, the exact job it performed, and the human verification step that followed.

This page is most useful as a final compliance check before submission, not as a substitute for reading the current policy on your actual target journal.

Check your manuscript readiness, including AI compliance signals, with the free scan.

Publisher-by-publisher AI policy comparison

Every publisher handles AI differently, and the differences matter when you're preparing a submission. Writing assistance is broadly tolerated, figure generation is broadly restricted, and disclosure is universally required. The variation is in the details, and those details determine whether your manuscript gets processed smoothly or bounced back.

| Publisher | AI for Writing | AI for Figures | Where to Disclose | AI as Author? |
| --- | --- | --- | --- | --- |
| Nature Portfolio | Allowed with disclosure | Restricted; must disclose; can't be sole method | Methods or Acknowledgments | No |
| Cell Press | Allowed with disclosure | Prohibited unless fully disclosed and justified | Acknowledgments | No |
| AAAS (Science) | Allowed with disclosure | Prohibited for primary data figures | Methods; must name exact tools | No |
| Elsevier | Allowed with disclosure | Restricted; requires disclosure | At submission AND in manuscript body | No |
| Wiley | Allowed with disclosure | Restricted; disclosure required | Acknowledgments or Methods | No |
| AMA (JAMA) | Allowed with disclosure | Restricted | At submission | No |
| NEJM | Allowed with disclosure | Restricted | At submission | No |
| ACS | Allowed with disclosure | Restricted; case-by-case | Author information section | No |
| IEEE | Allowed for editing only | Prohibited for original figures | Separate AI disclosure statement | No |
| Oxford University Press | Allowed with disclosure | Restricted; journal-specific rules | Varies by journal | No |
| PLOS | Allowed with disclosure | Developing policy; disclose to be safe | Methods section recommended | No |

The biggest difference that catches people: Elsevier requires disclosure in two places (the submission form and the manuscript text), while most others need it in one. If you're resubmitting across publishers, don't just swap the journal name; rewrite the disclosure to match the new journal's format.

Where policies actually diverge in practice

The high-level rule sounds similar everywhere: disclose AI use and keep humans accountable. The practical differences are more annoying than that:

  • Nature Portfolio allows AI help in manuscript preparation, but expects disclosure and explicitly blocks AI from authorship.
  • Elsevier allows AI support in manuscript preparation, but expects disclosure in the manuscript and ties usage back to each journal's Guide for Authors.
  • JAMA Network tells authors to report AI use that created or edited manuscript content in the Acknowledgment section or Methods section when relevant.
  • ACS requires disclosure of AI-generated text or images and is unusually explicit about where that disclosure belongs.

That is why generic "we used AI for editing" language is often too weak. A compliant statement needs the tool, the task, and the human review step.

In our pre-submission review work

In our pre-submission review work, the authors who get into trouble are usually not the ones doing wild things with AI. They are the ones assuming all publisher policies are basically the same and dropping in vague disclosure language at the last minute.

The practical mistake is simple: a paper gets retargeted across publishers, but the disclosure does not get retargeted with it. That is how a statement that looked acceptable at one journal becomes incomplete or misplaced at the next one.

What you must do before submitting

Every journal's policy is different in the details, but these five steps cover the compliance requirements across all major publishers:

  • Check your target journal's specific AI policy. Policies are not standardized. Some journals require disclosure in methods, others in acknowledgments, others in a separate submission form field. Check the author guidelines for your specific target journal, not just the publisher umbrella page.
  • Document which AI tools you used and how. Be specific: "We used ChatGPT (OpenAI, GPT-4) to assist with language editing of the discussion section. All output was reviewed and revised by the authors." That's acceptable at most journals. "We used AI to help write the paper" is too vague.
  • Verify every citation and factual claim AI touched. AI tools hallucinate citations, fabricate statistical claims, and generate confident-sounding statements that are factually wrong. A 2025 analysis found over 100 hallucinated citations in papers accepted at a top machine learning conference. Every reference must be verified against the actual source. A citation verification scan catches fabricated references before reviewers do.
  • Don't use AI-generated images without disclosure. Some journals explicitly prohibit AI-generated figures unless disclosed and justified. If you used DALL-E, Midjourney, or similar tools to create any visual content, check whether your target journal permits it.
  • Don't claim AI-generated text as entirely your own. If an AI tool wrote substantial portions of the manuscript, this must be disclosed. Not disclosing is a form of plagiarism under most journal policies. The risk is not just rejection but retraction and reputational damage if discovered post-publication.
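Part of the citation-verification step above can be automated. A minimal sketch, assuming references are plain-text strings with embedded DOIs; it only separates auto-checkable entries from ones that need manual review, and confirming a DOI actually resolves still requires a lookup against a service such as Crossref:

```python
import re

# Pattern covering modern Crossref-style DOIs (10.<registrant>/<suffix>)
DOI_RE = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def extract_dois(references):
    """Pull DOI strings out of free-text reference entries.

    Entries without a parseable DOI are flagged for manual checking.
    A missing DOI is not proof of fabrication, but it means the
    reference cannot be auto-verified and must be checked by hand.
    """
    found, needs_manual = {}, []
    for ref in references:
        m = DOI_RE.search(ref)
        if m:
            found[ref] = m.group(0).rstrip(".")
        else:
            needs_manual.append(ref)
    return found, needs_manual

# Hypothetical reference entries for illustration
refs = [
    "Smith J et al. Cancer Cell 2024. doi:10.1016/j.ccell.2024.01.001",
    "Lee K. A paper with no DOI listed. J Imaginary Res 2023.",
]
dois, manual = extract_dois(refs)
```

This only narrows the manual workload; it cannot tell you whether a well-formed DOI points at a real paper, or whether that paper says what your text claims.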

What actually gets you in trouble

Abstract policy language doesn't convey the real consequences. These are composites of cases that integrity committees have dealt with.

The hallucinated reference list. A biomedical research team used ChatGPT to draft their introduction and let it suggest supporting citations. Twelve of forty-three references didn't exist: plausible-looking combinations of real author names, real journals, and fabricated titles. Two reviewers independently flagged the issue. The manuscript was rejected, the corresponding author received a formal warning, and the flag went on the author's submission record.

The AI-generated figure with visual artifacts. A group submitted a review article with AI-generated schematic diagrams containing nonsensical text labels, impossible anatomical geometry, and repeating noise patterns. The production team caught it during typesetting. Complete figure replacement plus a six-week publication delay.

The discussion section that said nothing specific. A reviewer returned a manuscript with a single comment: "This reads like it was generated by a language model. Every paragraph makes broad claims without connecting to the specific results of this study." The paper eventually published, but only after two additional revision rounds.

| Problem | How it's caught | Typical consequence |
| --- | --- | --- |
| Hallucinated references | Reviewer spot-checks or automated verification | Rejection + integrity flag |
| AI figure artifacts | Production team or image screening tools | Publication delay, figure replacement |
| Generic AI-written discussion | Peer reviewer judgment | Major revision or rejection |
| Missing disclosure | Editor cross-check or whistleblower report | Post-publication correction or retraction |

A safe disclosure template

For most writing-assistance use cases, the safe template is short:

We used [tool name, version if known] for [specific job, such as language editing or outline revision]. All text, citations, and scientific claims were reviewed and approved by the authors, who take full responsibility for the final manuscript.

Then adapt the placement and detail level to the target journal. The sentence is not the hard part. Matching the journal's workflow is.
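Once you know the tool, the task, and the target publisher, filling the template is mechanical. A sketch of that step; the placement map mirrors the comparison table earlier on this page and is illustrative only, so confirm it against the journal's current author guidelines:

```python
# Illustrative placement map drawn from the publisher comparison table;
# policies change, so always confirm against current author guidelines.
PLACEMENT = {
    "Nature Portfolio": "Methods or Acknowledgments",
    "Cell Press": "Acknowledgments",
    "Elsevier": "submission form AND manuscript body",
    "AMA (JAMA)": "at submission",
}

def disclosure_statement(tool, task, publisher):
    """Fill the safe template and report where the journal wants it."""
    text = (
        f"We used {tool} for {task}. All text, citations, and "
        "scientific claims were reviewed and approved by the authors, "
        "who take full responsibility for the final manuscript."
    )
    where = PLACEMENT.get(publisher, "check the journal's author guidelines")
    return text, where

stmt, where = disclosure_statement(
    "ChatGPT (OpenAI, GPT-4)",
    "language editing of the Discussion section",
    "Elsevier",
)
```

The payoff is at resubmission time: retargeting the paper means regenerating the statement for the new publisher instead of reusing the old one unchanged.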

The practical AI workflow for 2026

The principle: AI accelerates each step, but a human must own the output at every stage.

| Workflow step | What AI can do | What it can't do | Disclosure needed? |
| --- | --- | --- | --- |
| Literature search | Surface papers, identify themes | Judge relevance to your specific study | No |
| Outline | Suggest structures, check argument logic | Know what your reviewers care about | No |
| Drafting | Polish language, improve clarity | Write scientifically precise claims | Yes; name the tool and sections it touched |
| Citation check | Flag potential issues | Guarantee a reference says what you claim | Yes, if AI suggested any references |
| Figure prep | Draft layouts, suggest visualizations | Produce publication-ready scientific figures | Yes, if any generative tool was used |
| Compliance check | Help draft disclosure language | Determine what your specific journal requires | N/A; this is the disclosure step itself |

The safe rule for drafting: write methods and results yourself (these require domain precision AI can't match), then use AI for language polishing on the introduction and discussion. Rewrite anything that sounds generic.

Disclose or not? A practical decision framework

Always disclose AI use if:

  • The tool drafted, rewrote, or substantially edited any manuscript text (Nature, Elsevier, AAAS, and Cell Press all require this)
  • AI generated or suggested any references, and you've verified every one actually exists
  • You used AI for figure creation, data visualization, or image generation (some journals prohibit this entirely)
  • Your target journal has an AI checkbox or text field in the submission portal (skipping it is a procedural flag)
  • You're unsure whether your use counts: the cost of disclosing is zero; the cost of not disclosing is retraction

You don't need to disclose if:

  • You used standard spell-check or grammar tools that aren't AI-powered (e.g., built-in Word spell-check)
  • AI was used only for coding or data pipeline work that's already described in your methods section
  • The journal's policy explicitly excludes the type of use you made (read the actual policy, not the publisher umbrella page)

When policies conflict across journals: If you're resubmitting a rejected paper to a different publisher, rewrite the disclosure to match the new journal's format. Nature wants it in Methods or Acknowledgments. Elsevier wants it at submission and in the manuscript. JAMA wants it at submission. Do not reuse the old disclosure unchanged. Adapt it.
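The decision framework above collapses to a deliberately conservative rule: any "always disclose" trigger wins, and uncertainty itself is a trigger. A sketch (the flag names are my own, not any publisher's taxonomy):

```python
def disclosure_required(
    drafted_or_edited_text=False,   # AI drafted, rewrote, or substantially edited text
    suggested_references=False,     # AI generated or suggested any references
    generated_figures=False,        # AI created figures, visuals, or images
    portal_has_ai_field=False,      # submission portal has an AI checkbox or field
    unsure=False,                   # when in doubt, disclose
):
    """Conservative rule distilled from the framework above:
    any single trigger means disclose."""
    return any([
        drafted_or_edited_text,
        suggested_references,
        generated_figures,
        portal_has_ai_field,
        unsure,
    ])
```

Note that a False result here only means none of the "always disclose" triggers fired; non-AI spell-check and methods-documented pipeline work fall outside the flags entirely, and the journal's actual policy still has the final word.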

Submit If / Think Twice If

Submit if:

  • you need a fast cross-publisher map of what changes between major journal AI policies
  • you are about to submit or resubmit and need to place the disclosure correctly
  • you want a safer default wording before entering the submission portal

Think twice if:

  • you are treating this page as a substitute for the author instructions on your actual target journal
  • you plan to reuse the same AI disclosure unchanged across publishers
  • the manuscript still contains unverified AI-suggested citations, figures, or claims


Pre-submission compliance checklist

| Question | Safe author action |
| --- | --- |
| Which tool was used? | Name the exact product and version if known |
| What job did it do? | State whether it helped with drafting, editing, summarizing, coding, figures, or literature assistance |
| Where did humans review it? | Confirm that authors checked all claims, citations, and manuscript text manually |
| Where does the journal want the disclosure? | Match the policy exactly: methods, acknowledgments, cover letter, or submission form |
| Did AI affect images or data presentation? | Check whether the journal restricts or forbids AI-generated visuals |
| Is the disclosure written before you enter the submission portal? | Write it in advance; don't scramble during upload |
| Has every AI-suggested citation been verified? | Check each reference against the actual source, not just that it exists |

Last verified: April 2026 against published AI policies from Nature Portfolio, Elsevier, Springer Nature, AAAS (Science), AMA (JAMA), NEJM, Cell Press, Wiley, ACS, IEEE, Oxford University Press, and PLOS. Journal AI policies change frequently; always confirm the current policy on your target journal's author guidelines page before submission.

Frequently asked questions

Can I use AI tools when preparing a manuscript?

Yes, most major publishers allow limited AI use in manuscript preparation, but they expect disclosure and keep authors fully responsible for the content. The details vary by publisher and journal.

Where should AI use be disclosed?

That varies. Some publishers direct authors to the Methods or Acknowledgments section, while others also require disclosure in submission forms or other journal-specific workflow fields.

Can an AI tool be listed as an author?

No. Major publishers treat AI tools as nonauthors because they cannot take accountability for the work.

What should an AI disclosure statement include?

Disclose the tool, the task it performed, and the human verification step that followed, then match the wording and placement to the target journal's current author instructions.

Sources

  1. Nature Portfolio AI policy
  2. Using AI responsibly in scientific publishing
  3. Elsevier generative AI policies for journals
  4. JAMA AI disclosure requirements
  5. AAAS/Science editorial policies
  6. ACS Publications AI policy
  7. Wiley ethics guidelines on AI use

