Publishing Strategy · 7 min read · Updated Mar 25, 2026

Nature's AI Policy: Disclosure Rules, Image Bans, and What Authors Must Know

Nature requires AI disclosure in the Methods section and prohibits both AI authorship and AI-generated images across all Springer Nature journals, with an exemption for copy editing.

Senior Researcher, Oncology & Cell Biology

Author context

Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.


Nature was one of the first major journals to formalize its stance on AI tools, and the policy it published has become the template that dozens of other publishers reference. If you're submitting to any Nature Portfolio journal, you need to understand not just what's allowed but where the lines are drawn, because they're stricter than many researchers expect.

The core policy

Nature's AI policy rests on three rules:

  1. AI tools can't be authors. Large language models and other AI tools don't meet authorship criteria because they can't take accountability for published work. Only humans who made substantive intellectual contributions qualify.
  2. All AI use must be disclosed. If you used ChatGPT, Claude, Copilot, or any other LLM during manuscript preparation, you must describe this in the Methods section. The disclosure should specify which tool was used and how it was used.
  3. AI-generated images are banned. You can't include images produced by generative AI tools (DALL-E, Midjourney, Stable Diffusion, etc.) in your manuscript. This applies to figures, graphical abstracts, and any other visual content.

There's one notable exemption: copy editing assistance from AI doesn't require disclosure. If you used Grammarly or a similar tool purely for grammar and spelling, Nature doesn't need to know.

What the disclosure looks like in practice

Nature requires AI disclosure in the Methods section, not buried in acknowledgments or a footnote. The disclosure should be specific enough that a reader understands the scope of AI involvement. Generic statements like "AI tools were used during writing" aren't sufficient.

A properly formatted disclosure would look something like:

"During preparation of this manuscript, the authors used ChatGPT (OpenAI, GPT-4) to improve the clarity of the Discussion section. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the final published text."

This placement matters. The Methods section is the part of the paper where other researchers look to understand how work was done. Putting AI disclosure there signals that Nature treats AI use as a methodological choice, not just an administrative detail.

The image ban explained

Nature's prohibition on AI-generated images isn't about aesthetics. It's about data integrity. In scientific publishing, images are evidence. A microscopy image, a gel photo, a clinical scan: these are data points that readers use to evaluate claims. AI-generated images can't serve this function because they don't represent real observations.

The ban extends beyond obvious cases. You can't use AI to:

  • Generate schematic figures from text descriptions
  • Create "enhanced" versions of real images using generative models
  • Produce synthetic data visualizations that weren't derived from actual data

AI-assisted image processing tools that enhance existing real images (like denoising or contrast adjustment) occupy a gray area. Nature's guidance suggests these should be disclosed but aren't automatically prohibited, as long as the underlying image data is real and the processing doesn't alter the scientific content.

Scope: it's broader than you think

Nature's AI policy isn't limited to the flagship journal. It applies across the entire Nature Portfolio, which includes:

  • All Nature-branded journals (Nature Medicine, Nature Genetics, Nature Methods, Nature Biotechnology, Nature Communications, etc.)
  • All Springer Nature publications (Scientific Reports, BMC-series journals, etc.)
  • Conference proceedings published through Springer Nature

This means more than 3,000 journals follow the same core AI rules. If you've submitted to any Springer Nature journal, you've already been subject to this policy whether you knew it or not.

Timeline and evolution

Nature's AI policy has evolved rapidly:

  • January 2023: Nature publishes its initial editorial on AI and authorship, stating that LLMs can't be authors
  • Early 2023: Formal policy added to author guidelines across the Nature Portfolio
  • Mid 2023: Image generation ban added explicitly
  • 2024: Policy refined with clearer disclosure requirements and the copy editing exemption
  • 2025-2026: Policy stable; enforcement integrated into the submission workflow

The speed of this rollout was unusual for academic publishing, where policy changes typically take years. Nature moved within months of ChatGPT's public release to establish rules, and the core principles haven't changed since.

How Nature enforces the policy

Nature doesn't use AI detection software to screen manuscripts. The editors have publicly acknowledged that current detection tools aren't reliable enough for editorial decisions. Instead, enforcement relies on:

  • Author attestation. During submission, authors confirm they've followed the AI disclosure policy. This is a trust-based system, similar to how journals handle conflict of interest disclosures.
  • Peer review. Reviewers may flag text that appears AI-generated, though this is informal rather than systematic.
  • Post-publication scrutiny. If undisclosed AI use is identified after publication, Nature treats it as a breach of publication ethics, similar to undisclosed conflicts of interest.

The penalty for violating the policy isn't automatic retraction. Nature's editors have indicated they'd handle cases individually, considering the extent of AI use and whether it affected the scientific content. But a violation would damage your relationship with the journal and could trigger a formal investigation under COPE guidelines.

What this means for different types of AI use

  • Grammar and spelling checks (Grammarly, etc.): yes; no disclosure needed
  • Language polishing for non-native speakers: yes; disclose in Methods
  • Rewriting sections for clarity: yes; disclose in Methods
  • Generating first drafts of text: yes, but risky; disclose in Methods
  • Literature search and summarization: yes; disclose in Methods
  • Code generation for data analysis: yes; disclose in Methods
  • AI-generated figures or images: no
  • AI as a listed author: no
  • Statistical analysis with AI assistance: yes; disclose in Methods

The "yes, but risky" for first draft generation deserves explanation. Nature technically allows it as long as authors take full responsibility for the final text. But in practice, submitting AI-generated drafts with minimal human revision is likely to produce text that reviewers notice, and that undermines confidence in the intellectual contribution of the authors.

How Nature's policy compares to other top journals

Nature's approach is moderate compared to some peers. Science (AAAS) initially banned all AI-generated text in January 2023 before relaxing to a disclosure-based model in November 2023. The Lancet takes a more restrictive position, limiting AI to "readability and language" improvements only. NEJM requires disclosure in both the cover letter and manuscript body.

Nature's copy editing exemption is more generous than most. Many journals require disclosure of any AI use, including basic grammar tools. Nature's position is that mechanical language correction doesn't constitute a meaningful intellectual contribution and therefore doesn't need to be tracked.

The image ban is universal across elite journals. No top-tier scientific journal currently allows AI-generated images, and this is unlikely to change as long as the scientific integrity concerns remain unresolved.

Practical advice for Nature submissions

Before submission:

  • Audit your manuscript for any AI-generated content. If you used ChatGPT at any stage, even for brainstorming, decide whether disclosure is warranted.
  • Check all figures and images. If any were created or modified using generative AI tools, replace them.
  • Draft your Methods section disclosure statement before finalizing the manuscript.

During submission:

  • Include the AI disclosure in the Methods section, not in supplementary materials.
  • Be specific about which tools you used and which parts of the manuscript they touched.
  • If you're unsure whether your use requires disclosure, disclose it anyway. Over-disclosure is always safer than under-disclosure.

Common mistakes to avoid:

  • Don't list ChatGPT or another AI tool in the acknowledgments section as if it were a person. Nature's policy specifically addresses this.
  • Don't assume that because you heavily edited AI-generated text, it no longer counts as AI-assisted. The use of the tool itself is what requires disclosure.
  • Don't use AI to generate supplementary figures thinking the ban only applies to main figures. It applies to all visual content.


Bottom line

Nature's AI policy is clear and relatively stable: use AI tools if they help, disclose them in Methods, don't make them authors, and don't let them generate your images. The policy applies to every journal in the Springer Nature portfolio, covering thousands of titles. The biggest risk for authors isn't the policy itself but failing to disclose, because undisclosed AI use that's later discovered creates an integrity problem that's much harder to fix than a simple Methods section statement.

Sources

  1. Nature editorial: Tools such as ChatGPT threaten transparent science
  2. Springer Nature AI policy page
  3. Nature author guidelines
  4. COPE guidelines on AI in publishing

