Nature's AI Policy: Disclosure Rules, Image Bans, and What Authors Must Know
Nature requires AI disclosure in the Methods section and prohibits AI authorship and AI-generated images across all Springer Nature journals, with an exemption for AI-assisted copy editing.
Nature at a glance
Key metrics to place the journal before deciding whether it fits your manuscript and career goals.
What makes this journal worth targeting
- An impact factor of 48.5 puts Nature in the most visible tier; citations from papers published here carry real weight.
- Scope specificity matters more than impact factor for most manuscript decisions.
- An acceptance rate below ~8% means fit determines most outcomes.
When to look elsewhere
- When your paper sits at the edge of the journal's stated scope — borderline fit rarely improves after submission.
- If timeline matters: Nature's first editorial decision takes roughly 7 days. A faster-turnaround journal may suit a grant or job deadline better.
- If OA is required: gold OA is costly; verify the current Nature pricing page and check institutional agreements before submitting.
Quick answer: Nature's AI policy is straightforward but strict: AI tools cannot be authors, generative AI use should be documented in the Methods section or an equivalent section, AI-assisted copy editing does not need declaration, and Springer Nature journals generally do not permit generative AI images or videos for publication.
Method note: this page was reviewed against Nature Portfolio's AI editorial policy, Nature Methods' 2026 editorial discussion of generative AI in publishing, COPE AI guidance, related Nature-family pages, and Manusights pre-submission review patterns for Nature Portfolio manuscripts. This page covers the AI-policy question only; Nature formatting, submission-guide, and cover-letter questions are covered on separate pages.
Nature AI Policy at a Glance
- AI authorship: Prohibited. AI tools cannot be listed as authors and cannot take accountability for the work.
- AI disclosure: Required. Disclose use of AI tools (e.g., ChatGPT, Claude, Gemini) in the Methods section.
- AI-generated images: Prohibited. AI-created figures, illustrations, or visualizations are not permitted in the manuscript.
- Copy editing: Copy editing for grammar and language is exempt from disclosure.
The core policy
Nature's AI policy rests on three rules:
- AI tools can't be authors. Large language models and other AI tools don't meet authorship criteria because they can't take accountability for published work. Only humans who made substantive intellectual contributions qualify.
- Generative AI use must be disclosed. If you used ChatGPT, Claude, Copilot, or any other LLM during manuscript preparation beyond copy editing, you must describe this in the Methods section. The disclosure should specify which tool was used and how it was used.
- AI-generated images are banned. You can't include images produced by generative AI tools (DALL-E, Midjourney, Stable Diffusion, etc.) in your manuscript. This applies to figures, graphical abstracts, and any other visual content.
There's one notable exemption: copy editing assistance from AI doesn't require disclosure. If you used Grammarly or a similar tool purely for grammar and spelling, Nature doesn't need to know.
What the disclosure looks like in practice
Nature requires AI disclosure in the Methods section, not buried in acknowledgments or a footnote. The disclosure should be specific enough that a reader understands the scope of AI involvement. Generic statements like "AI tools were used during writing" aren't sufficient.
A properly formatted disclosure would look something like:
"During preparation of this manuscript, the authors used ChatGPT (OpenAI, GPT-4) to improve the clarity of the Discussion section. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the final published text."
This placement matters. The Methods section is the part of the paper where other researchers look to understand how work was done. Putting AI disclosure there signals that Nature treats AI use as a methodological choice, not just an administrative detail.
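If you use AI across several sections or revisions, it can help to generate the statement from a single template so every disclosure stays specific and consistent. The sketch below is our own illustration built from the example wording above; the function name and parameters are hypothetical, not part of any Nature requirement.

```python
# Minimal sketch: assembles a Methods-ready disclosure from the example
# wording quoted above. Function name and parameters are illustrative only.

def disclosure_statement(tool: str, vendor_model: str, purpose: str) -> str:
    """Build one specific AI-use disclosure sentence for the Methods section."""
    return (
        f"During preparation of this manuscript, the authors used {tool} "
        f"({vendor_model}) to {purpose}. After using this tool, the authors "
        "reviewed and edited the content as needed and take full "
        "responsibility for the final published text."
    )

print(disclosure_statement(
    "ChatGPT", "OpenAI, GPT-4",
    "improve the clarity of the Discussion section"))
```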
The image ban explained
Nature's prohibition on AI-generated images isn't about aesthetics. It's about data integrity. In scientific publishing, images are evidence. A microscopy image, a gel photo, a clinical scan: these are data points that readers use to evaluate claims. AI-generated images can't serve this function because they don't represent real observations.
The ban extends beyond obvious cases. You can't use AI to:
- Generate schematic figures from text descriptions
- Create "enhanced" versions of real images using generative models
- Produce synthetic data visualizations that weren't derived from actual data
AI-assisted image processing tools that enhance existing real images (like denoising or contrast adjustment) occupy a gray area. Nature's guidance suggests these should be disclosed but aren't automatically prohibited, as long as the underlying image data is real and the processing doesn't alter the scientific content.
Scope: it's broader than you think
Nature's AI policy isn't limited to the flagship journal. It applies across the entire Nature Portfolio, which includes:
- All Nature-branded journals (Nature Medicine, Nature Genetics, Nature Methods, Nature Biotechnology, Nature Communications, etc.)
- All Springer Nature publications (Scientific Reports, BMC-series journals, etc.)
- Conference proceedings published through Springer Nature
This means more than 3,000 journals follow the same core AI rules. If you've submitted to any Springer Nature journal, you've already been subject to this policy whether you knew it or not.
Timeline and evolution
Nature's AI policy has evolved rapidly:
| Date | Development |
|---|---|
| January 2023 | Nature publishes initial editorial on AI and authorship, stating LLMs can't be authors |
| Early 2023 | Formal policy added to author guidelines across Nature Portfolio |
| Mid 2023 | Image generation ban added explicitly |
| 2024 | Policy refined with clearer disclosure requirements and copy editing exemption |
| 2025-2026 | Policy stable; enforcement integrated into submission workflow |
The speed of this rollout was unusual for academic publishing, where policy changes typically take years. Nature moved within months of ChatGPT's public release to establish rules, and the core principles haven't changed since.
How Nature enforces the policy
Nature doesn't use AI detection software to screen manuscripts. The editors have publicly acknowledged that current detection tools aren't reliable enough for editorial decisions. Instead, enforcement relies on:
- Author attestation. During submission, authors confirm they've followed the AI disclosure policy. This is a trust-based system, similar to how journals handle conflict of interest disclosures.
- Peer review. Reviewers may flag text that appears AI-generated, though this is informal rather than systematic.
- Post-publication scrutiny. If undisclosed AI use is identified after publication, Nature treats it as a breach of publication ethics, similar to undisclosed conflicts of interest.
The penalty for violating the policy isn't automatic retraction. Nature's editors have indicated they'd handle cases individually, considering the extent of AI use and whether it affected the scientific content. But a violation would damage your relationship with the journal and could trigger a formal investigation under COPE guidelines.
What this means for different types of AI use
| AI use case | Allowed? | Disclosure needed? |
|---|---|---|
| Grammar and spelling checks (Grammarly, etc.) | Yes | No |
| Language polishing for non-native speakers | Yes | Yes (Methods) |
| Rewriting sections for clarity | Yes | Yes (Methods) |
| Generating first drafts of text | Yes, but risky | Yes (Methods) |
| Literature search and summarization | Yes | Yes (Methods) |
| Code generation for data analysis | Yes | Yes (Methods) |
| AI-generated figures or images | No | N/A |
| AI as a listed author | No | N/A |
| Statistical analysis with AI assistance | Yes | Yes (Methods) |
The "yes, but risky" for first draft generation deserves explanation. Nature technically allows it as long as authors take full responsibility for the final text. But in practice, submitting AI-generated drafts with minimal human revision is likely to produce text that reviewers notice, and that undermines confidence in the intellectual contribution of the authors.
What we see in pre-submission review
In our pre-submission review work with Nature-family manuscripts, we see three specific failure patterns around AI policy compliance.
Disclosure placed in the wrong location. Authors often put AI disclosure statements in the cover letter or the acknowledgments when the policy asks for a Methods disclosure, or an equivalent manuscript section if no Methods section exists. That placement makes the disclosure harder for reviewers and readers to interpret as part of the research record.
Copy editing confused with generative drafting. Nature's policy exempts AI-assisted copy editing of human-generated text, but not autonomous content creation or generative editorial work. We see authors under-disclose when they use an LLM to restructure arguments, summarize literature, draft responses, or generate code comments.
Image workflow left undocumented. The visible risk is a fully AI-generated figure, but the quieter risk is using non-generative machine-learning tools to modify real images without caption disclosure. Nature's policy distinguishes generative images from non-generative processing, but both need a careful audit before submission.
Manusights internal analysis shows that the preventable mistake is usually not malicious AI use. It is poor process documentation. Before uploading a Nature Portfolio manuscript, authors should keep a simple record of which tools touched text, code, data analysis, images, and figures.
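One lightweight way to keep that record is a running log updated whenever a tool touches the manuscript. The sketch below is a minimal illustration under our own assumptions: the file name, column layout, and example entries are hypothetical, not a Nature requirement.

```python
# Minimal sketch of an AI-use log kept alongside the manuscript.
# File name and row format are our own illustration.
import csv
from datetime import date

LOG_FILE = "ai_use_log.csv"  # hypothetical file name

def log_ai_use(tool: str, version: str, artifact: str, purpose: str) -> None:
    """Append one row: which tool touched which manuscript artifact, and why."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), tool, version, artifact, purpose])

# Example entries matching the disclosure rules discussed above:
log_ai_use("ChatGPT", "GPT-4", "Discussion section",
           "clarity editing (disclose in Methods)")
log_ai_use("Grammarly", "n/a", "full text",
           "grammar only (exempt from disclosure)")
log_ai_use("Copilot", "n/a", "analysis code",
           "code generation (disclose in Methods)")
```

A plain spreadsheet or lab-notebook entry works just as well; the point is that the record exists before submission, not the tooling.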
How Nature's policy compares to other top journals
Nature's approach is moderate compared to some peers. Science (AAAS) initially banned all AI-generated text in January 2023 before relaxing to a disclosure-based model in November 2023. The Lancet takes a more restrictive position, limiting AI to "readability and language" improvements only. NEJM requires disclosure in both the cover letter and manuscript body.
Nature's copy editing exemption is more generous than most. Many journals require disclosure of any AI use, including basic grammar tools. Nature's position is that mechanical language correction doesn't constitute a meaningful intellectual contribution and therefore doesn't need to be tracked.
The image ban is universal across elite journals. No top-tier scientific journal currently allows AI-generated images, and this is unlikely to change as long as the scientific integrity concerns remain unresolved.
Practical advice for Nature submissions
Before submission:
- Audit your manuscript for any AI-generated content. If you used ChatGPT at any stage, even for brainstorming, decide whether disclosure is warranted.
- Check all figures and images. If any were created or modified using generative AI tools, replace them.
- Draft your Methods section disclosure statement before finalizing the manuscript.
During submission:
- Include the AI disclosure in the Methods section, not in supplementary materials.
- Be specific about which tools you used and which parts of the manuscript they touched.
- If you're unsure whether your use requires disclosure, disclose it anyway. Over-disclosure is always safer than under-disclosure.
Common mistakes to avoid:
- Don't list ChatGPT or another AI tool in the acknowledgments section as if it were a person. Nature's policy specifically addresses this.
- Don't assume that because you heavily edited AI-generated text, it no longer counts as AI-assisted. The use of the tool itself is what requires disclosure.
- Don't use AI to generate supplementary figures thinking the ban only applies to main figures. It applies to all visual content.
Need help preparing your manuscript for Nature's requirements? A Nature submission readiness check can verify whether your paper meets the journal's standards before you submit.
Submit If / Think Twice If
Submit if:
- every AI tool used in drafting, analysis, coding, or figure preparation has been logged
- generative text use is disclosed in Methods or the closest equivalent section
- all authors have reviewed the final text and accept accountability for it
- every image or figure is based on real data, with any non-generative processing disclosed where relevant
Think twice if:
- an AI tool drafted scientific claims that no author independently verified
- any figure, graphical abstract, or visual summary was generated from text prompts
- the team cannot reconstruct which parts of the manuscript were AI-assisted
- the disclosure plan relies on vague wording such as "AI tools were used" without naming the tool and use case
Bottom line
Nature's AI policy is clear and relatively stable: use AI tools if they help, disclose them in Methods, don't make them authors, and don't let them generate your images. The policy applies to every journal in the Springer Nature portfolio, covering thousands of titles. The biggest risk for authors isn't the policy itself but failing to disclose, because undisclosed AI use that's later discovered creates an integrity problem that's much harder to fix than a simple Methods section statement.
What should you do about Nature's AI policy?
Comply proactively if:
- You used any AI tool (ChatGPT, Grammarly, Copilot) during manuscript preparation
- The journal requires AI use disclosure in the Methods or acknowledgments section
- Your institution has its own AI use policy that may be stricter
Less concerned if:
- You used AI only for grammar/spell checking (most journals exempt this)
- The journal does not have a formal AI policy yet
- Your use was limited to literature search or reference management
Frequently asked questions
Can I use AI tools like ChatGPT when writing a Nature manuscript?
Yes, but with restrictions. Authors can use AI tools like ChatGPT to improve readability and language. However, AI cannot be listed as an author, AI-generated images are prohibited, and generative AI use must be disclosed in the Methods section.
Where should AI use be disclosed in a Nature submission?
In the Methods section of the manuscript. Nature requires authors to describe which AI tools were used and how they were used. Copy editing assistance from AI is exempt from disclosure requirements.
Can I include AI-generated images in a Nature manuscript?
No. Nature explicitly prohibits AI-generated images in manuscripts. This includes images created by tools like DALL-E, Midjourney, or Stable Diffusion. The ban exists because AI-generated images can introduce fabricated visual data that undermines scientific integrity.
Does the AI policy apply to all Nature journals?
Yes. The policy applies across all Nature Portfolio journals and extends to all Springer Nature publications. This includes Nature, Nature Medicine, Nature Methods, Nature Communications, and all other Nature-branded titles, plus journals like Scientific Reports.
Can an AI tool be listed as an author on a Nature paper?
No. Nature's policy states that AI tools cannot meet the criteria for authorship because they cannot take accountability for the work. Only humans who made substantive contributions and can take responsibility for the content qualify as authors.