Publishing Strategy · 7 min read · Updated Mar 25, 2026

Science (AAAS) AI Policy: From Total Ban to Mandatory Disclosure

Science requires AI disclosure in three locations (cover letter, acknowledgments, methods), classifies violations as scientific misconduct, and prohibits AI-generated images without editor permission.

Senior Researcher, Oncology & Cell Biology

Author context

Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.


Science's AI policy has the most dramatic origin story among top journals. In January 2023, editor-in-chief Holden Thorp published an editorial flatly banning AI-generated text from the journal. Ten months later, Science reversed course and adopted a disclosure-based model instead. That reversal tells you something about how fast this landscape is moving, and why you need to know exactly where Science stands today.

The policy reversal timeline

| Date | Policy position |
| --- | --- |
| January 2023 | Editor-in-chief Holden Thorp publishes the editorial "ChatGPT is fun, but not an author." AI-generated text banned from all Science family journals. |
| January-October 2023 | Ban in effect. Authors prohibited from using LLMs for any text generation in manuscripts. |
| November 2023 | Science reverses position. New policy allows AI tools with mandatory disclosure. |
| 2024-2026 | Disclosure-based policy stable. Violations classified as scientific misconduct. |

The initial ban was understandable. In early 2023, nobody knew how pervasive AI writing tools would become, and an outright prohibition was the safest default. But by late 2023, it was clear that banning AI text was both unenforceable (detection tools couldn't reliably distinguish AI from human writing) and counterproductive (legitimate uses like language polishing for non-native English speakers were being prohibited).

The current rules

Science's current AI policy has four components:

1. AI can't be an author. Like every major journal, Science doesn't allow AI tools to be listed as authors. The reasoning: authors must take accountability for published work, and AI can't do that.

2. All AI use must be disclosed in three places. This is where Science differs from most journals. You must describe AI use in:

  • The cover letter (so editors know before review begins)
  • The acknowledgments section (so it's visible in the published paper)
  • The methods or supplementary materials (so the technical details are documented)

Three disclosure points is more than Nature (Methods only) or Cell Press (a dedicated section before the References) requires. Science wants AI use flagged at every stage of the editorial process.

3. AI-generated images require editor permission. You can't include images created by generative AI tools without explicit advance approval from your handling editor. This is slightly less absolute than Nature's outright ban, but in practice, getting that approval for a standard research paper would be unusual.

4. Violations constitute scientific misconduct. This is where Science's policy has the sharpest teeth. Undisclosed AI use isn't treated as a paperwork error or an oversight. It's classified as misconduct, the same category as data fabrication and plagiarism. Misconduct findings can trigger retraction, institutional notification, and publishing bans.

Why the misconduct classification matters

Most journals treat AI policy violations as ethical lapses to be handled on a case-by-case basis. Science is more categorical. When Science says "scientific misconduct," it invokes a specific framework:

  • The journal can notify your institution's research integrity office
  • Your institution may be obligated to investigate under federal research misconduct policies (especially if you receive NIH, NSF, or other federal funding)
  • A misconduct finding goes on your record and can affect future funding and publication

This doesn't mean Science will retract your paper if you forgot to mention you used Grammarly. The policy targets substantive undisclosed use, like generating entire sections of text without disclosure. But the classification signals that Science takes this more seriously than a typical policy violation.

What disclosure looks like at Science

Science's three-point disclosure requirement means you need to think about AI use before you even begin the submission process.

In the cover letter:

"During preparation of this manuscript, we used Claude (Anthropic) to assist with language editing of the Results and Discussion sections. All AI-assisted text was reviewed, verified, and edited by the authors, who take full responsibility for the content."

In the acknowledgments:

"The authors acknowledge the use of Claude (Anthropic) for language editing assistance during manuscript preparation."

In the methods/supplementary materials:

"Language editing: Sections of the manuscript were refined using Claude (Anthropic, Claude 3.5 Sonnet) for clarity and grammar. The authors reviewed all AI-assisted text and verified its accuracy against the original data and analysis."

The key principle: be specific about which tool, which version if relevant, and which parts of the manuscript were affected.

The image policy in detail

Science's image policy is worth understanding because it's slightly more nuanced than a blanket ban. The journal prohibits AI-generated images by default but allows exceptions with explicit editor permission. In practice, this means:

  • Banned without permission: DALL-E, Midjourney, or Stable Diffusion graphics used as figures, schematics, or graphical abstracts
  • Requires disclosure: AI-enhanced processing of real images (denoising, super-resolution, etc.)
  • Potentially allowed with permission: AI-generated visualizations that serve a scientific purpose and are clearly labeled as AI-generated

The "with permission" pathway exists primarily for papers about AI image generation itself, or for computational studies where AI-generated visualizations are part of the research methodology. For a standard biology, chemistry, or physics paper, don't expect the editor to approve AI-generated figures.

How Science enforces the policy

Like most journals, Science relies primarily on author self-declaration rather than automated detection. The submission system includes checkboxes and attestations related to AI use. Editors and reviewers may flag suspicious text, but there's no systematic AI-detection screening.

Post-publication enforcement is where the misconduct classification becomes relevant. If another researcher, a reviewer, or a reader identifies undisclosed AI use after publication, Science has the policy infrastructure to treat it as a formal misconduct case rather than just issuing a correction.

Science vs. other top journals

| Policy aspect | Science | Nature | Cell Press | NEJM | Lancet |
| --- | --- | --- | --- | --- | --- |
| AI text allowed | Yes (since Nov 2023) | Yes | Yes (readability only) | Yes | Yes (readability only) |
| Disclosure locations | Cover letter + acknowledgments + methods | Methods | Dedicated section before References | Cover letter + manuscript | Acknowledgments |
| AI images | Banned (editor exceptions possible) | Banned | Banned | Not explicitly addressed | Banned |
| AI authorship | Prohibited | Prohibited | Prohibited | Prohibited | Prohibited |
| Violation severity | Scientific misconduct | Case-by-case | Case-by-case | Case-by-case | Case-by-case |
| Had a ban phase | Yes (Jan-Nov 2023) | No | No | No | No |

Science's three-point disclosure requirement and misconduct classification make it the most demanding of the top-tier journals on AI transparency. Whether this is appropriate vigilance or excessive bureaucracy depends on your perspective, but either way, you need to comply.

Practical guidance for Science submissions

Before writing:

  • Decide upfront whether you'll use AI tools. If yes, plan your disclosure language early.
  • Keep records of which tools you used and which manuscript sections they touched. You'll need this specificity for the three disclosure points; one way to keep that log is sketched after this list.
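One way to keep those records is a small structured log that lives alongside the manuscript draft. The sketch below is illustrative rather than anything Science prescribes: the file name ai_use_log.json, the field names, and the record_ai_use helper are all assumptions, and any format that captures the tool, version, sections touched, and purpose would serve equally well.

```python
import json
from datetime import date

# Hypothetical log file for AI use during manuscript preparation.
# Nothing about this format is mandated by Science; it simply captures
# the specifics the three disclosure statements will later need.
AI_USE_LOG = "ai_use_log.json"

def record_ai_use(tool, version, sections, purpose, log_path=AI_USE_LOG):
    """Append one AI-use entry (tool, version, sections touched, purpose)."""
    try:
        with open(log_path) as f:
            entries = json.load(f)
    except FileNotFoundError:
        entries = []
    entries.append({
        "date": date.today().isoformat(),
        "tool": tool,
        "version": version,
        "sections": sections,
        "purpose": purpose,
    })
    with open(log_path, "w") as f:
        json.dump(entries, f, indent=2)

# Example entry: language editing of two sections with Claude.
record_ai_use(
    tool="Claude (Anthropic)",
    version="Claude 3.5 Sonnet",
    sections=["Results", "Discussion"],
    purpose="Language editing for clarity and grammar",
)
```

When it's time to draft the cover letter, acknowledgments, and methods statements, the log supplies the tool names, versions, and section lists without reconstructing them from memory.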

During writing:

  • Use AI tools if they genuinely help, but don't use them just because they're available. Every AI use creates a disclosure obligation.
  • Don't have AI generate your cover letter, the document where you're supposed to disclose AI use. The irony won't be lost on editors.

During submission:

  • Include AI disclosure in all three required locations. Missing even one creates a compliance gap; a quick pre-upload self-check is sketched after this list.
  • Be honest about the extent of AI involvement. "Used for minor language editing" when AI generated entire paragraphs is the kind of discrepancy that becomes a misconduct case.
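As a last pre-upload check, it can help to confirm that each of the three locations actually contains disclosure language. The sketch below assumes plain-text exports of the cover letter, acknowledgments, and methods under the hypothetical file names shown; the keyword list is also an assumption, and a keyword hit only confirms that some AI-related language is present, not that the statement is accurate or specific enough.

```python
from pathlib import Path

# Hypothetical plain-text exports of the three locations Science requires;
# adjust the names to match your own submission package.
DISCLOSURE_FILES = {
    "cover letter": "cover_letter.txt",
    "acknowledgments": "acknowledgments.txt",
    "methods / supplementary": "methods.txt",
}

# Crude keyword screen: a hit is a reminder that a statement exists in that
# file, not proof that the disclosure is adequate.
KEYWORDS = ("artificial intelligence", "ai-assisted", "large language model",
            "llm", "claude", "chatgpt", "copilot")

def missing_disclosures(files=DISCLOSURE_FILES, keywords=KEYWORDS):
    """Return the disclosure locations whose file lacks any AI-related keyword."""
    missing = []
    for location, filename in files.items():
        path = Path(filename)
        if not path.exists():
            missing.append(location)
            continue
        text = path.read_text(encoding="utf-8").lower()
        if not any(keyword in text for keyword in keywords):
            missing.append(location)
    return missing

if __name__ == "__main__":
    gaps = missing_disclosures()
    if gaps:
        print("No AI disclosure found in:", ", ".join(gaps))
    else:
        print("All three disclosure locations contain AI-related language.")
```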

Common pitfalls:

  • Forgetting the cover letter disclosure. Most authors remember the manuscript sections but overlook that Science also requires it in the cover letter.
  • Using generic disclosure language. "AI tools were used during preparation" isn't specific enough. Name the tool, describe the use.
  • Assuming code generation doesn't count. If you used GitHub Copilot or similar tools for analysis code, that falls under the policy too; one way to document it in the script itself is sketched below.
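If AI-assisted coding is in scope, one low-friction habit is to note the assistance in the analysis script itself, so the methods disclosure can be written directly from the code. The header below is only one way to do it; the file name, wording, and example function are assumptions, not anything Science prescribes.

```python
# analysis/response_rates.py
#
# Portions of this script were drafted with GitHub Copilot and subsequently
# reviewed, tested, and edited by the authors, who verified all outputs
# against the underlying data. This assistance is disclosed in the cover
# letter, acknowledgments, and methods per Science's AI policy.

from statistics import median

def median_survival_months(survival_months: list[float]) -> float:
    """Example analysis step: median overall survival in months."""
    return median(survival_months)
```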


The broader AAAS context

Science's AI policy applies to all journals in the Science family:

  • Science
  • Science Advances
  • Science Translational Medicine
  • Science Signaling
  • Science Immunology
  • Science Robotics

The same three-point disclosure requirement and misconduct classification apply across all six journals. If you're submitting to Science Advances thinking the rules might be softer, they aren't.

Bottom line

Science went from banning AI text to requiring detailed disclosure in under a year, and it classifies violations as scientific misconduct. That's the most aggressive enforcement position among the top journals. Authors submitting to any Science family journal need to disclose AI use in three places (cover letter, acknowledgments, methods), can't use AI-generated images without editor approval, and should understand that getting this wrong isn't a minor paperwork issue. It's a misconduct charge.

References

  1. Thorp, H. H. "ChatGPT is fun, but not an author." Science editorial, January 2023.
  2. Science editorial policy on AI (updated November 2023).
  3. AAAS author guidelines.
  4. Science news: AI policy update.
