Publishing Strategy · 7 min read · Updated Apr 2, 2026

Science (AAAS) AI Policy: From Total Ban to Mandatory Disclosure

Science requires AI disclosure in three locations (cover letter, acknowledgments, methods), classifies violations as scientific misconduct, and prohibits AI-generated images without editor permission.

Author context: Senior Researcher, Oncology & Cell Biology. Experience with Nature Medicine, Cancer Cell, Journal of Clinical Oncology.

Journal context

Science at a glance

Key metrics to place the journal before deciding whether it fits your manuscript and career goals.

  • Impact factor: 45.8 (Clarivate JCR)
  • Acceptance rate: <7% overall
  • Time to first decision: ~14 days

What makes this journal worth targeting

  • An impact factor of 45.8 places Science among the most-cited journals in any field — citations from papers here carry real weight.
  • Scope fit matters more than impact factor for most manuscript decisions.
  • An acceptance rate below 7% means fit determines most outcomes.

When to look elsewhere

  • When your paper sits at the edge of the journal's stated scope — borderline fit rarely improves after submission.
  • If timeline matters: Science takes ~14 days to reach a first decision. A faster-turnaround journal may suit a grant or job deadline better.
  • If open access is required by your funder, verify the journal's OA agreements before submitting.

Quick answer: Science's AI policy has the most dramatic origin story among top journals. In January 2023, editor-in-chief Holden Thorp published an editorial flatly banning AI-generated text from the journal. Ten months later, Science reversed course and adopted a disclosure-based model instead. That reversal tells you something about how fast this landscape is moving, and why you need to know exactly where Science stands today.

Science AI Policy at a Glance

  • AI authorship: Prohibited. AI tools cannot be listed as authors and cannot take accountability for the work.
  • AI disclosure: Required. Disclose use of AI tools (e.g., ChatGPT, Claude, Gemini) in the Acknowledgments section.
  • AI-generated images: Prohibited. AI-created figures, illustrations, or visualizations are not permitted in the manuscript.
  • Copy editing: All AI use, including copy editing, must be disclosed.

The policy reversal timeline

  • January 2023 — Editor-in-chief Holden Thorp publishes the editorial "ChatGPT is fun, but not an author." AI-generated text is banned from all Science family journals.
  • January–October 2023 — Ban in effect. Authors are prohibited from using LLMs for any text generation in manuscripts.
  • November 2023 — Science reverses position. The new policy allows AI tools with mandatory disclosure.
  • 2024–2026 — Disclosure-based policy stable. Violations are classified as scientific misconduct.

The initial ban was understandable. In early 2023, nobody knew how pervasive AI writing tools would become, and an outright prohibition was the safest default. But by late 2023, it was clear that banning AI text was both unenforceable (detection tools couldn't reliably distinguish AI from human writing) and counterproductive (legitimate uses like language polishing for non-native English speakers were being prohibited).

The current rules

Science's current AI policy has four components:

1. AI can't be an author. Like every major journal, Science doesn't allow AI tools to be listed as authors. The reasoning: authors must take accountability for published work, and AI can't do that.

2. All AI use must be disclosed in three places. This is where Science differs from most journals. You must describe AI use in:

  • The cover letter (so editors know before review begins)
  • The acknowledgments section (so it's visible in the published paper)
  • The methods or supplementary materials (so the technical details are documented)

Three disclosure points is more than Nature (Methods only) or Cell Press (a dedicated section before References) requires. Science wants AI use flagged at every stage of the editorial process.

3. AI-generated images require editor permission. You can't include images created by generative AI tools without explicit advance approval from your handling editor. This is slightly less absolute than Nature's outright ban, but in practice, getting that approval for a standard research paper would be unusual.

4. Violations constitute scientific misconduct. This is where Science's policy shows its sharpest teeth. Undisclosed AI use isn't treated as a paperwork error or an oversight. It's classified as misconduct, the same category as data fabrication and plagiarism. Misconduct findings can trigger retraction, institutional notification, and publishing bans.

Why the misconduct classification matters

Most journals treat AI policy violations as ethical lapses to be handled on a case-by-case basis. Science is more categorical. When Science says "scientific misconduct," it invokes a specific framework:

  • The journal can notify your institution's research integrity office
  • Your institution may be obligated to investigate under federal research misconduct policies (especially if you receive NIH, NSF, or other federal funding)
  • A misconduct finding goes on your record and can affect future funding and publication

This doesn't mean Science will retract your paper if you forgot to mention you used Grammarly. The policy targets substantive undisclosed use, like generating entire sections of text without disclosure. But the classification signals that Science takes this more seriously than a typical policy violation.

What disclosure looks like at Science

Science's three-point disclosure requirement means you need to think about AI use before you even begin the submission process.

In the cover letter:

"During preparation of this manuscript, we used Claude (Anthropic) to assist with language editing of the Results and Discussion sections. All AI-assisted text was reviewed, verified, and edited by the authors, who take full responsibility for the content."

In the acknowledgments:

"The authors acknowledge the use of Claude (Anthropic) for language editing assistance during manuscript preparation."

In the methods/supplementary materials:

"Language editing: Sections of the manuscript were refined using Claude (Anthropic, Claude 3.5 Sonnet) for clarity and grammar. The authors reviewed all AI-assisted text and verified its accuracy against the original data and analysis."

The key principle: be specific about which tool, which version if relevant, and which parts of the manuscript were affected.

The image policy in detail

Science's image policy is worth understanding because it's slightly more nuanced than a blanket ban. The journal prohibits AI-generated images by default but allows exceptions with explicit editor permission. In practice, this means:

  • Banned without permission: DALL-E, Midjourney, or Stable Diffusion graphics used as figures, schematics, or graphical abstracts
  • Requires disclosure: AI-enhanced processing of real images (denoising, super-resolution, etc.)
  • Potentially allowed with permission: AI-generated visualizations that serve a scientific purpose and are clearly labeled as AI-generated

The "with permission" pathway exists primarily for papers about AI image generation itself, or for computational studies where AI-generated visualizations are part of the research methodology. For a standard biology, chemistry, or physics paper, don't expect the editor to approve AI-generated figures.

How Science enforces the policy

Like most journals, Science relies primarily on author self-declaration rather than automated detection. The submission system includes checkboxes and attestations related to AI use. Editors and reviewers may flag suspicious text, but there's no systematic AI-detection screening.

Post-publication enforcement is where the misconduct classification becomes relevant. If another researcher, a reviewer, or a reader identifies undisclosed AI use after publication, Science has the policy infrastructure to treat it as a formal misconduct case rather than just issuing a correction.

Science vs. other top journals

| Policy aspect | Science | Nature | Cell Press | NEJM | Lancet |
|---|---|---|---|---|---|
| AI text allowed | Yes (since Nov 2023) | Yes | Yes (readability only) | Yes | Yes (readability only) |
| Disclosure locations | Cover letter + acknowledgments + methods | Methods | Dedicated section before References | Cover letter + manuscript | Acknowledgments |
| AI images | Banned (editor exceptions possible) | Banned | Banned | Not explicitly addressed | Banned |
| AI authorship | Prohibited | Prohibited | Prohibited | Prohibited | Prohibited |
| Violation severity | Scientific misconduct | Case-by-case | Case-by-case | Case-by-case | Case-by-case |
| Had a ban phase | Yes (Jan–Nov 2023) | No | No | No | No |

Science's three-point disclosure requirement and misconduct classification make it the most demanding of the top-tier journals on AI transparency. Whether this is appropriate vigilance or excessive bureaucracy depends on your perspective, but either way, you need to comply.


Practical guidance for Science submissions

Before writing:

  • Decide upfront whether you'll use AI tools. If yes, plan your disclosure language early.
  • Keep records of which tools you used and which manuscript sections they touched. You'll need this specificity for the three disclosure points.

During writing:

  • Use AI tools if they genuinely help, but don't use them just because they're available. Every AI use creates a disclosure obligation.
  • Don't have AI generate your cover letter, the document where you're supposed to disclose AI use. The irony won't be lost on editors.

During submission:

  • Include AI disclosure in all three required locations. Missing even one creates a compliance gap.
  • Be honest about the extent of AI involvement. "Used for minor language editing" when AI generated entire paragraphs is the kind of discrepancy that becomes a misconduct case.

Common pitfalls:

  • Forgetting the cover letter disclosure. Most authors remember the manuscript sections but overlook that Science also requires it in the cover letter.
  • Using generic disclosure language. "AI tools were used during preparation" isn't specific enough. Name the tool, describe the use.
  • Assuming code generation doesn't count. If you used GitHub Copilot or similar tools for analysis code, that falls under the policy too.

Not sure if your manuscript meets Science's disclosure requirements? A Science submission readiness check can help you identify gaps before submission.

The broader AAAS context

Science's AI policy applies to all journals in the Science family: Science, Science Advances, Science Immunology, Science Robotics, Science Signaling, and Science Translational Medicine.

The same three-point disclosure requirement and misconduct classification apply across all six journals. If you're submitting to Science Advances thinking the rules might be softer, they aren't.

Bottom line

Science went from banning AI text to requiring detailed disclosure in under a year, and it classifies violations as scientific misconduct. That's the most aggressive enforcement position among the top journals. Authors submitting to any Science family journal need to disclose AI use in three places (cover letter, acknowledgments, methods), can't use AI-generated images without editor approval, and should understand that getting this wrong isn't a minor paperwork issue. It's a misconduct charge.

What should you do about Science's AI policy?

Comply proactively if:

  • You used any AI tool (ChatGPT, Grammarly, Copilot) during manuscript preparation
  • The journal requires AI use disclosure in the methods or acknowledgments
  • Your institution has its own AI use policy that may be stricter

Less concerned if:

  • You used AI only for grammar/spell checking (most journals exempt this)
  • The journal does not have a formal AI policy yet
  • Your use was limited to literature search or reference management

Frequently asked questions

Does Science allow AI-generated text?

Yes, as of November 2023. Science initially banned all AI-generated text in January 2023 but reversed course ten months later. Authors can now use AI tools for writing assistance, but must disclose all AI use in the cover letter, acknowledgments, and methods sections.

What happens if AI use isn't disclosed?

Science treats undisclosed AI use as scientific misconduct. This is one of the strongest enforcement positions among major journals. Misconduct findings can trigger retractions, institutional investigations, and publishing bans.

Where must AI use be disclosed?

Three places: (1) the cover letter, (2) the acknowledgments section, and (3) the methods or supplementary materials. Science requires more disclosure touchpoints than most journals, which typically require only one or two.

Can I include AI-generated images?

Not without explicit editor permission. Science prohibits AI-generated imagery by default. If there is a scientific reason to include AI-generated visual content, authors must obtain advance permission from the editor handling their manuscript.

How does Science's policy compare with Nature's?

Science requires more disclosure locations (three vs. one for Nature) and classifies violations as misconduct (Nature handles violations case-by-case). Science also went through a public policy reversal, having banned AI text before allowing it. Nature never imposed a full ban.

References

  1. Science editorial: ChatGPT is fun, but not an author (Holden Thorp, Jan 2023)
  2. Science editorial policy on AI (updated Nov 2023)
  3. AAAS author guidelines
  4. Science news: AI policy update
