Publishing Strategy · 8 min read · Updated Mar 25, 2026

eLife's AI Policy: Funder-Backed Open Access Meets AI Transparency

eLife requires AI disclosure in Methods and amplifies accountability through its public peer review model, where reviewer concerns about AI use become permanently visible alongside the published paper.

Senior Researcher, Oncology & Cell Biology

Author context

Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.


eLife changed how it operates in 2023, and the change matters for AI policy. The journal no longer makes accept/reject decisions. Instead, every manuscript that passes initial review is published alongside public peer reviews and an eLife assessment. This means that if a reviewer flags AI-generated language in your paper, that concern doesn't stay in a confidential review file; it becomes a permanent, public annotation attached to your article. For AI disclosure, this creates a transparency mechanism that no other major journal can match. You can't hide an AI concern at eLife the way you might at a journal with confidential review.

The eLife AI policy

eLife's AI policy follows standard principles but exists within a unique publishing model:

  1. AI can't be an author. eLife follows ICMJE and COPE guidelines: AI tools don't meet authorship criteria.
  2. AI use must be disclosed in Methods. If you used generative AI tools during manuscript preparation, describe this in the Methods section with enough detail for readers to assess the scope.
  3. AI-generated images are prohibited. No figures or visual content from generative AI tools. Data-derived visualizations are fine.
  4. Authors bear full responsibility. Every listed author must verify all content, including AI-assisted sections.
  5. The public review model amplifies accountability. Any AI concerns raised during peer review become part of the published record.

eLife's unique publishing model and AI implications

Since January 2023, eLife has operated differently from traditional journals:

No accept/reject decisions. After desk screening, manuscripts that proceed to review are published regardless of reviewer recommendation. The published version includes the full manuscript, peer reviews, the eLife assessment (a summary evaluation), and the authors' response to reviews.

Public peer reviews. Reviewer comments are published alongside the paper. Reviewer identities can be made public if the reviewer opts in.

eLife assessments. Each paper receives a summary assessment with standardized descriptors (e.g., "important," "convincing," "solid") that indicate the editors' view of the work's significance and strength of evidence.

What this means for AI disclosure:

| Traditional journal | eLife |
| --- | --- |
| Reviewer flags AI concern → stays in private review file | Reviewer flags AI concern → published permanently with the paper |
| Editor handles AI disclosure privately | AI disclosure issues become publicly visible |
| Post-publication correction is separate | The review trail shows what was flagged and how it was resolved |
| No public quality signal | eLife assessment reflects overall evaluation |

The practical consequence: at eLife, getting caught with inadequate AI disclosure is worse than at most journals because the evidence is public. If a reviewer writes "the Discussion reads as if it was generated by an LLM and no disclosure was provided," that comment will be attached to your paper forever.

Why eLife's funding model matters for AI

eLife now charges publication fees (introduced in 2024), but remains primarily funder-backed by research organizations (Howard Hughes Medical Institute, Wellcome Trust, Max Planck Society, among others). This creates a different incentive structure from that of fully APC-dependent gold OA journals:

Lower volume-driven pressure. Because publication fees supplement rather than drive revenue, eLife faces less pressure to accept papers quickly than fully APC-dependent publishers do.

Funder accountability. eLife's funders have their own positions on research integrity and AI. The journal's AI policy reflects not just editorial judgment but the expectations of organizations like HHMI and Wellcome, which have published their own guidance on AI in research.

Lower financial barrier to compliance. eLife's publication fees are moderate compared to many gold OA journals, and fee waivers are available. This reduces the sunk-cost pressure that can discourage authors at more expensive venues from cooperating with disclosure corrections.

Writing the disclosure for eLife

eLife's format is relatively flexible compared to STAR Methods journals. Your AI disclosure goes in a standard Methods section:

For a biology paper:

"During preparation of this manuscript, the authors used ChatGPT (GPT-4, OpenAI) to improve the language and clarity of the Discussion section. All suggestions were reviewed by the corresponding author (J.K.) and verified against the experimental results. The authors take full responsibility for the published content."

For a computational biology paper:

"The neural network described in this paper was developed using PyTorch 2.1 (see Methods: Model Architecture). Separately, during manuscript preparation, the authors used Claude (Claude 3.5, Anthropic) to edit the Introduction for clarity. GitHub Copilot (Microsoft) assisted with writing test scripts for the benchmarking suite. All code is available at [GitHub URL]. The authors take full responsibility for the content."

For a paper with extensive analysis code:

"The authors used GitHub Copilot (Microsoft) to assist with writing Python scripts for the single-cell RNA-seq analysis pipeline. All scripts were validated against published reference datasets and are available in the accompanying GitHub repository. ChatGPT (GPT-4, OpenAI) was used to improve the readability of the Results section. The authors take full responsibility for the published content."

The public review consideration

When writing your AI disclosure for eLife, remember that reviewers will comment publicly. A thorough disclosure preempts reviewer concerns. If your disclosure is vague ("AI tools were used"), a reviewer might publicly ask for specifics, creating a visible record of inadequate initial compliance. A detailed, upfront disclosure avoids this.

What requires disclosure at eLife

| Use case | Disclosure required? | eLife-specific notes |
| --- | --- | --- |
| Grammar/spell check | No | Standard tools exempt |
| ChatGPT for language editing | Yes | Methods section |
| AI for analysis code | Yes | Code should be deposited publicly |
| AI as research subject | No (research method) | Standard Methods |
| AI-generated figures | Prohibited | Data-derived visualizations fine |
| Translation | Yes | Name tool and languages |
| AI for preprint text | Yes | eLife accepts preprints; still disclose |
| AI for response to reviewers | Gray area | Reviewers see the response publicly |
| AI for revision of manuscript | Yes | Update Methods disclosure |
| AI for data processing scripts | Yes | Confirm validation |

The preprint point is important. eLife encourages preprint posting and accepts submissions via bioRxiv and medRxiv. If you used AI for the preprint version and then submit to eLife, the disclosure should cover AI use during all phases of manuscript preparation, including the preprint stage.

The response to reviewers point is uniquely relevant at eLife. Since reviewer responses are published, using AI to draft your response creates a publicly visible document. eLife doesn't technically require disclosure of AI use in review responses, but transparency is the journal's core value; if reviewers can tell your response was AI-generated, it undermines the dialogue.

Consequences of non-disclosure

During review:

  • Reviewers may publicly flag AI concerns in their review
  • Editor requests disclosure addition
  • The review flagging becomes part of the permanent public record

After publication:

  • The paper, reviews, and eLife assessment are already published together
  • A post-publication correction adds to the public record
  • Expression of concern or retraction for serious cases
  • COPE guidelines apply for investigation

The public accountability factor: At most journals, the worst case for non-disclosure is a private conversation with an editor followed by a published correction. At eLife, the entire chain of events is visible: the paper, the reviewers' concerns, the editor's assessment, and any subsequent corrections. This transparency creates a stronger deterrent but also means that legitimate oversights become publicly visible alongside deliberate concealment. The advice is simple: disclose thoroughly upfront. You won't regret over-disclosure, but you'll definitely regret under-disclosure if it's flagged publicly.

Comparison with other high-profile journals

| Feature | eLife | Nature | Science | Cell | PNAS |
| --- | --- | --- | --- | --- | --- |
| Publisher | eLife Sciences | Springer Nature | AAAS | Cell Press (Elsevier) | NAS |
| AI authorship | Prohibited | Prohibited | Prohibited | Prohibited | Prohibited |
| Disclosure location | Methods | Methods | Acknowledgments/Methods | STAR Methods | Methods + Author Contributions |
| AI image ban | Yes | Yes | Yes | Yes | Yes |
| Access model | Funder-backed OA | Subscription + OA | Subscription + OA | Subscription + OA | Mixed |
| Publication fee | Moderate fee (waivers available) | N/A (subscription) | N/A (subscription) | N/A / ~$9,900 for OA | ~$2,350 |
| Public peer review | Yes | No | No | No | No |
| Accept/reject decision | No (all reviewed papers published) | Yes | Yes | Yes | Yes |

eLife is the only journal in this comparison with public peer review and no accept/reject decisions. This combination creates the strongest AI disclosure accountability mechanism of any major journal, not because the rules are stricter, but because the transparency makes non-compliance much more visible.

Practical advice for eLife submissions

For all submissions:

  • Disclose thoroughly in Methods. Remember, reviewers will comment publicly, and vague disclosure invites pointed questions.
  • If you posted a preprint and used AI at any stage, include that in the disclosure.
  • Keep in mind that eLife assessments are public. An "incomplete" or "inadequate" assessment partly driven by AI disclosure concerns becomes a permanent quality signal.

For computational biology and bioinformatics:

  • eLife publishes significant computational work. Deposit code publicly and note which portions were AI-assisted.
  • Benchmarking papers should specify whether AI helped with benchmarking code or parameter selection.

For structural and molecular biology:

  • AlphaFold and similar tools are research methods, not writing tools; describe them in Methods as computational approaches.
  • If AI helped write analysis scripts for cryo-EM or crystallography, that's a writing disclosure.

For neuroscience and systems biology:

  • Don't input participant data into cloud AI tools
  • Analysis pipeline code should be independently reproducible without AI tool access

Before submission checklist:

  • [ ] AI disclosure in Methods section
  • [ ] Tool name, version, and use case specified
  • [ ] No AI-generated images
  • [ ] Code deposited in public repository
  • [ ] Disclosure covers preprint stage if applicable
  • [ ] All co-authors aware of disclosure
  • [ ] Disclosure is specific enough to preempt reviewer questions

A free manuscript assessment can help verify your eLife submission meets the journal's transparency standards.

References

  1. eLife author guide and policies
  2. eLife submission guidelines
  3. eLife editorial process overview
  4. COPE position statement on AI
  5. ICMJE Recommendations
