Publishing Strategy · 7 min read · Updated Mar 25, 2026

The Lancet's AI Policy: Stricter Than Elsevier, With an Anti-AI-Imagery Stance

The Lancet restricts AI to readability and language improvements only, requires disclosure in acknowledgments, and prohibits AI-generated images across all Lancet family journals.

Senior Researcher, Oncology & Cell Biology

Author context

Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.


The Lancet occupies a distinctive position in the AI policy landscape. It's published by Elsevier, the world's largest scientific publisher, but it doesn't simply adopt Elsevier's general AI guidelines. Instead, The Lancet has developed its own stricter framework, and one of its specialty journals has gone further than any other major medical journal in opposing AI-generated imagery. If you're submitting to any Lancet title, the rules you need to follow are more restrictive than what you'd encounter at most other Elsevier journals.

The policy structure

The Lancet's AI policy can be summarized in four rules:

1. AI is allowed for readability and language only. You can use ChatGPT, Claude, or similar tools to improve your English, fix grammar, restructure sentences for clarity, and polish prose. You can't use them to generate scientific arguments, draft methodology descriptions, write literature reviews, or create new content.

2. Disclosure goes in the acknowledgments. When you've used an AI tool, name it and describe the purpose in the acknowledgments section. This is simpler than Science's three-location requirement but less prominent than Cell Press's dedicated section.

3. AI can't be an author. Standard across all top journals. AI tools don't meet authorship criteria because they can't take responsibility for the work.

4. AI-generated images are prohibited. No images from generative AI tools in any part of the manuscript.

The Lancet Global Health editorial

In 2024, Lancet Global Health published an editorial that went beyond standard policy to make an ideological case against AI imagery in health science. The editorial argued that AI-generated images of people, particularly in global health contexts, threaten trust in health communication and risk perpetuating stereotypes.

The editorial's specific concerns:

  • AI-generated images of patients or health workers can create false representations of clinical scenarios
  • In global health, where visual representation of communities is politically and ethically sensitive, AI-generated faces raise consent and authenticity issues
  • The growing use of AI imagery in health communications erodes the evidentiary value of all images, because readers can no longer trust that any image represents a real situation

This editorial called on the entire scientific community to "stop using AI-generated imagery," not just in journal articles but in conference presentations, policy documents, and health communication materials. It's one of the strongest anti-AI-imagery positions published by any major medical journal.

How The Lancet's policy differs from Elsevier's general rules

Elsevier's company-wide AI policy allows authors to use AI tools for writing assistance, with disclosure. It's a permissive framework that covers thousands of Elsevier journals. The Lancet's interpretation is significantly more restrictive:

| Aspect | Elsevier general policy | The Lancet |
| --- | --- | --- |
| AI use scope | Writing assistance (broad) | Readability and language only (narrow) |
| Content generation | Implicitly allowed with disclosure | Not permitted |
| Disclosure location | Varies by journal | Acknowledgments |
| AI imagery stance | Case-by-case | Prohibited, with editorial opposition |
| Integrity infrastructure | Elsevier Research Integrity team | Lancet Research Integrity Group (est. 2024) |

The distinction between "writing assistance" and "readability and language" might seem semantic, but it matters. Elsevier's broader framing could encompass using AI to draft a paragraph explaining a statistical method. The Lancet's framing would not.

The Lancet Research Integrity Group

In 2024, The Lancet established its own Research Integrity Group specifically to handle integrity concerns, including AI-related issues. This group operates alongside Elsevier's existing integrity infrastructure but focuses on Lancet-specific editorial standards.

The group's creation signals that The Lancet's editors consider their integrity needs distinct from the broader Elsevier portfolio. For a journal that publishes some of the most influential medical research in the world (The Lancet's impact factor is roughly 88), maintaining an independent integrity function makes sense. The stakes of a retraction or misconduct finding at The Lancet are career-defining.

What this means for different manuscript types

The Lancet publishes several article types, and the AI policy implications vary:

Original research articles. The readability restriction is most clearly applicable here. Use AI to polish your text, but the scientific content, including methods descriptions, results narratives, and discussion arguments, should be human-generated.

Reviews and meta-analyses. The readability limitation is particularly relevant for reviews, where AI tools could in theory help synthesize literature. Using AI to structure or draft review sections would fall outside the "readability and language" scope.

Comments and correspondence. These shorter formats are opinion-driven, and AI-generated opinions raise obvious authenticity concerns. The Lancet expects these pieces to represent the genuine views of the named authors.

Clinical guidelines. The Lancet publishes influential clinical practice guidelines. AI involvement in guideline drafting would be exceptionally problematic given the direct impact on patient care.

Scope across Lancet journals

The policy applies to all journals in the Lancet family:

  • The Lancet (IF ~88.5)
  • Lancet Oncology (IF ~35)
  • Lancet Infectious Diseases (IF ~31)
  • Lancet Neurology (IF ~44)
  • Lancet Psychiatry (IF ~30)
  • Lancet Global Health (IF ~19)
  • Lancet Digital Health (IF ~23)
  • Lancet Respiratory Medicine (IF ~32)
  • Lancet Public Health
  • Lancet HIV
  • Lancet Haematology
  • Lancet Gastroenterology & Hepatology
  • Lancet Child & Adolescent Health
  • Lancet Healthy Longevity
  • Lancet Microbe
  • Lancet Rheumatology
  • Lancet Regional Health (all editions)
  • eBioMedicine
  • EClinicalMedicine

That's a substantial portfolio. If you're submitting to any Lancet-branded journal, these rules apply regardless of the specific title's subject area or impact factor.

Practical disclosure guidance

The Lancet wants AI disclosure in the acknowledgments. A properly formatted statement:

"The authors acknowledge the use of ChatGPT (OpenAI, GPT-4) for English language editing during manuscript preparation. All content was written by the authors, and the final text was reviewed and approved by all authors."

Things to include:

  • The specific tool name and version
  • What it was used for (language editing, grammar correction, etc.)
  • A statement that the authors reviewed and take responsibility for the content

Things to avoid:

  • Vague descriptions ("AI tools were used")
  • Placing the disclosure in the wrong section (don't put it in Methods or in a separate section)
  • Failing to mention AI use in the cover letter as well (while the formal policy specifies acknowledgments, mentioning it in the cover letter demonstrates transparency)

The Lancet vs. other top medical journals

| Feature | The Lancet | NEJM | BMJ | JAMA |
| --- | --- | --- | --- | --- |
| AI scope | Readability and language only | Writing assistance with disclosure | Writing assistance with disclosure | Writing assistance with disclosure |
| Disclosure location | Acknowledgments | Cover letter + manuscript | Methods or acknowledgments | Cover letter + manuscript |
| AI images | Banned (strong editorial opposition) | Not explicitly addressed in detail | Banned | Not explicitly addressed in detail |
| Dedicated integrity group | Yes (est. 2024) | No (uses ICMJE framework) | No (uses COPE framework) | No (uses ICMJE framework) |
| Parent publisher policy | Stricter than Elsevier general | Independent (NEJM Group) | Stricter than BMJ general | Independent (AMA) |

The Lancet and Cell Press share the "readability only" restriction, making them the most conservative among the top-tier journals. NEJM, BMJ, and JAMA allow broader writing assistance.

Advice for Lancet submissions

Stay within the readability boundary. If you're tempted to use AI for anything beyond language polishing, don't. The Lancet's restriction is narrow and intentional.

Be especially careful with clinical content. The Lancet publishes clinical research that directly influences treatment decisions. AI-generated clinical content, even if reviewed by authors, introduces a layer of risk that editors are acutely aware of.

Don't assume Elsevier rules apply. If you've published with other Elsevier journals and used AI more freely, tighten your approach for Lancet submissions. The Lancet's rules are stricter.

Consider the journal's position on AI imagery. If your submission includes any visual content that was created or modified using AI tools, remove it before submission. The Lancet's editorial stance on AI imagery is among the strongest in publishing.

Non-native English speakers. The Lancet's policy is designed to support you. Use AI tools freely for language improvement, disclose it, and focus your energy on the scientific content.


Bottom line

The Lancet restricts AI to readability and language improvements only, requires disclosure in the acknowledgments, bans AI-generated images, and has established a dedicated Research Integrity Group to handle AI-related concerns. It's stricter than its parent publisher Elsevier and stricter than Nature, Science, or NEJM on what AI use is permitted. The Lancet Global Health's editorial calling for an end to all AI-generated health imagery represents the sharpest public stance any major journal has taken on AI visuals. Authors should treat these rules as non-negotiable across all 20+ Lancet family journals.

References

  1. Elsevier AI publishing policy
  2. The Lancet author information
  3. The Lancet Global Health editorial on AI imagery
  4. The Lancet Research Integrity Group announcement
