Lancet Oncology's AI Policy: More Restrictive Than JCO, JAMA Oncology, and Most Competitors
Lancet Oncology restricts AI to readability and language improvements only, stricter than JCO and JAMA Oncology, with disclosure required in the acknowledgments section.
Senior Researcher, Oncology & Cell Biology
Author context
Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
Most oncology journals allow AI for general "writing assistance" and leave it to authors to interpret what that means. Lancet Oncology doesn't. It restricts AI use to "readability and language" improvements only, a narrower scope than what you'll find at JCO, JAMA Oncology, Annals of Oncology, or Cancer Discovery. That single-phrase distinction, "readability and language" versus "writing assistance," determines whether using AI to draft a paragraph explaining your statistical approach is acceptable or a policy violation. At Lancet Oncology, it's the latter.
The Lancet family AI policy
Lancet Oncology doesn't maintain its own standalone AI policy. It follows the Lancet family policy, which applies identically across all 20+ Lancet titles. The rules:
1. AI is permitted for readability and language only. You can use ChatGPT, Claude, or similar tools to fix grammar, improve sentence flow, restructure text for clarity, and polish English prose. You can't use them to generate content, draft scientific arguments, write literature syntheses, or create new material.
2. Disclosure goes in the acknowledgments. Not Methods. This is a Lancet-specific placement that differs from JCO, JAMA Oncology, and most other major oncology journals. Name the tool, describe how it was used, and place it in the acknowledgments section.
3. AI can't be an author. ICMJE criteria require accountability and responsibility. AI tools provide neither. No Lancet journal will accept a submission with an AI tool in the author list.
4. AI-generated images are prohibited. No generative AI figures, no AI-created graphical abstracts, no AI-synthesized clinical illustrations. Lancet Global Health published an editorial in 2024 calling for the end of all AI-generated imagery in health science; this position applies across the Lancet family.
5. Authors bear complete responsibility. Every co-author vouches for the accuracy of all content, including language that was edited by AI tools. If AI rephrasing introduces an inaccuracy in a clinical claim, the authors are accountable.
Why "readability and language" matters more than it sounds
The distinction between Lancet Oncology's "readability and language" limitation and the broader "writing assistance" permission at most other journals isn't just semantic. Consider these use cases:
Permitted at Lancet Oncology:
- Using AI to fix grammar and spelling in a paragraph you wrote
- Having AI restructure a long, complex sentence into two clearer ones
- Using AI to improve the flow between paragraphs you drafted
- Translating your text from your native language to English and then polishing it
Not permitted at Lancet Oncology (but allowed at JCO or JAMA Oncology):
- Asking AI to draft a paragraph explaining the Cox proportional hazards model you used
- Having AI generate a summary of five clinical trials for your introduction
- Using AI to write the first draft of a discussion section based on your bullet points
- Asking AI to compose responses to reviewer comments
This is a meaningful difference. At JCO, you could ask ChatGPT to help you draft a Methods paragraph about your statistical approach, disclose it, and be compliant. At Lancet Oncology, that same use would violate the policy. The content must come from you; AI can only clean it up.
How Lancet Oncology compares to its parent publisher
Lancet Oncology is published by Elsevier, the world's largest scientific publisher. Elsevier's general AI policy is permissive: it allows AI for "writing assistance" broadly, with disclosure. The Lancet family has chosen to apply a stricter interpretation:
| Aspect | Elsevier general policy | Lancet Oncology (Lancet family) |
|---|---|---|
| AI use scope | Writing assistance (broad) | Readability and language only (narrow) |
| Content generation | Allowed with disclosure | Not permitted |
| Disclosure location | Varies by journal | Acknowledgments |
| AI-generated imagery | Prohibited | Prohibited, with editorial opposition |
| Research integrity | Elsevier RI team | Lancet Research Integrity Group (est. 2024) |
| Interpretation | Permissive | Restrictive |
If you've submitted to other Elsevier journals, say, a materials science paper to Acta Materialia or a pharmacology paper to European Journal of Pharmacology, and used AI for broader writing assistance, be aware that the same approach won't be compliant at Lancet Oncology. The Lancet family operates under different rules than the wider Elsevier portfolio.
The Lancet Research Integrity Group
In 2024, The Lancet established its own Research Integrity Group to handle integrity concerns across the Lancet journal family, including Lancet Oncology. This group operates alongside Elsevier's existing Research Integrity team but focuses specifically on Lancet editorial standards.
For oncology authors, this means that an AI-related integrity concern at Lancet Oncology gets handled by people who understand clinical oncology publishing, not by a generalist publisher team. The Lancet Research Integrity Group includes editors with clinical and scientific backgrounds who can assess whether AI involvement in a cancer trial report actually compromised the science or was limited to language polishing.
This is arguably better for authors. A specialist integrity team is more likely to make nuanced judgments than a general-purpose system. But it also means there's nowhere to hide: the people investigating know exactly what appropriate oncology writing looks like and can spot AI-generated clinical language from a distance.
Oncology-specific AI considerations
Phase III clinical trial reports
Lancet Oncology publishes some of the most consequential Phase III oncology trials in the world. These papers report overall survival results for checkpoint inhibitors, progression-free survival for targeted therapies, and quality-of-life outcomes for novel combinations, and they change oncology practice within weeks of publication. NCCN guidelines get updated based on Lancet Oncology publications. ESMO guidelines reference them. Tumor boards discuss them.
The AI stakes here are as high as they get in medical publishing. AI involvement in interpreting hazard ratios, characterizing adverse event profiles, drawing efficacy conclusions, or formulating treatment recommendations would undermine the clinical authority that makes Lancet Oncology papers so influential.
Use AI to fix your grammar. Don't use it to help you decide whether your trial showed clinically meaningful benefit.
Biomarker and companion diagnostic studies
Lancet Oncology publishes important biomarker validation studies: PD-L1 expression cutoffs, TMB thresholds, ctDNA-based monitoring. These papers directly affect which patients get which treatments. AI involvement in interpreting biomarker performance characteristics (sensitivity, specificity, predictive values) or in setting clinical cutoffs would be problematic.
If your paper uses machine learning to develop a biomarker classifier, that's research AI, described in standard Methods. If you also used Claude to edit your Discussion, that's writing AI, disclosed in acknowledgments. Keep them strictly separate.
Cancer epidemiology and population studies
Large population-based cancer studies involve complex datasets: SEER, cancer registries, claims databases. If AI coding tools helped you write the scripts to query these databases, disclose it. But the clinical interpretation of incidence trends, survival patterns, and risk factor associations must come from the epidemiologists on the team.
Systematic reviews and meta-analyses
Lancet Oncology publishes influential systematic reviews that synthesize evidence for clinical questions. The "readability and language" limitation means you can use AI to improve the prose of your completed review. You can't use it to help screen abstracts, extract data, or synthesize findings; those are content-generation activities that fall outside the permitted scope.
Writing your AI disclosure statement
Remember: at Lancet Oncology, the disclosure goes in the acknowledgments, not Methods.
For a Phase III clinical trial:
"The authors used ChatGPT (GPT-4, OpenAI) to improve the readability and English language of the Introduction section. No AI tools were involved in trial design, data management, statistical analysis, efficacy assessment, safety evaluation, or interpretation of clinical outcomes. All clinical conclusions were drawn by the study investigators. The authors take full responsibility for the published content."
For a biomarker validation study:
"Claude (Claude 3.5, Anthropic) was used to improve the readability of the Discussion section. No AI tools were used in biomarker assay development, statistical analysis, cutoff determination, or clinical interpretation. The authors take full responsibility for the published content."
For a cancer epidemiology study:
"During the preparation of this manuscript, the authors used ChatGPT (GPT-4o, OpenAI) to improve the clarity of the Results and Discussion sections. GitHub Copilot (Microsoft) assisted with writing SAS scripts for database queries. All scripts were validated by the study biostatistician (E.F.). No AI tools were involved in the interpretation of epidemiologic findings. The authors take full responsibility for the published content."
For a systematic review/meta-analysis:
"The authors used Claude (Claude 3.5, Anthropic) to improve the language and readability of selected sections of the manuscript. The systematic search, study selection, data extraction, risk of bias assessment, meta-analysis, and evidence grading were performed entirely by the author team without AI assistance. The authors take full responsibility for the published content."
Notice that these disclosures are placed in the acknowledgments section, not Methods. This is easy to get wrong if you've been submitting to JCO or JAMA Oncology, where the disclosure goes in Methods. If a Lancet Oncology editor sees an AI disclosure in your Methods section instead of acknowledgments, they'll ask you to move it. That's not a major problem, but it signals that you didn't read their specific instructions.
What happens if you don't disclose
The Lancet family takes AI disclosure seriously, and Lancet Oncology is no exception. Here's the escalation:
During peer review. If a reviewer or editor suspects undisclosed AI use, the corresponding author receives a direct inquiry. Given Lancet Oncology's 95%+ desk rejection rate, papers that survive to external review are already under intense scrutiny. An integrity question on top of scientific review isn't a good position to be in.
After acceptance, before publication. The paper is held in production. You'll need to add a proper acknowledgments disclosure and explain the omission. The Lancet Research Integrity Group may become involved.
After publication. The consequences:
- Correction. A published correction adding the AI disclosure, permanently linked to your paper in PubMed. For a Lancet Oncology paper that's being cited in practice guidelines, a correction notice gets noticed by the oncology community.
- Expression of concern. If AI involvement raises questions about the paper's clinical conclusions, particularly for trial reports, an expression of concern can follow. This is essentially a public statement that something about the paper is under investigation.
- Retraction. If AI use was extensive enough to undermine confidence in clinical findings, retraction is possible. For a Lancet Oncology paper, retraction would likely trigger guideline reassessment by NCCN, ESMO, or other bodies that cited the work.
- Institutional and regulatory notification. In serious cases involving clinical trial reports, the journal may notify the authors' institution and potentially relevant regulatory bodies. If AI was involved in interpreting trial data for a drug that's now part of standard care, the ripple effects extend far beyond the authors' careers.
The clinical weight of Lancet Oncology publications amplifies every consequence. A retraction at a lower-impact journal might affect the authors. A retraction at Lancet Oncology can affect treatment guidelines, drug approvals, and patient care decisions.
Comparison with other top clinical oncology journals
| Feature | Lancet Oncology | JCO | JAMA Oncology | Annals of Oncology | Cancer Discovery |
|---|---|---|---|---|---|
| Publisher | Lancet/Elsevier | ASCO/Wolters Kluwer | AMA/JAMA Network | ESMO/Elsevier | AACR |
| AI use scope | Readability & language only | Writing assistance (broad) | Writing assistance (broad) | Writing assistance (broad) | Writing assistance (broad) |
| Disclosure location | Acknowledgments | Methods | Methods | Methods | Methods |
| AI authorship | Prohibited | Prohibited | Prohibited | Prohibited | Prohibited |
| AI-generated images | Prohibited | Prohibited | Prohibited | Prohibited | Prohibited |
| Restrictiveness | Most restrictive | Moderate | Moderate | Moderate | Moderate |
| Impact factor (approx.) | ~42 | ~45 | ~28 | ~32 | ~30 |
The comparison makes Lancet Oncology's position stark. It's the only one among the top five clinical oncology journals that restricts AI to readability and language only. JCO, JAMA Oncology, Annals of Oncology, and Cancer Discovery all use broader "writing assistance" language that gives authors more flexibility.
JCO under ASCO's policy is the most natural comparison. JCO and Lancet Oncology compete directly for the same high-impact clinical trial papers. An author who used AI to help draft a Methods section would be compliant at JCO but not at Lancet Oncology. The two journals also differ on disclosure placement: Methods (JCO) versus acknowledgments (Lancet Oncology).
JAMA Oncology follows AMA/JAMA Network rules, which are moderate in scope. The JAMA Network requires disclosure, prohibits AI authorship, and expects transparency, but it doesn't restrict AI to language editing only.
Annals of Oncology follows ESMO guidelines implemented through Elsevier's infrastructure. Despite sharing a publisher with Lancet Oncology (Elsevier), Annals of Oncology follows ESMO's interpretation, which is closer to the Elsevier general policy than to the Lancet's restrictive stance.
Cancer Discovery follows AACR's policy, which aligns with the broader publishing community's moderate approach. The AACR's diverse journal portfolio (Cancer Research, Cancer Discovery, Clinical Cancer Research, etc.) follows consistent rules that permit writing assistance with disclosure.
Practical advice for Lancet Oncology submissions
Read the instructions, really
This sounds patronizing, but the number of authors who submit to Lancet Oncology with a Methods-section AI disclosure instead of an acknowledgments-section one suggests that many people don't read the journal-specific instructions carefully enough. The Lancet family has specific formatting requirements that differ from other publishers. Don't assume that what worked at JCO will work here.
Respect the "readability and language" boundary
If you're unsure whether your AI use crosses from "readability and language" into "content generation," it probably does. The safe rule: write everything yourself first, then use AI only to clean up the language. If you can't point to a human-written draft that preceded the AI interaction, you've likely crossed the line.
Prepare two versions of your disclosure
If you're submitting to Lancet Oncology but have JCO or Annals of Oncology as backup targets, prepare two disclosure statements: one for the acknowledgments (Lancet Oncology) and one for Methods (JCO/Annals). It's a small effort that saves time if you need to redirect your submission.
The cover letter matters
While the formal disclosure goes in acknowledgments, mentioning AI use in the cover letter is good practice for Lancet Oncology submissions. The editors handle hundreds of submissions, and proactive transparency in the cover letter signals that you take the policy seriously.
Watch for co-author AI use that exceeds the policy
If a co-author used AI to draft their section of the paper rather than just to edit it, that's a problem under Lancet Oncology's rules even if it would be fine at JCO. The corresponding author needs to verify that all co-authors' AI use falls within the "readability and language" scope, not just that they used AI.
Before-submission checklist
- [ ] All AI use has been limited to readability and language improvements (not content generation)
- [ ] The acknowledgments section includes a disclosure naming each AI tool, version, and purpose
- [ ] The disclosure is in acknowledgments, not Methods (Lancet-specific requirement)
- [ ] Co-authors' AI use has been verified to fall within the "readability and language" scope
- [ ] No AI-generated images, figures, or graphical abstracts are included
- [ ] Clinical trial data hasn't been processed through external AI tools
- [ ] Clinical interpretations, efficacy conclusions, and safety assessments are human-generated
- [ ] The submission system's AI-related questions have been answered accurately
- [ ] The cover letter mentions AI use if applicable
- [ ] All AI-edited sections have been verified for accuracy by domain experts
Bottom line
Lancet Oncology applies the Lancet family's restrictive AI policy, limiting permitted use to readability and language improvements only. This is stricter than JCO, JAMA Oncology, Annals of Oncology, and Cancer Discovery, all of which allow broader writing assistance. The disclosure goes in acknowledgments (not Methods), and the Lancet Research Integrity Group handles compliance. For clinical trial reports that influence treatment guidelines, the stakes are especially high: a retraction or correction at Lancet Oncology reverberates through oncology practice. Authors should write their content first, use AI only to polish the language, disclose it in acknowledgments, and verify that all co-authors followed the same approach. It's a stricter standard than most of the field, and Lancet Oncology makes no apology for it.