Publishing Strategy · 8 min read · Updated Mar 25, 2026

Gut's AI Policy: BMJ Rules for Gastroenterology and Hepatology Authors

Gut follows BMJ Publishing Group's AI policy requiring disclosure in Methods, prohibiting AI authorship and AI-generated images, and applying the same rules as The BMJ across all BMJ specialty journals.

Author context

Senior Researcher, Oncology & Cell Biology. Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.


Gut desk-rejects roughly 70% of submissions within a few days. The ones that survive go through one of the most competitive review processes in gastroenterology; the journal's acceptance rate sits around 10-12%. For the researchers who make it through, the last thing you want is a publication ethics issue over something as avoidable as AI disclosure. Here's what Gut expects and how it fits into the broader BMJ Publishing Group framework.

The BMJ Publishing Group policy

Gut is published by BMJ Publishing Group, the same publisher behind The BMJ, Heart, Thorax, and over 70 other specialty journals. The AI policy is set at the publisher level:

  1. AI can't be an author. ICMJE criteria require accountability, manuscript approval, and responsibility, none of which AI tools can provide.
  2. AI use must be disclosed in Methods. Describe the tool, its version, and how it was applied. Be specific enough that a reader can assess the scope.
  3. AI-generated images are prohibited. No figures, graphical abstracts, or visual content from generative AI tools.
  4. Basic grammar tools are exempt. Standard spell checkers and grammar tools don't require disclosure.
  5. Authors bear full responsibility. Every co-author must vouch for the accuracy of all content, including AI-assisted sections.
  6. The submission system may include an AI declaration. Like The BMJ, some BMJ Group journals include AI-related questions in the online submission workflow.

How Gut's implementation compares to The BMJ's

The BMJ is the flagship; Gut is the group's highest-impact specialty journal. Both follow the same rules, but there are practical differences:

| Aspect | The BMJ | Gut |
| --- | --- | --- |
| AI policy source | BMJ Publishing Group | BMJ Publishing Group |
| Submission form AI question | Prominent, structured | Present |
| Editorial AI scrutiny | Very high | High |
| Open peer review | Yes | No (standard review) |
| Clinical content sensitivity | Very high | High (GI-specific) |
| Acceptance rate | ~7% | ~10-12% |

The biggest practical difference: The BMJ publishes peer review reports alongside accepted papers, which means reviewer concerns about AI-generated text become public. Gut uses standard confidential peer review, so AI-related reviewer comments stay private between the author, reviewers, and editor.

This doesn't mean Gut takes AI disclosure less seriously; the enforcement mechanism is simply different. At The BMJ, public accountability deters undisclosed AI use; at Gut, it's the editorial team's internal scrutiny.

GI-specific AI considerations

Endoscopy and imaging AI

Gut publishes significant research on AI-assisted endoscopy: computer-aided detection (CADe) for polyp identification, computer-aided diagnosis (CADx) for characterization, and AI for capsule endoscopy reading. If your paper is about these tools, the AI is your research subject and belongs in standard Methods as methodology.

The manuscript preparation AI disclosure is separate. If you developed an AI endoscopy system and also used ChatGPT to edit your paper, you need two clearly distinct descriptions:

"The CADe system described in this study was developed using a ResNet-50 architecture trained on 50,000 annotated colonoscopy frames (see Methods: Model Development). Separately, during manuscript preparation, the authors used ChatGPT (GPT-4, OpenAI) to improve the language of the Discussion section. The authors take full responsibility for the published content."

Microbiome analysis

Gut microbiome papers involve extensive bioinformatics: 16S rRNA gene sequencing, shotgun metagenomics, metabolomics integration, taxonomic classification, and diversity analysis. If AI tools helped with writing analysis code, disclose it.

The analysis tools themselves (QIIME2, MetaPhlAn, HUMAnN, LEfSe) are research software, not AI writing tools. They belong in standard Methods. But if ChatGPT or Copilot helped you write scripts to run these tools or process their outputs, that's AI-assisted code generation and requires disclosure.
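For concreteness, the kind of custom script such a disclosure covers might look like this minimal sketch of a Shannon alpha diversity calculation in plain Python. The function and the example counts are hypothetical illustrations, not from any published pipeline; real analyses would more likely use QIIME2's diversity plugin or scikit-bio, which also handle rarefaction and log-base conventions.

```python
import math

def shannon_index(counts):
    """Shannon alpha diversity from raw taxon counts (natural log)."""
    total = sum(counts)
    if total == 0:
        return 0.0
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total          # relative abundance of this taxon
            h -= p * math.log(p)   # accumulate -p * ln(p)
    return h

# Hypothetical per-sample taxon counts from a 16S abundance table
sample_counts = [120, 80, 40, 10]
print(round(shannon_index(sample_counts), 3))
```

Whether a script like this was typed by hand or drafted with Copilot changes nothing about its correctness, but it does change what belongs in your disclosure, which is why documenting AI-assisted steps as you go is easier than reconstructing them at submission time.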

Disclosure example for a microbiome paper:

"Microbiome analysis was performed using QIIME2 (v2023.9) and MetaPhlAn 4 as described in Methods. GitHub Copilot (Microsoft) was used to assist with writing custom Python scripts for alpha and beta diversity calculations and for generating the LEfSe input files. All scripts were validated against published tutorial datasets. ChatGPT (GPT-4, OpenAI) was used to improve the readability of the Results section. The authors take full responsibility for the published content."

Clinical trial data

Gut publishes GI clinical trials: drug efficacy studies, endoscopic intervention trials, and dietary interventions. For these papers, the same rules apply as at NEJM or The Lancet: keep AI away from clinical data interpretation and outcome reporting.

Don't use AI to draft sections describing primary endpoints, adverse events, or clinical significance. Gut's clinical reviewers will scrutinize these sections closely, and AI-generated clinical language often introduces subtle inaccuracies that human experts catch.

Writing the disclosure for Gut

For a clinical GI paper:

"During preparation of this manuscript, the authors used Claude (Claude 3.5, Anthropic) to improve the language and clarity of the Introduction and Discussion sections. No AI tools were used for statistical analysis, clinical data interpretation, or reporting of trial outcomes. The statistical analysis was conducted by the study biostatistician (M.H.) using Stata 17. All AI-edited text was reviewed by the clinical investigators. The authors take full responsibility for the published content."

For a basic science GI paper:

"The authors used ChatGPT (GPT-4, OpenAI) to edit the Methods section for conciseness and to improve the language of the figure legends. All revisions were reviewed by the corresponding author (P.L.). The authors take full responsibility for the content."

For an AI-in-endoscopy research paper:

"The deep learning model for polyp detection described in this paper was developed using PyTorch and trained on the institutional colonoscopy dataset (see Methods: Model Architecture). Separately, during manuscript preparation, ChatGPT (GPT-4, OpenAI) was used to improve the readability of the Discussion. The research methodology and the manuscript editing tool are entirely separate systems. All AI text suggestions were reviewed by the senior author."

What requires disclosure

| Use case | Disclosure required? | GI-specific notes |
| --- | --- | --- |
| Standard grammar tools | No | Exempt |
| ChatGPT for language editing | Yes | Methods section |
| AI for microbiome analysis code | Yes | Specify which steps |
| QIIME2/MetaPhlAn usage | No (research tool) | Standard Methods |
| AI for endoscopy model code | No (research method) | Described in research Methods |
| AI-generated GI tract diagrams | Prohibited | Use BioRender or medical illustrator |
| AI for forest plot generation | Not if from real data | Data-derived plots are fine |
| AI for CONSORT diagram formatting | Yes | Disclose formatting assistance |
| AI for dietary intake analysis code | Yes | Confirm validation |
| AI for H. pylori pathway illustrations | Prohibited if generative | Standard illustration tools OK |

Consequences of non-disclosure

BMJ Publishing Group enforcement:

During review:

  • Editor requests AI disclosure addition
  • Deliberate concealment may lead to rejection
  • If AI use affected clinical content, additional scrutiny from the statistical editor

After publication:

  • Correction for minor undisclosed language editing
  • Expression of concern if scope is unclear
  • Retraction if AI generated clinical claims or fabricated data
  • COPE-guided investigation for serious cases

For Gut specifically: The journal has a high proportion of clinical content that can influence treatment guidelines from organizations like the American Gastroenterological Association (AGA), the British Society of Gastroenterology (BSG), and the European Association for the Study of the Liver (EASL). If an AI-generated clinical claim in a Gut paper makes it into a guideline recommendation, the downstream consequences affect patient care.

Comparison with other GI journals

| Feature | Gut | Gastroenterology | Hepatology | American Journal of Gastroenterology | Alimentary Pharmacology & Therapeutics |
| --- | --- | --- | --- | --- | --- |
| Publisher | BMJ Publishing | AGA (Elsevier) | AASLD (Wolters Kluwer) | ACG (Wolters Kluwer) | Wiley |
| AI authorship | Prohibited | Prohibited | Prohibited | Prohibited | Prohibited |
| Disclosure location | Methods | Methods | Methods | Methods | Methods |
| AI image ban | Yes | Yes | Yes | Yes | Yes |
| Impact factor range | ~24-25 | ~29-34 | ~14-17 | ~10-12 | ~7 |
| Endoscopy AI papers | Common | Common | Rare | Common | Rare |
| Microbiome papers | Very common | Very common | Moderate | Moderate | Moderate |

Gastroenterology (AGA) follows Elsevier's AI policy, which is broadly similar. Hepatology (AASLD) follows Wolters Kluwer guidelines. The policies are functionally equivalent across all five journals; the main differences are in editorial culture and enforcement rigor.

If you're choosing between Gut and Gastroenterology, the AI policy isn't a differentiator. Both require disclosure, both prohibit AI authorship, and both ban AI-generated images. Your decision should be based on the science, the readership, and the fit with your manuscript.

Practical advice for Gut submissions

For clinical papers:

  • Don't use AI to draft clinical outcome descriptions, adverse event summaries, or treatment recommendations
  • If your paper includes a CONSORT or STROBE checklist, complete it yourself; AI-generated checklist responses may not accurately reflect your study
  • Keep AI away from the Abstract: Gut's desk-rejection process starts there, and AI-generated clinical language can trigger concerns

For microbiome research:

  • Document which bioinformatics code was AI-assisted and which was written manually
  • Deposit analysis code in a public repository with clear documentation
  • If AI helped design your analysis pipeline, distinguish this from the pipeline's execution

For endoscopy and imaging research:

  • Clearly separate your AI research method from your AI writing disclosure
  • Include model training details, validation datasets, and performance metrics in standard Methods
  • The ASGE guidelines on AI reporting in endoscopy research may provide an additional framework for your Methods section
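To illustrate the kind of performance reporting that belongs in standard Methods, here is a minimal sketch of computing per-frame sensitivity, specificity, and positive predictive value from confusion-matrix counts. The counts are hypothetical; a real CADe validation would also report per-lesion metrics, confidence intervals, and the provenance of the validation dataset.

```python
def detection_metrics(tp, fp, fn, tn):
    """Basic per-frame detection metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # true positive rate
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # true negative rate
    ppv = tp / (tp + fp) if (tp + fp) else 0.0          # positive predictive value
    return {"sensitivity": sensitivity, "specificity": specificity, "ppv": ppv}

# Hypothetical validation-set counts for a CADe polyp detector
metrics = detection_metrics(tp=450, fp=50, fn=50, tn=9450)
print({k: round(v, 3) for k, v in metrics.items()})
```

Code like this is research methodology, not writing assistance; it goes in your Methods description of model evaluation, while any AI help you had writing it goes in the separate manuscript-preparation disclosure.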

Before submission checklist:

  • [ ] AI disclosure in Methods with tool name, version, and use case
  • [ ] Research AI tools described in standard Methods (not in AI disclosure)
  • [ ] No patient/clinical data processed through cloud AI tools
  • [ ] Clinical interpretations are human-generated
  • [ ] Microbiome code deposited and validated
  • [ ] Submission form AI declaration completed honestly
  • [ ] All co-authors aware of AI disclosure

A free manuscript assessment can help verify that your Gut submission meets the journal's requirements before you enter the competitive review process.

References

  1. BMJ Publishing Group AI policy
  2. Gut author guidelines
  3. Gut editorial policies
  4. ICMJE Recommendations
  5. COPE position statement on AI
  6. ASGE guidelines on AI in endoscopy

