Publishing Strategy · 10 min read · Updated Mar 25, 2026

Hepatology's AI Policy: AASLD and Wolters Kluwer Rules for Liver Disease Authors

Hepatology requires AI disclosure in Methods under dual AASLD and Wolters Kluwer rules, prohibits AI authorship and AI-generated images, and applies heightened scrutiny to AASLD practice guideline papers.

Senior Researcher, Oncology & Cell Biology

Author context

Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.


If you're preparing a manuscript for Hepatology, add the AI disclosure statement to your Methods section before you write anything else. That might sound backward, but here's why it works: deciding upfront which AI tools you'll use, and for what, forces you to think about the boundaries before you start. Too many authors treat the AI disclosure as an afterthought, something they'll sort out during final revisions. By then, they've forgotten half the tools they used, their co-authors haven't been asked about their own AI use, and the resulting disclosure statement is too vague to satisfy the editors.

The AASLD and Wolters Kluwer policy framework

Hepatology sits at the intersection of two organizations. The American Association for the Study of Liver Diseases (AASLD) is the medical society that owns the journal. Wolters Kluwer is the publisher that handles production, distribution, and the submission infrastructure. Both have AI policies, and Hepatology draws from both.

The core rules:

1. AI can't be an author. ICMJE criteria require that authors can take accountability, approve the final version, and agree to be responsible for published work. AI tools can't do any of these things. Hepatology won't accept submissions with AI tools listed in the author byline.

2. AI use must be disclosed in Methods. Describe the tool, its version, and its specific purpose. "We used AI" isn't sufficient; you need to say which tool, which version, and what exactly it did.

3. AI-generated images are prohibited. No figures, histopathology illustrations, imaging reconstructions, or graphical abstracts created by generative AI. All visual content must derive from real patient data, experimental results, or traditional scientific illustration.

4. Authors retain full responsibility. Every co-author must verify the accuracy of all content, including sections where AI assisted. If an AI tool introduced an error in a fibrosis staging description or a drug dosing table, the authors are liable.

5. Standard grammar tools are exempt. Built-in spell checkers and basic grammar correction don't need disclosure. The policy targets generative AI: tools that can create new text, substantially rephrase, or generate code.

How the AASLD policy compares to Wolters Kluwer's publisher-wide rules

Wolters Kluwer publishes thousands of medical journals and textbooks. Their company-wide AI policy provides a baseline that applies across the portfolio. AASLD adds a layer of clinical and society-specific expectations:

| Aspect | Wolters Kluwer general policy | AASLD/Hepatology |
| --- | --- | --- |
| AI authorship | Prohibited | Prohibited |
| Permitted AI use | Writing assistance with disclosure | Language editing, manuscript preparation |
| Clinical content | General responsibility statement | Heightened expectations for clinical content |
| Practice guidelines | No specific restriction | AI should not contribute to guideline recommendations |
| Disclosure location | Varies by journal | Methods section |
| AI-generated images | Prohibited | Prohibited |
| Submission system | Publisher infrastructure | Editorial Manager (WK) |

The AASLD's influence shows most clearly in how Hepatology treats clinical content. Wolters Kluwer's general policy is written for a diverse portfolio that includes non-clinical journals. The AASLD's overlay adds specific expectations about hepatology practice guidelines, clinical trial interpretations, and diagnostic criteria, areas where AI-generated content could directly affect patient care.

Hepatology-specific AI considerations

Non-invasive fibrosis assessment and AI

The hepatology field has been moving toward non-invasive assessment of liver fibrosis for years. Elastography interpretation, serum biomarker panels (FIB-4, APRI, ELF), and AI-assisted ultrasound analysis are all active areas of research published in Hepatology. When your paper is about these AI diagnostic tools, the AI is your research subject.

The distinction between research AI and writing AI remains critical. If you developed a deep learning model for automated elastography interpretation and also used ChatGPT to polish your manuscript, keep the descriptions completely separate:

  • Research AI: standard Methods section with architecture details, training/validation data, performance metrics
  • Writing AI: dedicated disclosure paragraph naming the tool and its purpose

NASH/MASH clinical trials

Hepatology publishes practice-changing clinical trials for metabolic dysfunction-associated steatohepatitis (MASH, formerly NASH). Resmetirom, a thyroid hormone receptor-beta agonist, along with FXR agonists and other agents in development: these trials have transformed the treatment landscape. The papers reporting their results carry enormous clinical weight.

For trial reports, AI can help with language. It shouldn't help with interpreting histologic response rates, characterizing adverse event profiles, or drawing conclusions about clinical benefit. Hepatology's reviewers include hepatologists who run these trials and serve on data safety monitoring boards (DSMBs). They'll notice if the language in your efficacy section has the hedged, overly balanced quality that AI tools tend to produce when discussing clinical outcomes.

Liver transplant data

Hepatology publishes substantial transplant research: allocation algorithms, donor-recipient matching studies, post-transplant outcomes. These papers often involve large registry datasets (UNOS, Eurotransplant) with sensitive patient information.

Never process transplant registry data through external AI tools like ChatGPT. Even de-identified data can create compliance issues if it's uploaded to a cloud-based AI service. This isn't just an AI policy concern; it's a data governance issue that your IRB and registry access agreement would also prohibit.

AASLD practice guidelines

Hepatology is the primary publication venue for AASLD Practice Guidelines. These cover hepatitis B management, hepatitis C treatment, liver cancer surveillance, alcohol-related liver disease, MASH management, and more. Millions of patients worldwide are treated according to these documents.

The AI stakes for guideline papers are at their highest. While there's no separate formal rule for guidelines versus original research, the editorial expectation is clear: guideline recommendations must be entirely the product of the expert panel's deliberation. AI involvement in drafting or formulating guideline statements would undermine the entire framework of evidence-based expert consensus that these documents represent.

Writing your AI disclosure statement

For an original research article (e.g., MASH biomarker study):

"During the preparation of this manuscript, the authors used ChatGPT (GPT-4o, OpenAI) to improve the clarity of the Introduction and Discussion sections. All AI-generated suggestions were reviewed and edited by the corresponding author (A.B.) and the senior hepatologist (C.D.). No AI tools were used in study design, sample analysis, biomarker validation, or clinical interpretation. The authors take full responsibility for the content of this article."

For a clinical trial report (e.g., Phase III MASH trial):

"The authors used Claude (Claude 3.5, Anthropic) to improve the English language of the Methods section. No AI tools were used for trial design, statistical analysis, histologic assessment, efficacy evaluation, safety reporting, or interpretation of clinical outcomes. All clinical conclusions were drawn by the study investigators based on the pre-specified statistical analysis plan reviewed by the Data Safety Monitoring Board. The authors take full responsibility for the published content."

For a transplant outcomes study:

"ChatGPT (GPT-4, OpenAI) was used to improve the readability of the Results and Discussion sections. No patient-level data from the transplant registry was processed through any external AI tool. Statistical analyses were performed using SAS 9.4 (SAS Institute) and R (v4.3.2) by the study biostatisticians. The authors take full responsibility for the published content."

For a systematic review:

"During the preparation of this systematic review, the authors used Claude (Claude 3.5, Anthropic) to improve sentence-level readability of the Discussion section. The literature search, study screening, data extraction, quality assessment, and evidence synthesis were performed entirely by the author team. No AI tools assisted with any stage of the systematic review process. The authors take full responsibility for the published content."

Each disclosure follows the same pattern: name the tool, describe the scope, clarify what wasn't AI-assisted, and assert responsibility. For clinical papers, the explicit disclaimer about clinical interpretation is essential. Reviewers and editors look for it.
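The four-part pattern can be sketched as a simple template. This is a hypothetical helper for drafting, not an official AASLD form, and every field here (tool, version, scope, exclusions) is illustrative:

```python
def draft_ai_disclosure(tool, version, vendor, scope, excluded):
    """Assemble a Methods-section AI disclosure following the pattern:
    name the tool, describe the scope, clarify what wasn't AI-assisted,
    assert responsibility."""
    return (
        f"During the preparation of this manuscript, the authors used "
        f"{tool} ({version}, {vendor}) to {scope}. "
        f"No AI tools were used for {', '.join(excluded)}. "
        f"The authors take full responsibility for the published content."
    )

# Example values only; adapt to what you actually used.
statement = draft_ai_disclosure(
    "ChatGPT", "GPT-4o", "OpenAI",
    "improve the clarity of the Discussion section",
    ["study design", "statistical analysis", "clinical interpretation"],
)
```

A template like this won't replace judgment about what to exclude, but it guarantees the four elements editors look for are never accidentally dropped.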

What happens if you don't disclose

The consequences of undisclosed AI use at Hepatology follow COPE guidelines, implemented through both AASLD and Wolters Kluwer infrastructure:

During peer review. Reviewers increasingly comment on prose that reads like AI output. If a reviewer flags this concern, the editor will ask for clarification. This doesn't automatically doom your paper, but it shifts the editorial dynamic. You're now defending your integrity instead of discussing your science.

After acceptance, before publication. The paper can be held in production while the issue is resolved. You'll need to provide a disclosure statement and explain why it wasn't included originally. This delays your publication and creates an editorial record.

After publication, the consequences escalate:

  1. Correction. If AI use was limited to language editing, a published erratum adding the disclosure may be sufficient. Your paper gets a correction notice that's permanently linked to it in PubMed and other databases.
  2. Expression of concern. If AI involvement raises doubts about the scientific content (say, AI was used to draft clinical interpretations for a fibrosis study), the editor may issue a formal expression of concern. This is a public statement that something about the paper is under investigation.
  3. Retraction. For cases where AI use was extensive enough to undermine confidence in the findings, retraction is possible. In hepatology, where clinical guidelines are built on published evidence, a retracted paper can have downstream effects on treatment recommendations.
  4. Institutional notification. In serious cases, the journal notifies the authors' institution. For faculty at academic medical centers, this triggers a formal research integrity inquiry. These investigations typically take 6 to 12 months and can affect promotion, grant funding, and clinical privileges.
  5. Cross-journal pattern detection. Wolters Kluwer's infrastructure allows them to identify patterns across their portfolio. If the same author has undisclosed AI use flagged at multiple Wolters Kluwer journals, the consequences compound.

Comparison with other top liver and GI journals

| Feature | Hepatology | J of Hepatology | Gut | Gastroenterology | Liver International |
| --- | --- | --- | --- | --- | --- |
| Publisher | AASLD/WK | EASL/Elsevier | BMJ | AGA/Elsevier | APASL/Wiley |
| Policy source | AASLD + WK | EASL + Elsevier | BMJ Group | AGA + Elsevier | APASL + Wiley |
| AI authorship | Prohibited | Prohibited | Prohibited | Prohibited | Prohibited |
| Disclosure location | Methods | Methods | Methods | Methods | Methods |
| AI-generated images | Prohibited | Prohibited | Prohibited | Prohibited | Prohibited |
| Guideline sensitivity | Very high (AASLD) | Very high (EASL) | High | Very high (AGA) | Moderate (APASL) |
| Impact factor (approx.) | ~14 | ~26 | ~24 | ~29-34 | ~6 |

Key observations:

Hepatology and Journal of Hepatology are the two primary competitors in liver disease research. Both follow the same basic AI rules but draw from different societies: AASLD (North American) and EASL (European). The policy substance is nearly identical. The main practical difference is in submission systems: Hepatology uses Wolters Kluwer's Editorial Manager while Journal of Hepatology uses Elsevier's.

Gut publishes significant liver disease content alongside GI research. Its BMJ Publishing Group policy is consistent with what Hepatology requires. If you've submitted to Gut, the AI disclosure expectations will feel familiar.

Gastroenterology also publishes hepatology research, particularly MASH/NAFLD studies, hepatitis trials, and liver cancer screening studies. Its AGA/Elsevier framework produces equivalent author requirements to Hepatology's AASLD/WK framework.

Liver International represents the Asia-Pacific perspective through APASL. Wiley's AI policy provides the publisher framework. The core rules align with Western journals, though the emphasis on particular clinical contexts may differ given the different disease epidemiology in the Asia-Pacific region (higher hepatitis B prevalence, different liver cancer demographics).

All five journals agree on the fundamentals: no AI authorship, mandatory disclosure in Methods, no AI-generated images, and full author responsibility. The liver disease field has reached the same consensus that cardiology, oncology, and other specialties have: the basic rules aren't controversial anymore.

Practical advice for Hepatology submissions

Plan your AI use before you start

I've said it at the top and I'll say it again: decide which AI tools you'll use before you begin drafting. This doesn't mean you can't change your mind, but having a plan means you can track everything systematically.
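One low-effort way to track AI use from day one is a shared log that each co-author appends to as they work. A minimal sketch follows; the fields and example entries are illustrative, not mandated by the journal:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseEntry:
    author: str                  # co-author initials
    tool: str                    # e.g., "ChatGPT"
    version: str                 # e.g., "GPT-4o"
    purpose: str                 # what the tool actually did
    sections: list = field(default_factory=list)

# Hypothetical entries accumulated during drafting
log = [
    AIUseEntry("A.B.", "ChatGPT", "GPT-4o",
               "language polishing", ["Introduction", "Discussion"]),
    AIUseEntry("C.D.", "Claude", "Claude 3.5",
               "rephrasing for clarity", ["Methods"]),
]

# Every distinct tool/version pair must appear in the Methods disclosure
tools_to_disclose = sorted({f"{e.tool} ({e.version})" for e in log})
```

At submission time, the corresponding author reads the disclosure against `tools_to_disclose` instead of relying on everyone's memory.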

The fibrosis staging problem

Hepatology papers frequently involve fibrosis staging, whether through liver biopsy (Metavir, NASH CRN), elastography, or serum markers. If you're using AI to edit sections that describe fibrosis results, be extra careful. Large language models sometimes introduce subtle inaccuracies in staging descriptions, such as conflating F2 and F3 criteria or mischaracterizing the meaning of particular elastography cutoffs. Have a hepatologist specifically review any AI-edited text about fibrosis staging.

Coordinate international author teams

Hepatology papers often involve collaborations spanning multiple continents. A multicenter MASH trial might include investigators from the US, Europe, Asia, and Latin America. Language barriers make AI tools particularly useful for non-English-speaking co-authors, and that's exactly what the policy permits. But coordinate the disclosure. The corresponding author should ask every co-author about their AI tool use before submission.

Supplementary materials count

Many Hepatology papers include extensive supplementary tables, figures, and methods. The AI disclosure covers supplementary content too. If AI helped edit your supplementary methods or generate supplementary figure legends, include it in the disclosure.

Statistical code disclosure

If GitHub Copilot or ChatGPT helped you write R or SAS code for survival analysis, propensity score matching, or any other statistical procedure, that's AI-assisted code generation. Disclose it. This is particularly relevant for Hepatology papers that use complex modeling approaches: competing-risks analyses for transplant outcomes, time-varying covariate models for disease progression, and multi-state models for liver disease natural history.
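For a sense of scale, even a short routine like the following counts as disclosable AI-assisted code if an assistant drafted it. This is a minimal product-limit (Kaplan-Meier) estimator on made-up follow-up data, written in Python as an illustration of the kind of survival code in question:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates at each distinct event time.

    times:  follow-up durations (e.g., months)
    events: 1 = event observed, 0 = censored
    Returns a list of (time, survival probability) pairs.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # events and total subjects leaving the risk set at time t
        d = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, _ in data if tt == t)
        if d > 0:
            surv *= 1 - d / n_at_risk   # product-limit step
            curve.append((t, surv))
        n_at_risk -= removed
        i += removed
    return curve

# Hypothetical cohort: events at t=1, 2, 3; one censored at t=2
curve = kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1])
```

If a tool like Copilot produced this, the disclosure names the tool and states that the analysis logic was verified by the study biostatistician.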

Before-submission checklist

  • [ ] All AI tools used during manuscript preparation have been identified and logged
  • [ ] The Methods section includes a specific AI disclosure naming each tool, version, and purpose
  • [ ] Research AI (diagnostic algorithms, imaging tools) is described separately from writing AI
  • [ ] All co-authors have confirmed their AI tool usage across all manuscript sections
  • [ ] No AI-generated images, figures, or graphical abstracts are included
  • [ ] Patient data, registry data, and clinical trial data haven't been processed through external AI tools
  • [ ] Clinical interpretations, fibrosis staging descriptions, and treatment conclusions are human-generated
  • [ ] The Editorial Manager AI-related questions have been answered accurately
  • [ ] Supplementary materials are covered by the disclosure
  • [ ] AI-edited sections have been specifically reviewed by a domain expert hepatologist


Bottom line

Hepatology's AI policy draws from both the AASLD and Wolters Kluwer, producing a framework that's substantively similar to what Journal of Hepatology, Gut, and Gastroenterology require. The core rules are settled: no AI authorship, mandatory disclosure in Methods, no AI-generated images, full author responsibility. Where Hepatology's policy carries extra weight is in its treatment of AASLD Practice Guidelines and clinical trial reports, areas where AI involvement could directly affect patient care. For most authors, compliance is straightforward: track your AI use, write a specific disclosure, and keep AI away from clinical conclusions. The researchers who get into trouble aren't the ones who used AI and disclosed it; they're the ones who used it and forgot to mention it.

References

  1. Hepatology author instructions
  2. AASLD publications and policies
  3. Wolters Kluwer author resource center
  4. ICMJE recommendations on AI and authorship
  5. COPE position statement on AI in publishing
