Publishing Strategy · 10 min read · Updated Mar 27, 2026

Blood's AI Policy: ASH Rules for Hematology's Flagship Journal

Blood requires AI disclosure in Methods under ASH rules, prohibits AI authorship and AI-generated images, and applies the same policy across Blood Advances and all ASH publications.

Author context: Senior Researcher, Oncology & Cell Biology. Experience with Nature Medicine, Cancer Cell, Journal of Clinical Oncology.

Blood at a glance

Key metrics to place the journal before deciding whether it fits your manuscript and career goals.

  • Impact factor: 23.1 (Clarivate JCR)
  • Acceptance rate: ~20% overall
  • Time to first decision: ~30 days

What makes this journal worth targeting

  • IF 23.1 puts Blood in a visible tier — citations from papers here carry real weight.
  • Scope specificity matters more than impact factor for most manuscript decisions.
  • Acceptance rate of ~20% means fit determines most outcomes.

When to look elsewhere

  • When your paper sits at the edge of the journal's stated scope — borderline fit rarely improves after submission.
  • If timeline matters: Blood takes ~30 days to first decision. A faster-turnaround journal may suit a grant or job deadline better.
  • If open access is required by your funder, verify the journal's OA agreements before submitting.

Hematology was one of the first medical specialties to integrate machine learning into routine clinical workflows. Flow cytometry gating algorithms, genomic variant classifiers for myeloid neoplasms, and minimal residual disease detection panels all rely on computational methods that sit somewhere on the AI spectrum.

Blood AI Policy at a Glance

  • AI authorship: Prohibited. AI tools cannot be listed as authors and cannot take accountability for the work.
  • AI disclosure: Required. Disclose use of AI tools (e.g., ChatGPT, Claude, Gemini) in the Methods section.
  • AI-generated images: Prohibited. AI-created figures, illustrations, or visualizations are not permitted in the manuscript.
  • Copy editing: All AI use, including copy editing, must be disclosed.

The ASH AI policy

Blood doesn't create AI policy in isolation. The American Society of Hematology established a society-wide framework that governs all ASH publications. Blood, with an impact factor typically ranging from 20 to 25 and a reputation as the top hematology journal globally, implements this policy with the scrutiny you'd expect.

The core rules:

1. AI can't be an author. ICMJE criteria require that authors can take accountability, approve the final version, and agree to be responsible for the work. AI tools meet none of these criteria. Blood won't accept any submission listing an AI tool as a co-author.

2. AI use must be disclosed in Methods. If you used generative AI during manuscript preparation, whether for language editing, code generation, or any other purpose, describe it in the Methods section. Name the tool, specify the version, and explain what it did.

3. AI-generated images are prohibited. No figures, graphical abstracts, or visual content produced by generative AI. Blood films, bone marrow aspirate images, flow cytometry dot plots, and other hematology visuals must come from real patient or experimental data.

4. Authors retain full responsibility. Every listed author vouches for the accuracy of all content, including sections where AI tools were involved. If an AI tool introduced a factual error or mischaracterized a study, the authors are accountable.

5. Standard grammar tools are exempt. Built-in spell checkers and basic grammar correction tools aren't covered by the disclosure requirement. The line is drawn at generative AI tools that can produce new text or substantially rephrase existing content.

How the ASH policy relates to Blood's editorial practice

ASH publishes two main journals: Blood and Blood Advances. The AI policy text is identical across both, but the editorial context differs:

| Aspect | Blood | Blood Advances |
|---|---|---|
| Policy source | ASH | ASH |
| Acceptance rate | ~15-18% | ~25-30% |
| Article types | Research, reviews, guidelines | Research, shorter reports |
| Clinical impact | Very high | High |
| Editorial AI scrutiny | Very high | High |
| Typical review timeline | 4-6 weeks | 3-5 weeks |

Blood's lower acceptance rate means that every submission receives intense scrutiny. An AI disclosure issue that might be handled with a quick email at a less selective journal becomes a more formal concern at Blood. The editors aren't looking to catch authors out; they're protecting the journal's integrity, because a retraction at Blood carries weight across the entire hematology field.

Blood Advances was launched as a companion journal to handle the volume of quality submissions that Blood can't accommodate. It follows the same ASH rules, but the editorial tone tends to be slightly more flexible on formatting details while maintaining the same ethical standards.

Genomics and bioinformatics

Modern hematology research is deeply computational. Whole-genome sequencing for leukemia characterization, RNA-seq for gene expression profiling, single-cell transcriptomics for clonal architecture mapping: these methods involve extensive code and data analysis pipelines.

If you used GitHub Copilot or ChatGPT to help write Python scripts for your bioinformatics pipeline, that's AI-assisted code generation and it requires disclosure. The analysis tools themselves (GATK, STAR, Seurat, CellRanger) are standard bioinformatics software, not generative AI; they belong in regular Methods. But AI assistance in writing the code that runs these tools is a different category.

Disclosure example for a genomics paper:

"Whole-genome sequencing analysis was performed using GATK (v4.4) and Mutect2 for somatic variant calling, as described in Methods. GitHub Copilot (Microsoft) was used to assist with writing custom Python scripts for variant filtering and annotation. All scripts were validated against published benchmark datasets. ChatGPT (GPT-4, OpenAI) was used to improve the readability of the Results and Discussion sections. The authors take full responsibility for the published content."
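To make the distinction concrete, the "custom Python scripts for variant filtering" in a disclosure like this can be as simple as the sketch below. Everything here is illustrative: the function name, the field names, and the thresholds are assumptions for the example, not part of GATK or any real pipeline. The point is that if an AI assistant helped write code like this, the assistance is disclosed in Methods even though the code itself is mundane.

```python
# Hypothetical sketch of an AI-assisted variant-filtering helper of the kind
# such a disclosure might cover. Field names and thresholds are illustrative
# assumptions, not taken from any real pipeline.

def filter_somatic_variants(variants, min_vaf=0.05, min_depth=20):
    """Keep variants passing basic VAF and read-depth thresholds.

    Each variant is a dict with 'vaf' (variant allele fraction, 0-1)
    and 'depth' (total read depth at the locus).
    """
    return [
        v for v in variants
        if v["vaf"] >= min_vaf and v["depth"] >= min_depth
    ]

calls = [
    {"id": "TP53_p.R175H", "vaf": 0.32, "depth": 140},
    {"id": "DNMT3A_p.R882H", "vaf": 0.02, "depth": 95},  # below VAF cutoff
    {"id": "FLT3_ITD", "vaf": 0.18, "depth": 12},        # below depth cutoff
]
kept = filter_somatic_variants(calls)
print([v["id"] for v in kept])  # ['TP53_p.R175H']
```

Validating such scripts against published benchmark datasets, as the disclosure example states, is what lets the authors take responsibility for AI-assisted code.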

Flow cytometry and immunophenotyping

Blood publishes extensive flow cytometry data: immunophenotyping panels for leukemia diagnosis, MRD detection, and immune subset quantification. Some modern flow cytometry software incorporates machine learning for automated gating. This is a research tool, not a writing tool, and belongs in standard Methods.

But if you used AI to help interpret complex flow cytometry patterns or to draft descriptions of immunophenotypic findings, that crosses into the disclosure territory. The distinction matters because flow cytometry interpretation directly affects diagnostic conclusions.

Clinical trial reports

Blood publishes practice-changing clinical trials: new treatments for acute myeloid leukemia, novel therapies for sickle cell disease, CAR-T cell therapy results, and transplant conditioning regimen comparisons. These papers determine treatment protocols at cancer centers worldwide.

For clinical trial reports, keep AI away from efficacy conclusions, safety summaries, and treatment recommendations. You can use AI to polish your language, but the clinical interpretations must be entirely human-generated. Blood's clinical reviewers are hematologists who'll notice if AI-generated text introduces the kind of hedged, noncommittal phrasing that large language models tend to produce when discussing clinical outcomes.

Writing your AI disclosure statement

Here's what proper disclosure looks like for different Blood submission types.

For an original research article (e.g., AML genomics study):

"During the preparation of this manuscript, the authors used Claude (Claude 3.5, Anthropic) to improve the clarity and readability of the Introduction and Discussion sections. All AI-generated suggestions were reviewed and edited by the corresponding author (J.K.) and the senior computational biologist (L.M.). No AI tools were used in study design, sequencing analysis, variant interpretation, or clinical correlations. The authors take full responsibility for the content of this article."

For a clinical trial report (e.g., Phase III leukemia trial):

"The authors used ChatGPT (GPT-4o, OpenAI) to improve the English language of the Methods section. No AI tools were used for trial design, data analysis, efficacy assessment, safety evaluation, or interpretation of clinical outcomes. All clinical conclusions were drawn by the study investigators based on the pre-specified statistical analysis plan. The authors take full responsibility for the published content."

For a review article:

"During the preparation of this review, the authors used ChatGPT (GPT-4, OpenAI) to improve the readability of selected sections and to check the consistency of reference citations. The literature search, study selection, data synthesis, and all scientific conclusions were performed entirely by the authors. The authors take full responsibility for the published content."

For a brief report / Blood Advances submission:

"Claude (Claude 3.5, Anthropic) was used to improve the language of the manuscript. All content was reviewed and approved by all authors, who take full responsibility for the published work."

Each of these follows the same structure: name the tool, describe what was and wasn't done, confirm author responsibility. Shorter papers can have shorter disclosures, but the essential elements remain the same.

What happens if you don't disclose

Blood follows COPE guidelines for handling integrity concerns, and undisclosed AI use falls squarely within that framework. Here's the escalation path:

During peer review. If a reviewer flags text that appears AI-generated, or if the editor suspects undisclosed AI use, you'll receive an email asking for clarification. This isn't automatically a rejection, but it does raise a red flag. Editors may use AI detection tools as a preliminary check, though everyone in the field acknowledges these tools have significant false positive rates.

After conditional acceptance. If AI use surfaces during the revision or production phase, the paper can be held until the issue is resolved. You'll need to add a proper disclosure statement and explain the omission. This delays publication and creates a record that follows the manuscript.

After publication. This is where things get serious:

  • Correction: If the AI use was limited to language editing and didn't affect scientific content, a published correction adding the disclosure may suffice. This is the best-case scenario for post-publication discovery.
  • Expression of concern: If the scope of AI use raises questions about the paper's scientific validity, for example, if AI was used to draft interpretive conclusions, the editor may publish an expression of concern while the matter is investigated.
  • Retraction: If AI involvement was extensive enough to undermine confidence in the scientific content, retraction becomes possible. This is the nuclear option and it's rare, but it's on the table.
  • Institutional notification: In serious cases, Blood may notify the authors' institution. For faculty at academic medical centers, this triggers a formal research integrity investigation, a process that can take six months to a year and has career implications far beyond one paper.

The practical reality: most undisclosed AI use gets caught during peer review or shortly after. Reviewers in hematology are increasingly attuned to AI-generated prose patterns, and Blood's reviewer pool includes some of the sharpest editors in the field. It's much better to disclose upfront than to explain after the fact.

Comparison with other top hematology and oncology journals

| Feature | Blood | JCO | Leukemia | Haematologica | Blood Advances |
|---|---|---|---|---|---|
| Publisher | ASH | ASCO/WK | Nature/Springer | EHA/Ferrata Storti | ASH |
| Policy source | ASH | ASCO | Springer Nature | EHA | ASH |
| AI authorship | Prohibited | Prohibited | Prohibited | Prohibited | Prohibited |
| Disclosure location | Methods | Methods | Methods | Methods | Methods |
| AI-generated images | Prohibited | Prohibited | Prohibited | Prohibited | Prohibited |
| Dual disclosure | No | No | No | No | No |
| Impact factor (approx.) | ~21 | ~45 | ~12 | ~10 | ~8 |
| Clinical trial emphasis | High | Very high | Moderate | Moderate | Moderate |

Several things are worth noting in this comparison:

Blood and JCO overlap significantly in clinical hematology-oncology content. Both publish leukemia trials, lymphoma studies, and transplant outcomes. JCO's policy comes from ASCO, Blood's from ASH, but the substance is nearly identical. If you're deciding between the two for a clinical paper, the AI policy won't be the differentiating factor.

Leukemia follows Springer Nature's policy, which is generally consistent with what Blood requires but comes from a commercial publisher rather than a medical society. Springer Nature's framework covers thousands of journals, so the language tends to be more generic. Blood's ASH-derived policy has a more clinical tone.

Haematologica is published by the European Hematology Association through Ferrata Storti Foundation. The EHA's AI policy aligns with European standards and COPE guidelines. It's broadly equivalent to ASH's approach.

All five journals agree on the fundamentals: no AI authorship, mandatory disclosure, no AI-generated images, and full author responsibility. The differences are in implementation details, not principles.

Keep a running log

Start tracking AI use from the moment you begin drafting. A simple text file noting "Used ChatGPT for language editing, Introduction, March 5" gives you the raw material for an accurate disclosure statement. Reconstructing your AI use from memory six months later when you're doing final revisions isn't reliable.
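A minimal sketch of what that running log could look like in practice, assuming a plain text file with one pipe-delimited entry per use. The file name and entry format here are arbitrary choices for illustration, not a required format:

```python
# Minimal sketch of a running AI-use log, as suggested above.
# The file name and entry layout are illustrative assumptions.
from datetime import date


def log_ai_use(tool, purpose, section, path="ai_use_log.txt"):
    """Append one dated AI-use entry to a plain text log file."""
    entry = f"{date.today().isoformat()} | {tool} | {purpose} | {section}\n"
    with open(path, "a", encoding="utf-8") as f:
        f.write(entry)


# Example entry, mirroring the note in the text above:
log_ai_use("ChatGPT (GPT-4o)", "language editing", "Introduction")
```

When final revisions come around, the log file already contains the tool names, purposes, and sections that the Methods disclosure needs.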


Coordinate with co-authors

Multi-center hematology studies can have 15 to 30 co-authors across multiple institutions. Your co-first-author in Munich might be using DeepL Write for translation assistance. Your biostatistician in Boston might be using GitHub Copilot. The corresponding author needs to ask every co-author about their AI tool use before assembling the final disclosure.

A simple email to all co-authors asking "Did you use any AI tools during your work on this manuscript?" should be part of your pre-submission workflow. Don't assume everyone will volunteer this information unprompted.

Don't over-disclose

Blood doesn't need to know that you used Google Translate to look up a word, or that your word processor's built-in grammar checker flagged a comma splice. The disclosure requirement covers generative AI tools, systems that can produce new text, substantially rephrase content, or generate code. Standard reference management software, statistical packages, and basic writing tools don't require disclosure.

Watch the clinical content

If your paper includes treatment recommendations, prognostic assessments, or diagnostic criteria (and many Blood papers do), make absolutely sure these sections were written by the clinical investigators. AI-assisted language editing of clinical conclusions is acceptable if the clinical reasoning came from the authors. AI-generated clinical reasoning is not.

Before-submission checklist

Use this before submitting to Blood or Blood Advances:

  • [ ] All AI tools used during manuscript preparation have been identified
  • [ ] The Methods section includes a specific disclosure naming each tool, version, and purpose
  • [ ] Research AI (bioinformatics tools, analysis pipelines) is described separately from writing AI
  • [ ] All co-authors have confirmed their AI tool usage (or non-usage)
  • [ ] No AI-generated images or figures are included
  • [ ] Patient data and clinical trial data haven't been processed through external AI tools
  • [ ] Clinical interpretations and treatment conclusions were generated by human investigators
  • [ ] The disclosure confirms that all authors take full responsibility
  • [ ] The ScholarOne submission form questions about AI use have been answered accurately
  • [ ] The manuscript has been read in full by all authors to verify AI-edited sections are accurate

A pre-submission readiness check can help verify that your manuscript meets ASH's editorial requirements and that your AI disclosure statement is complete before you enter the formal review process.

Bottom line

Blood requires AI disclosure in the Methods section, prohibits AI authorship and AI-generated images, and expects clinical content to remain human-generated. The policy comes from ASH and applies identically to Blood Advances and other ASH publications. Compared to JCO, Leukemia, and Haematologica, Blood's rules are substantively similar; the hematology field has converged on a consistent set of expectations. The biggest risk isn't policy complexity; it's simply forgetting to disclose, especially in large multi-author studies where individual co-authors' AI use can slip through the cracks. Track AI use from day one, coordinate with co-authors, and write a specific disclosure statement rather than a vague one.

What should you do about Blood's AI policy?

Comply proactively if:

  • You used any AI tool (ChatGPT, Grammarly, Copilot) during manuscript preparation
  • The journal requires AI use disclosure in the methods or acknowledgments
  • Your institution has its own AI use policy that may be stricter

Less concerned if:

  • You used AI only for grammar/spell checking (most journals exempt this)
  • The journal does not have a formal AI policy yet
  • Your use was limited to literature search or reference management

Frequently asked questions

Does Blood allow the use of AI tools?

Yes, with mandatory disclosure. Blood follows the American Society of Hematology's AI policy. AI tools can be used for language editing and manuscript preparation, but all use must be disclosed in the Methods section. AI cannot be an author, and authors bear full responsibility for the content.

Where should AI use be disclosed?

In the Methods section. Authors must name the specific AI tool, its version, and describe how it was used. The disclosure should confirm that all authors reviewed and take responsibility for the AI-assisted content.

Does the same policy apply to Blood Advances?

Yes. ASH's AI policy covers both Blood and Blood Advances, as well as other ASH publications. The same disclosure requirements, authorship rules, and image prohibitions apply across the ASH journal portfolio.

What happens if AI use goes undisclosed?

Blood treats undisclosed AI use as a publication ethics violation following COPE guidelines. Consequences range from a required correction to expression of concern or retraction, depending on severity. The journal may notify the authors' institution in serious cases.

How is research AI different from writing AI for disclosure purposes?

AI tools used as research methodology (e.g., variant calling algorithms, machine learning classifiers for flow cytometry) are described in standard Methods as research tools. Separately, if you used ChatGPT to edit the manuscript text, that requires an AI writing disclosure. Keep the two clearly distinct in your Methods section.

References

  1. Blood author instructions
  2. ASH publications policies
  3. Blood Advances author instructions
  4. ICMJE recommendations on AI and authorship
  5. COPE position statement on AI in publishing
