Reference notes

Coverage

9 biomedical fields

Sources

Clarivate JCR 2024

Last reviewed

February 2026

Prepared by the Manusights editorial team.

Journal-metrics guide

Journal Impact Factor: What It Means for Submission Decisions

Impact factor comes up in every conversation about where to publish. Advisors mention it. Grant committees look at it. Promotion files include it. But it's also widely misunderstood, especially when it comes to using it to actually choose where to submit.

This guide explains what JIF is, how it's calculated, why comparing it across fields doesn't work, and what to use it for (and what not to).

Quick orientation

Use this page when impact factor is showing up in a journal decision and you need to keep the number in context.

This guide is most useful when a lab is comparing journal tiers, writing a promotion narrative, or deciding how much weight a raw JIF should carry. The main job is not to glorify the number, but to stop it from distorting field-specific submission decisions.

9 field ranges · 5 decision questions · JCR-based examples · Best for journal triage

Start here first

The 4-part impact-factor workflow

If you are using JIF for submission planning, first calibrate it within field, then understand what the number actually measures, then check what it misses, and only then decide how much weight it deserves in the final journal decision.

01

Calibrate by field

Use field-specific ranges before interpreting a number as high, mid, or accessible.

02

Understand the formula

Know the citation window and denominator so you know what JIF is and is not showing.

03

Check the limits

Remember that JIF is a journal average, not a measure of your paper’s quality or fit.

04

Use it with other inputs

Combine JIF with acceptance rate, audience, review time, and journal fit before choosing where to submit.

Methodology

What this JIF guide is built from

This reference is grounded in Clarivate JCR methodology plus field-specific journal calibration from the Manusights journal database. The field-comparison section is emphasized because cross-field misuse is the most common way researchers misread impact factor.

In this guide

The sections that matter most when JIF enters the conversation

Start with the formula only if you need to understand the number itself. Most readers should spend more time on field calibration, what JIF misses, and how to combine it with fit, selectivity, and timing.

What Impact Factor Actually Is

Journal Impact Factor (JIF) is a single number that Clarivate calculates annually and publishes in the Journal Citation Reports (JCR). The formula is simple:

JIF (2024) = Citations in 2024 to articles published in 2022–2023
divided by
Number of citable items published in 2022–2023

So if NEJM published 500 research articles in 2022–2023, and those articles were cited 39,250 times during 2024, NEJM's JIF for 2024 would be 39,250 ÷ 500 = 78.5.

That's it. No complexity. The issues (and there are real ones) come from what this metric can't account for, and how it gets misused.
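The arithmetic above can be reproduced in a few lines. This is a minimal sketch: the function simply encodes the two-year formula, and the NEJM figures are the illustrative numbers from the example above, not an official JCR extract.

```python
def journal_impact_factor(citations_in_year: int, citable_items: int) -> float:
    """JIF = citations this year to items published in the prior two years,
    divided by the number of citable items published in those two years."""
    return citations_in_year / citable_items

# Illustrative NEJM example from the text: 39,250 citations received in 2024
# by 500 research articles published in 2022-2023.
print(journal_impact_factor(39_250, 500))  # 78.5
```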

2024 JIF Examples

Lancet: 88.5
NEJM: 78.5
JAMA: 55.0
Nature Medicine: 50.0
Nature: 48.5
Science: 45.8
Cancer Cell: 44.5
Nature Methods: 32.1
Neuron: 15.0
PNAS: 9.1
PLOS ONE: 2.6

Source: Clarivate JCR 2024

Practical note

Four ways researchers tend to misuse impact factor

Comparing raw JIF across unrelated fields as if the number carried one universal meaning.
Using JIF as a proxy for article quality when it only reports a journal-level citation average.
Treating JIF as if it tells you acceptance probability, scope fit, or review speed.
Quoting JIF in promotion or grant contexts without understanding the DORA-style objections to overuse.

The Most Common Mistake: Comparing Across Fields

A JIF of 10 means something completely different in neuroscience than it does in clinical medicine. In clinical medicine, a JIF of 10 puts you in the lower tier of general interest journals. In neuroscience, a JIF of 10 puts you in solid mid-tier territory (Brain has an 11.7 JIF and is a very well-regarded journal). In math or humanities, a JIF of 2 can represent the top journals in the field.

This happens because citation rates differ between fields. Clinical researchers cite aggressively: a methods paper or clinical trial in a major journal can accumulate thousands of citations quickly. Neuroscience researchers cite more conservatively, and the reference list for a typical paper is shorter. JIF just reflects those cultural differences in citation behavior. It says nothing absolute about journal quality across field boundaries.

Typical JIF Ranges by Biomedical Field

Use this table to calibrate what "high," "mid," and "accessible" mean within your field, not across fields.

Field | Top Tier JIF | Mid Tier JIF | Accessible JIF
Clinical Medicine | 55–88 | 20–42 | 2–15
Multidisciplinary | 45–48 | 9–16 | 2–4
Oncology | 35–44 | 15–30 | 5–20
Cardiology | 35–38 | 15–22 | 5–16
Cell Biology | 30–42 | 10–20 | 2–10
Genomics & Methods | 29–32 | 12–16 | 5–13
Immunology | 26–27 | 10–20 | 3–10
Neuroscience | 20–45 | 10–15 | 3–11
GI / Hepatology | 25–26 | 10–16 | 3–9

Source: Clarivate JCR 2024 data for journals in Manusights database. Tiers are based on journal positioning within each field, not absolute IF values.
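Field calibration can be sketched as a simple lookup. This is a simplified illustration, not a complete implementation: the tier floors are taken from the table above for three example fields only, and everything below the mid-tier floor is treated as "accessible."

```python
# Tier floors (top-tier lower bound, mid-tier lower bound) for a few example
# fields, taken from the ranges in the table above. Simplification: any JIF
# below the mid-tier floor is labeled "accessible".
FIELD_TIER_FLOORS = {
    "clinical medicine": (55, 20),
    "neuroscience": (20, 10),
    "cell biology": (30, 10),
}

def calibrate(field: str, jif: float) -> str:
    """Interpret a raw JIF within a field's own range."""
    top_floor, mid_floor = FIELD_TIER_FLOORS[field.lower()]
    if jif >= top_floor:
        return "top tier"
    if jif >= mid_floor:
        return "mid tier"
    return "accessible"

# The same raw number reads differently depending on the field:
print(calibrate("neuroscience", 16))       # mid tier
print(calibrate("clinical medicine", 16))  # accessible
```

The point of the sketch is that the field key, not the raw number, does most of the interpretive work.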

What JIF Doesn't Tell You

Whether your specific paper will do well

JIF is the average across all articles. At most journals, a minority of highly cited papers drives most of that average. Your paper's citation count will depend on the quality of the work, not the journal it's in, though visibility does help.

How selective the journal actually is

Nucleic Acids Research has a JIF of 13.1 and an acceptance rate around 45%. JACC has a JIF of 22.3 and accepts around 5% of submissions. JIF and selectivity are correlated, but the relationship isn't tight enough to substitute one for the other.

Scope fit

A JIF of 30 in a journal outside your subfield is worth less than a JIF of 10 in the flagship journal your community actually reads. Where your peers publish and read matters more than the raw JIF number.

Article-level quality

Publishing in a high-JIF journal doesn't make a study better. The research community has known this for a long time: the San Francisco Declaration on Research Assessment (DORA) explicitly discourages using JIF to evaluate individual researchers. It's a journal-level metric being applied at the article and researcher level.

Open access reach

Nature Communications (fully OA, JIF 15.7) may get your work read by more people than a subscription journal with a higher JIF. OA and JIF address different things. For researchers whose funding agencies require OA, JIF is secondary to compliance.

How fast you'll get a decision

High-JIF journals aren't necessarily faster. Scientific Reports (JIF 3.9) takes ~120 days for a first decision. Nature Methods (JIF 32.1) returns desk decisions in about 4 days. Decision timeline is largely independent of JIF.

How to Actually Use JIF When Choosing a Journal

1

Establish your field's range first

Using the table above, find what top-tier, mid-tier, and accessible look like in your specific field. A JIF of 16 might be a reach journal in neuroscience and a comfortable target in cell biology. Get calibrated before you compare journals.

2

Build a tiered shortlist, not a single target

Most experienced researchers identify three journal tiers before submitting: a reach (top-tier JIF, low acceptance, high upside), a solid match (mid-tier JIF, fits scope and audience), and an accessible option (higher acceptance, lower bar on significance). JIF helps slot journals into those tiers within your field.

3

Combine it with acceptance rate and timeline

JIF alone is half the picture. A journal with a JIF of 22 and 5% acceptance (JACC) requires a very different manuscript than a journal with a JIF of 22 and 20% acceptance (Blood). See the acceptance rate guide and peer review timeline guide alongside JIF data to build a complete picture.
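One way to keep JIF from dominating the comparison is to line shortlist candidates up across all three dimensions at once. A minimal sketch, with loud caveats: the JIF and acceptance figures below are the examples quoted in this guide, while the first-decision days are hypothetical placeholders, not real journal data.

```python
from dataclasses import dataclass

@dataclass
class JournalCandidate:
    name: str
    jif: float
    acceptance_rate: float    # fraction of submissions accepted
    first_decision_days: int  # hypothetical placeholder values below

# JIF and acceptance figures are the examples from this guide;
# decision times are invented for illustration only.
shortlist = [
    JournalCandidate("JACC", 22.3, 0.05, 30),
    JournalCandidate("Blood", 22.0, 0.20, 35),
]

for j in shortlist:
    print(f"{j.name}: JIF {j.jif}, accepts {j.acceptance_rate:.0%}, "
          f"~{j.first_decision_days}d to first decision")
```

Seeing the columns side by side makes the guide's point concrete: two journals with nearly identical JIFs can demand very different manuscripts.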

4

Don't chase JIF at the expense of fit

Submitting to the highest-JIF journal you can think of, regardless of scope, leads to serial rejection and wasted months. A paper that's a perfect fit for Gut (JIF 25.8) will fare better there than at a general journal with a higher JIF that doesn't focus on gastroenterology. Fit and significance for the specific readership matter more than the absolute number.

The Broader Debate: Should JIF Be Used at All?

The scientific community has serious reservations about how JIF gets used. The San Francisco Declaration on Research Assessment (DORA), signed by thousands of researchers and hundreds of institutions, explicitly calls for journal-level metrics not to be used as surrogates for the quality of individual research articles in funding, hiring, or promotion decisions.

The practical reality: JIF is still widely used in many countries and institutions for exactly that purpose. Knowing what it is and what it can't tell you lets you engage with it honestly. It's a useful rough signal of journal standing within a field, not a measure of your work's quality.

For individual article quality, citations, altmetrics, and downstream use are more informative than the journal's average. For journal selection as a researcher, scope fit, acceptance rate, and review timeline are at least as important as JIF.

Source note

Where the values in this guide come from

All impact-factor values cited here come from Clarivate Journal Citation Reports (JCR), 2024 release. JCR remains the authoritative source for JIF data and typically requires institutional access. The examples on this page are included for reference and educational use, not as a substitute for the full JCR dataset.

References

  1. Garfield E. The history and meaning of the journal impact factor. JAMA. 2006;295(1):90-93. [doi.org/10.1001/jama.295.1.90 ↗]
  2. Clarivate. Journal Citation Reports. Clarivate Analytics; 2024. [jcr.clarivate.com ↗]
  3. San Francisco Declaration on Research Assessment (DORA). 2012. Retrieved February 2026. [sfdora.org ↗]
  4. Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. Bibliometrics: The Leiden Manifesto for research metrics. Nature. 2015;520(7548):429-431. [doi.org/10.1038/520429a ↗]
  5. Bornmann L, Daniel HD. What do we know about the h index? J Am Soc Inf Sci Technol. 2007;58(9):1381-1385. [doi.org/10.1002/asi.20609 ↗]

Ready to apply this to a real draft?

Move from reference guidance to a manuscript-specific check

Use the public submission-readiness path when you already have a manuscript and need a draft-specific signal, not just a general guide.

Best for researchers who want a fast readiness read before deciding whether to revise, retarget, or submit.

Related guides in this collection