Reference notes
Coverage: 9 biomedical fields
Sources: Clarivate JCR 2024
Last reviewed: February 2026
Prepared by the Manusights editorial team.
Journal Impact Factor: What It Means for Submission Decisions
Impact factor comes up in every conversation about where to publish. Advisors mention it. Grant committees look at it. Promotion files include it. But it's also widely misunderstood, especially when it comes to using it to actually choose where to submit.
This guide explains what JIF is, how it's calculated, why comparing it across fields doesn't work, and what to use it for (and what not to).
Quick orientation
Use this page when impact factor is showing up in a journal decision and you need to keep the number in context.
This guide is most useful when a lab is comparing journal tiers, writing a promotion narrative, or deciding how much weight a raw JIF should carry. The main job is not to glorify the number, but to stop it from distorting field-specific submission decisions.
Best used with
Journal metrics guide
Use it when you need to compare JIF with CiteScore, SJR, Eigenfactor, and other venue signals.
Acceptance rates
Pair prestige with the practical question of how selective the journal actually is.
Peer-review timelines
Add decision-speed context before letting JIF dominate the final shortlist.
Start here first
The 4-part impact-factor workflow
If you are using JIF for submission planning, first calibrate it within field, then understand what the number actually measures, then check what it misses, and only then decide how much weight it deserves in the final journal decision.
Calibrate by field
Use field-specific ranges before interpreting a number as high, mid, or accessible.
Understand the formula
Know the citation window and denominator so you know what JIF is and is not showing.
Check the limits
Remember that JIF is a journal average, not a measure of your paper’s quality or fit.
Use it with other inputs
Combine JIF with acceptance rate, audience, review time, and journal fit before choosing where to submit.
Methodology
What this JIF guide is built from
This reference is grounded in Clarivate JCR methodology plus field-specific journal calibration from the Manusights journal database. The field-comparison section is emphasized because cross-field misuse is the most common way researchers misread impact factor.
In this guide
The sections that matter most when JIF enters the conversation
Start with the formula only if you need to understand the number itself. Most readers should spend more time on field calibration, what JIF misses, and how to combine it with fit, selectivity, and timing.
What JIF is
Formula, citation window, and what Clarivate is actually reporting.
Field calibration
Why a JIF of 10 means very different things across disciplines.
What JIF misses
Article quality, selectivity, scope fit, and decision speed.
How to use it
Build a shortlist without letting the number run the whole decision.
The DORA debate
How to talk about JIF honestly in modern research assessment.
What Impact Factor Actually Is
Journal Impact Factor (JIF) is a single number that Clarivate calculates annually and publishes in the Journal Citation Reports (JCR). The formula is simple:
Citations in 2024 to items published in 2022–2023
divided by
Number of citable items published in 2022–2023
So if NEJM published 500 research articles in 2022–2023, and those articles were cited 39,250 times during 2024, NEJM's JIF for 2024 would be 39,250 ÷ 500 = 78.5.
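The arithmetic in the example above can be sketched in a few lines; the figures are the hypothetical NEJM numbers from the text, not live JCR data:

```python
def journal_impact_factor(citations_to_window: int, citable_items: int) -> float:
    """JIF for year Y: citations received in Y by items published in
    Y-1 and Y-2, divided by the count of citable items from Y-1 and Y-2."""
    return citations_to_window / citable_items

# Worked example from the text: 39,250 citations in 2024 to
# 500 citable items published in 2022-2023.
print(journal_impact_factor(39_250, 500))  # 78.5
```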
That's it. No complexity. The issues (and there are real ones) come from what this metric can't account for, and how it gets misused.
2024 JIF Examples (example table not reproduced here; source: Clarivate JCR 2024)
Practical note
Four ways researchers tend to misuse impact factor
The Most Common Mistake: Comparing Across Fields
A JIF of 10 means something completely different in neuroscience than in clinical medicine. In clinical medicine, a JIF of 10 puts you in the lower tier of general-interest journals. In neuroscience, it puts you in solid mid-tier territory (Brain, a very well-regarded journal, has a JIF of 11.7). In mathematics or the humanities, a JIF of 2 can represent the top journals in the field.
This happens because citation rates differ between fields. Clinical researchers cite aggressively: a methods paper or clinical trial in a major journal can accumulate thousands of citations quickly. Neuroscience researchers cite more conservatively, and the reference list for a typical paper is shorter. JIF just reflects those cultural differences in citation behavior. It says nothing absolute about journal quality across field boundaries.
Typical JIF Ranges by Biomedical Field
Use this table to calibrate what "high," "mid," and "accessible" mean within your field, not across fields.
| Field | Top Tier JIF | Mid Tier JIF | Accessible JIF |
|---|---|---|---|
| Clinical Medicine | 55–88 | 20–42 | 2–15 |
| Multidisciplinary | 45–48 | 9–16 | 2–4 |
| Oncology | 35–44 | 15–30 | 5–20 |
| Cardiology | 35–38 | 15–22 | 5–16 |
| Cell Biology | 30–42 | 10–20 | 2–10 |
| Genomics & Methods | 29–32 | 12–16 | 5–13 |
| Immunology | 26–27 | 10–20 | 3–10 |
| Neuroscience | 20–45 | 10–15 | 3–11 |
| GI / Hepatology | 25–26 | 10–16 | 3–9 |
Source: Clarivate JCR 2024 data for journals in Manusights database. Tiers are based on journal positioning within each field, not absolute IF values.
What JIF Doesn't Tell You
Whether your specific paper will do well
JIF is an average across all articles in the journal. At most journals, a minority of highly cited papers drives most of that average. Your paper's citation count will depend mostly on the quality of the work itself; the journal adds visibility, but it doesn't determine the outcome.
How selective the journal actually is
Nucleic Acids Research has a JIF of 13.1 and an acceptance rate around 45%. JACC has a JIF of 22.3 and accepts around 5% of submissions. JIF and selectivity are correlated, but the relationship isn't tight enough to substitute one for the other.
Scope fit
A JIF of 30 in a journal outside your subfield is worth less than a JIF of 10 in the flagship journal your community actually reads. Where your peers publish and read matters more than the raw JIF number.
Article-level quality
Publishing in a high-JIF journal doesn't make a study better. The research community has known this for a long time: the San Francisco Declaration on Research Assessment (DORA) explicitly discourages using JIF to evaluate individual researchers. It's a journal-level metric being applied at the article and researcher level.
Open access reach
Nature Communications (fully OA, JIF 15.7) may get your work read by more people than a subscription journal with a higher JIF. OA and JIF address different things. For researchers whose funding agencies require OA, JIF is secondary to compliance.
How fast you'll get a decision
High-JIF journals aren't necessarily faster. Scientific Reports (JIF 3.9) takes ~120 days for a first decision; Nature Methods (JIF 32.1) returns desk decisions in about 4 days. Decision timeline is largely independent of JIF.
How to Actually Use JIF When Choosing a Journal
Establish your field's range first
Using the table above, find what top-tier, mid-tier, and accessible look like in your specific field. A JIF of 16 might be a reach journal in neuroscience and a comfortable target in cell biology. Get calibrated before you compare journals.
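As a sketch, this calibration step can be treated as a range lookup against the table above. The `FIELD_TIERS` boundaries below are approximations read off that table, and `classify_jif` is an illustrative helper, not an official Manusights tool:

```python
# Tier boundaries (top-tier minimum, mid-tier minimum) approximated from
# the field table above; values are illustrative, not live JCR data.
FIELD_TIERS = {
    "clinical medicine": (55, 20),
    "neuroscience": (20, 10),
    "cell biology": (30, 10),
}

def classify_jif(field: str, jif: float) -> str:
    """Return the tier label a given JIF falls into within a given field."""
    top_min, mid_min = FIELD_TIERS[field.lower()]
    if jif >= top_min:
        return "top tier"
    if jif >= mid_min:
        return "mid tier"
    return "accessible"

# The same number lands in different tiers depending on the field:
print(classify_jif("neuroscience", 22))       # top tier
print(classify_jif("clinical medicine", 22))  # mid tier
```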
Build a tiered shortlist, not a single target
Most experienced researchers identify three journal tiers before submitting: a reach (top-tier JIF, low acceptance, high upside), a solid match (mid-tier JIF, fits scope and audience), and an accessible option (higher acceptance, lower bar on significance). JIF helps slot journals into those tiers within your field.
Combine it with acceptance rate and timeline
JIF alone is only half the picture. A journal with a JIF of 22 and 5% acceptance (JACC) requires a very different manuscript than one with a JIF of 22 and 20% acceptance (Blood). See the acceptance-rate guide and peer-review-timeline guide alongside JIF data to build a complete picture.
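A minimal way to hold both numbers side by side: the journals and figures below are the ones quoted in this guide, and the `expected_tries` heuristic (1 divided by the acceptance rate) is a rough illustrative assumption, since real submission attempts are not independent:

```python
def expected_tries(acceptance_rate: float) -> float:
    """Naive expected number of submissions before one acceptance,
    assuming independent attempts -- a rough planning heuristic only."""
    return round(1 / acceptance_rate, 1)

shortlist = [
    # (journal, JIF, acceptance rate) -- figures quoted in this guide
    ("JACC", 22.3, 0.05),
    ("Blood", 22.0, 0.20),
    ("Nucleic Acids Research", 13.1, 0.45),
]

for name, jif, acc in shortlist:
    print(f"{name}: JIF {jif}, acceptance {acc:.0%}, "
          f"~{expected_tries(acc)} expected tries")
```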
Don't chase JIF at the expense of fit
Submitting to the highest-JIF journal you can think of, regardless of scope, leads to serial rejection and wasted months. A paper that's a perfect fit for Gut (JIF 25.8) will fare better there than at a general journal with a higher JIF that doesn't focus on gastroenterology. Fit and significance for the specific readership matter more than the absolute number.
The Broader Debate: Should JIF Be Used at All?
The scientific community has serious reservations about how JIF gets used. The San Francisco Declaration on Research Assessment (DORA), signed by thousands of researchers and hundreds of institutions, explicitly calls for not using journal-level metrics as surrogates for the quality of individual research articles in funding, hiring, or promotion decisions.
The practical reality: JIF is still widely used in many countries and institutions for exactly that purpose. Knowing what it is and what it can't tell you lets you engage with it honestly. It's a useful rough signal of journal standing within a field, not a measure of your work's quality.
For individual article quality, citations, altmetrics, and downstream use are more informative than the journal's average. For journal selection as a researcher, scope fit, acceptance rate, and review timeline are at least as important as JIF.
Source note
Where the values in this guide come from
All impact-factor values cited here come from Clarivate Journal Citation Reports (JCR), 2024 release. JCR remains the authoritative source for JIF data and typically requires institutional access. The examples on this page are included for reference and educational use, not as a substitute for the full JCR dataset.
References
- Garfield E. The history and meaning of the journal impact factor. JAMA. 2006;295(1):90-93. [doi.org/10.1001/jama.295.1.90]
- Clarivate. Journal Citation Reports. Clarivate Analytics; 2024. [jcr.clarivate.com]
- San Francisco Declaration on Research Assessment (DORA). 2012. Retrieved February 2026. [sfdora.org]
- Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. Bibliometrics: The Leiden Manifesto for research metrics. Nature. 2015;520(7548):429-431. [doi.org/10.1038/520429a]
- Bornmann L, Daniel HD. What do we know about the h index? J Am Soc Inf Sci Technol. 2007;58(9):1381-1385. [doi.org/10.1002/asi.20609]
Ready to apply this to a real draft?
Move from reference guidance to a manuscript-specific check
Use the public submission-readiness path when you already have a manuscript and need a draft-specific signal, not just a general guide.
Best for researchers who want a fast readiness read before deciding whether to revise, retarget, or submit.
Related guides in this collection
Journal Metrics Guide
Compare JIF to CiteScore, SJR, Eigenfactor, and other venue signals.
Acceptance Rates
Balance prestige against the realistic probability of acceptance.
Open Access Guide
Connect prestige decisions to APCs, OA models, and funder rules.
Predatory Journals
Sanity-check questionable venues before treating any metric as trustworthy.
Submission Requirements
Move from journal ranking into actual submission preparation.