What Is Impact Factor? The Plain-English Explanation
Impact factor is a simple formula that gets complicated fast. Here's exactly what it measures, what it doesn't, and how researchers actually use it.
Senior Researcher, Oncology & Cell Biology
Author context
Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
Quick answer: Impact factor is a number that runs a surprising amount of academic life. It affects where researchers publish, how departments evaluate hiring candidates, what journals grants require, and occasionally how entire fields distribute prestige.
It's also a two-year-old average citation count calculated from a specific database. That's the whole formula.
The Actual Calculation
Impact factor for a given year is calculated like this:
IF(Y) = (citations in year Y to items published in Y−1 and Y−2) / (citable items published in Y−1 and Y−2)
So a journal's 2024 IF counts how many times, in 2024, papers from 2022 and 2023 were cited, then divides by the number of "citable items" the journal published in those two years.
Citable items means original research articles and reviews, not editorials, letters, or news pieces. This distinction matters: editorials and letters can attract citations that count toward the numerator without adding anything to the denominator, so a journal can nudge its IF upward by publishing more of them.
Clarivate publishes the official IFs annually, usually in June, covering the prior calendar year. The 2024 IF released in June 2025 covers citations made in 2024 to papers from 2022-2023.
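The formula reduces to a one-line ratio. A minimal sketch, using invented counts for a hypothetical journal (not any real journal's figures):

```python
def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Journal impact factor for year Y: citations received in Y to items
    from Y-1 and Y-2, divided by citable items (articles and reviews)
    published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 3,200 citations in 2024 to its 2022-2023 papers,
# of which it published 400 citable items across those two years.
print(round(impact_factor(3200, 400), 1))  # 8.0
```

The editorial-inflation point above falls straight out of this ratio: anything cited but not counted as a citable item raises the numerator while leaving the denominator untouched.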
What Impact Factor Measures (and What It Doesn't)
Impact factor measures the average citation rate of papers published in a journal over a two-year window. That's a useful proxy for a journal's reach and influence in its field.
What it doesn't measure:
- Quality of individual papers: High-IF journals publish mediocre papers. Low-IF journals publish important ones. The journal average says nothing about any specific article.
- Reproducibility or rigor: Citation counts don't track whether cited findings replicated. Some highly-cited papers have failed to reproduce.
- Relevance to your career: An IF 40 publication in a journal outside your field may matter less for your career than an IF 8 publication in the leading journal in your specific domain.
- Speed or accessibility: High-IF journals are often slower and more expensive than alternatives.
Why IF Varies Enormously by Field
A journal with IF 5 in mathematics is a top-tier publication. A journal with IF 5 in oncology is mid-tier. A journal with IF 5 in cell biology is below average.
The variation reflects how citation norms differ across disciplines:
- Biology and medicine: Large research communities, fast-moving literature, lots of papers citing each other rapidly, high IFs
- Mathematics and physics: Smaller citation pools, papers that stand for decades rather than years, lower IFs by convention
- Humanities and social sciences: Many publications in books rather than journals, smaller citation databases, IFs often below 5 even for leading journals
Never compare IFs across fields. It's meaningless. A Nature paper (IF 48) and a top economics journal paper (IF 8) represent roughly equivalent prestige within their respective fields.
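One way to see why raw cross-field comparison fails is to express each journal's IF as a multiple of a field baseline. The field medians below are invented placeholders for illustration; real values would come from JCR category statistics:

```python
# Invented field-median IFs, for illustration only.
FIELD_MEDIAN_IF = {"cell_biology": 6.0, "mathematics": 0.8, "economics": 1.0}

def relative_if(journal_if: float, field: str) -> float:
    """Journal IF expressed as a multiple of its field's median IF."""
    return journal_if / FIELD_MEDIAN_IF[field]

# With these (invented) baselines, an IF-48 biology journal and an
# IF-8 economics journal look comparable once field norms are factored in.
print(relative_if(48, "cell_biology"))  # 8.0
print(relative_if(8, "economics"))      # 8.0
```

Field-normalized metrics like JCI (discussed later in this article) formalize exactly this idea.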
The JCR and How to Look Up IF
Clarivate's Journal Citation Reports (JCR) is the official source for impact factors. Access typically requires an institutional subscription. Most university libraries provide JCR access for free through their database portals.
If you don't have institutional access, several free alternatives exist:
- Scimago Journal Rankings (SJR), free, covers most indexed journals, uses its own citation metric
- Google Scholar Metrics, doesn't report IF, but ranks journals by a five-year h-index (h5-index)
- Journal websites, many journals report their IF on their "About" page
For an approximate check, searching "[Journal name] impact factor [year]" usually returns the IF from the journal's own reporting or from aggregator sites. Verify against JCR for anything where precision matters.
How IF Affects Your Career (and When It Doesn't)
In research-intensive academic environments, IF is used routinely as a shorthand in hiring, promotion, and grant review. A first-author paper in Nature (IF 48.5) or NEJM (IF 78.5) signals something that most evaluation committees understand without needing to read the paper.
The practical effect varies by:
- Career stage: Early-career researchers at competitive institutions feel IF pressure most acutely. Senior researchers with established track records can publish in a wider range of venues without penalty.
- Institution type: Research-intensive universities often weight IF explicitly. Teaching-focused institutions and some industry positions evaluate papers on content rather than IF.
- Field conventions: Some fields (chemistry, materials science) have moved to alternative metrics and care less about traditional IF.
- Country: China, South Korea, and some other research systems have tied promotion directly to IF thresholds, sometimes to a degree that's been criticized for distorting research incentives.
Alternatives to Impact Factor
Several alternative metrics have emerged as supplements or replacements for IF:
- CiteScore (Scopus): Similar calculation but uses a 4-year window instead of 2, and a broader definition of citable documents
- SJR (Scimago Journal Rank): Citation-weighted metric that adjusts for the prestige of citing journals
- H-index: Measures both productivity and citation impact, but was designed for authors, not journals
- Altmetric score: Measures online attention (social media, news, policy documents) rather than citations
- Field-normalized citation impact (FNCI): Compares citations to the field average, adjusting for disciplinary differences
None of these has displaced IF as the dominant metric, but awareness of their existence and what they measure helps researchers evaluate journals more accurately.
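Most of the journal-level metrics above are variations on the same windowed citation average, differing mainly in the window length. A simplified sketch with invented counts (it glosses over details such as CiteScore counting citations across the whole window rather than in a single year):

```python
def windowed_citation_average(citations_by_pub_year: dict[int, int],
                              items_by_pub_year: dict[int, int],
                              year: int, window: int) -> float:
    """Average citations per item for papers published in the `window`
    years before `year`. window=2 mirrors the IF convention;
    window=4 approximates CiteScore's longer window."""
    years = range(year - window, year)
    cites = sum(citations_by_pub_year.get(y, 0) for y in years)
    items = sum(items_by_pub_year.get(y, 0) for y in years)
    return cites / items

cites = {2020: 500, 2021: 700, 2022: 900, 2023: 1100}
items = {2020: 100, 2021: 100, 2022: 100, 2023: 100}
print(windowed_citation_average(cites, items, 2024, 2))  # 10.0
print(windowed_citation_average(cites, items, 2024, 4))  # 8.0
```

The two numbers diverge because the shorter window overweights the most recent, most-cited years, which is one reason a journal's IF and CiteScore rarely match.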
Common Myths About Impact Factor
Myth: Higher IF always means a better journal. Within the same field, higher IF generally correlates with prestige. But IF 5 in mathematics (Annals of Mathematics) is more prestigious than IF 15 in a mid-tier biomedical field. The number only means something in context.
Myth: Your paper will be cited as often as the journal's IF suggests. Citation distributions within journals are heavily skewed. At most high-IF journals, 10-20% of papers generate the majority of citations. Many individual papers in Nature (IF 48.5) are cited fewer than 10 times.
Myth: IF reflects peer review quality. IF measures citation frequency, not review rigor. Some journals with moderate IFs have excellent peer review. Some high-IF journals have published papers that failed to replicate. The correlation between IF and individual paper quality is weak.
Myth: Clarivate's IF is the only valid metric. CiteScore (Scopus), SJR (Scimago), and field-normalized metrics like JCI each capture different aspects of journal influence. For cross-field comparisons, JCI and SNIP are more informative than raw IF.
Myth: IF is fixed and objective. Clarivate periodically changes which document types count in the denominator, which can shift IFs substantially. Journals can also influence their IF through editorial strategies (publishing more reviews, timing publications, encouraging self-citation). The metric is calculated, not measured.
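The skew behind the second myth (your paper being cited as often as the IF suggests) is easy to demonstrate with made-up numbers: a few heavily cited papers pull the mean far above what a typical paper receives.

```python
from statistics import mean, median

# 100 hypothetical papers: 10 heavily cited, 90 lightly cited.
citations = [200] * 10 + [5] * 90

print(mean(citations))    # 24.5 -> what an IF-style average reports
print(median(citations))  # 5    -> what the typical paper actually gets
```

The mean here is almost five times the median, which is why a journal's IF says little about how often any individual paper in it will be cited.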
Why Some Fields Ignore IF Entirely
A significant minority of researchers work in fields where IF is largely irrelevant to career decisions. Knowing whether your field is one of them saves time and prevents chasing the wrong metrics.
Mathematics is the clearest example. Top mathematics journals like Annals of Mathematics (IF ~4) or Inventiones Mathematicae (IF ~3) are among the most prestigious in the world, and an Annals of Mathematics paper is a career-defining publication. The IF tells you almost nothing about this. It reflects the smaller and slower-citation nature of pure mathematics, not journal quality.
Computer science has a different structure: conference publications (NeurIPS, ICML, CVPR, ICLR) carry more weight than journal publications in many subfields. Papers at top ML conferences are more impactful for hiring and tenure than publications in high-IF journals. IF is essentially irrelevant in those communities.
In engineering, IEEE and ACM publications in specific technical areas often carry more weight than their IFs suggest. A paper in IEEE Transactions on Automatic Control (IF ~6) may matter more for a control systems career than a Nature Communications paper (IF 15.7) in the same person's field.
The rule: always ask what your hiring committee specifically looks at. In research medicine and biology, they almost always look at IF. In mathematics, computer science, and many engineering fields, they look at conference acceptance rate, venue reputation, and citation counts instead.
How DORA Changed the Conversation
The San Francisco Declaration on Research Assessment (DORA), published in 2012, argued that using journal IF to evaluate individual researchers is a methodological error: it conflates journal-level averages with paper-level quality.
Over 2,000 organizations have signed DORA, including many research funders (NIH, Wellcome Trust) and universities. In practice, DORA hasn't eliminated IF from evaluation, but it has shifted language. Many institutions now say they "consider the content of papers" rather than "weighting by IF."
For researchers preparing CVs and promotion materials: acknowledge journal prestige where relevant, but also provide citation counts and altmetric scores for individual papers when they're strong. That combination, journal + paper-level evidence, is more informative and aligns with where evaluation norms are heading.
The Bottom Line
Impact factor is a simple calculation of average citations per paper over a two-year window. It's useful as a rough proxy for journal prestige within a field, and for evaluating your own publication record against field norms. It's not useful for comparing across fields, evaluating individual papers, or as a measure of scientific quality. Use it as one data point among several, not the only one.
Not sure if your manuscript is ready? Our manuscript readiness check evaluates scope, methodology, and journal fit in about 30 minutes.
Frequently asked questions
What is impact factor?
Impact factor is the average number of times articles from a journal were cited in a given year, counting only citations to papers published in the prior two years. A journal with IF 10 means its recent papers were cited an average of 10 times each.
What counts as a good impact factor?
It depends on the field. In highly competitive fields like cell biology and medicine, IF 10+ is strong. In mathematics or engineering, IF 3-5 is excellent. Always compare within the field, never across fields.
Who calculates and publishes impact factor?
Clarivate (formerly Thomson Reuters) calculates and publishes impact factors annually through the Journal Citation Reports (JCR). The data is released each summer, typically in June, covering the prior calendar year.
Can impact factor be manipulated?
Yes. Journals can artificially inflate IF by publishing more review articles (which attract more citations), encouraging self-citation, or coordinating citation rings with other journals. Clarivate has suppressed journals for manipulation.
Does impact factor measure the quality of individual papers?
No. Impact factor measures journal-level citation averages, not the quality of individual papers. Better metrics for individual papers include citation counts, Altmetric scores, and field-normalized citation impact. IF is still widely used, but researchers increasingly combine it with other measures.
Reference library
Use the core publishing datasets alongside this guide
This article answers one part of the publishing decision. The reference library covers the recurring questions that usually come next: whether the package is ready, what drives desk rejection, how journals compare, and what the submission requirements look like across journals.
Checklist system / operational asset
Elite Submission Checklist
A flagship pre-submission checklist that turns journal-fit, desk-reject, and package-quality lessons into one operational final-pass audit.
Flagship report / decision support
Desk Rejection Report
A canonical desk-rejection report that organizes the most common editorial failure modes, what they look like, and how to prevent them.
Dataset / reference hub
Journal Intelligence Dataset
A canonical journal dataset that combines selectivity posture, review timing, submission requirements, and Manusights fit signals in one citeable reference asset.
Dataset / reference guide
Peer Review Timelines by Journal
Reference-grade journal timeline data that authors, labs, and writing centers can cite when discussing realistic review timing.