What Is Impact Factor? The Plain-English Explanation
Senior Researcher, Oncology & Cell Biology
Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
Impact factor is a number that runs a surprising amount of academic life. It affects where researchers publish, how departments evaluate hiring candidates, what journals grants require, and occasionally how entire fields distribute prestige.
It's also a two-year-old average citation count calculated from a specific database. That's the whole formula.
The Actual Calculation
Impact factor for a given year is calculated like this:
IF(year Y) = (citations in Y to papers published in Y−1 and Y−2) / (citable items published in Y−1 and Y−2)
So a journal's 2024 IF counts how many times papers from 2022 and 2023 were cited during 2024, then divides by the number of "citable items" the journal published in those two years.
Citable items means original research articles and reviews, not editorials, letters, or news pieces. The distinction matters because a journal can inflate its IF by publishing more editorials and commentary: those pieces attract citations that count in the numerator but, as non-citable items, never enter the denominator.
Clarivate publishes the official IFs annually, usually in June, covering the prior calendar year. The 2024 IF released in June 2025 covers citations made in 2024 to papers from 2022-2023.
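The formula above is simple enough to sketch in a few lines. The numbers below are hypothetical, invented purely to show the arithmetic; they do not describe any real journal.

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Journal impact factor for year Y: citations made in Y to papers
    from years Y-1 and Y-2, divided by the citable items (research
    articles and reviews) published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: its 2022-2023 papers drew 3,000 citations in 2024,
# and it published 500 citable items across 2022-2023.
print(impact_factor(3000, 500))  # 6.0
```

Note that editorials and letters would add to the 3,000 citations if cited, but never to the 500 citable items, which is exactly the denominator effect described above.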
What Impact Factor Measures (and What It Doesn't)
Impact factor measures the average citation rate of papers published in a journal over a two-year window. That's a useful proxy for a journal's reach and influence in its field.
What it doesn't measure:
- Quality of individual papers: High-IF journals publish mediocre papers. Low-IF journals publish important ones. The journal average says nothing about any specific article.
- Reproducibility or rigor: Citation counts don't track whether cited findings replicated. Some highly-cited papers have failed to reproduce.
- Relevance to your career: An IF 40 publication in a journal outside your field may matter less for your career than an IF 8 publication in the leading journal in your specific domain.
- Speed or accessibility: High-IF journals are often slower and more expensive than alternatives.
Why IF Varies Enormously by Field
A journal with IF 5 in mathematics is a top-tier publication. A journal with IF 5 in oncology is mid-tier. A journal with IF 5 in cell biology is below average.
The variation reflects how citation norms differ across disciplines:
- Biology and medicine: large research communities and a fast-moving literature, with lots of papers citing each other rapidly, so IFs run high
- Mathematics and physics: smaller citation pools and papers that stand for decades rather than years, so IFs are lower by convention
- Humanities and social sciences: much of the literature appears in books rather than journals, and citation databases cover less of it, so IFs are often below 5 even for leading journals
Never compare IFs across fields. It's meaningless. A Nature paper (IF 48) and a top economics journal paper (IF 8) represent roughly equivalent prestige within their respective fields.
The JCR and How to Look Up IF
Clarivate's Journal Citation Reports (JCR) is the official source for impact factors. Access typically requires an institutional subscription. Most university libraries provide JCR access for free through their database portals.
If you don't have institutional access, several free alternatives exist:
- Scimago Journal Rankings (SJR): free, covers most indexed journals, uses its own citation metric
- Google Scholar Metrics: doesn't report IF, but ranks journals by h5-index
- Journal websites: many journals report their IF on their "About" page
For an approximate check, searching "[Journal name] impact factor [year]" usually returns the IF from the journal's own reporting or from aggregator sites. Verify against JCR for anything where precision matters.
How IF Affects Your Career (and When It Doesn't)
In research-intensive academic environments, IF is used routinely as a shorthand in hiring, promotion, and grant review. A first-author paper in Nature (IF 48) or NEJM (IF 176) signals something that most evaluation committees understand without needing to read the paper.
The practical effect varies by:
- Career stage: Early-career researchers at competitive institutions feel IF pressure most acutely. Senior researchers with established track records can publish in a wider range of venues without penalty.
- Institution type: Research-intensive universities often weight IF explicitly. Teaching-focused institutions and some industry positions evaluate papers on content rather than IF.
- Field conventions: Some fields (chemistry, materials science) have moved to alternative metrics and care less about traditional IF.
- Country: China, South Korea, and some other research systems have tied promotion directly to IF thresholds, sometimes to a degree that's been criticized for distorting research incentives.
Alternatives to Impact Factor
Several alternative metrics have emerged as supplements or replacements for IF:
- CiteScore (Scopus): Similar calculation but uses a 4-year window instead of 2, and a broader definition of citable documents
- SJR (Scimago Journal Rank): Citation-weighted metric that adjusts for the prestige of citing journals
- H-index: Measures both productivity and citation impact, but was designed for authors, not journals
- Altmetric score: Measures online attention (social media, news, policy documents) rather than citations
- Field-normalized citation impact (FNCI): Compares citations to the field average, adjusting for disciplinary differences
None of these has displaced IF as the dominant metric, but awareness of their existence and what they measure helps researchers evaluate journals more accurately.
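Field normalization is the one idea in this list worth seeing concretely, since it fixes the cross-field comparison problem described earlier. A minimal sketch, with invented numbers used only to illustrate the ratio:

```python
def fnci(paper_citations: int, field_average_citations: float) -> float:
    """Field-normalized citation impact: a paper's citations divided by the
    average citations for papers of the same field, year, and document type.
    1.0 means exactly field average; 2.0 means twice the field average."""
    return paper_citations / field_average_citations

# Hypothetical: a math paper with 12 citations in a field averaging 4
# outperforms, in normalized terms, an oncology paper with 30 citations
# in a field averaging 25.
print(fnci(12, 4))   # 3.0
print(fnci(30, 25))  # 1.2
```

The raw counts (12 vs. 30) point one way; the normalized values (3.0 vs. 1.2) point the other, which is why raw IF comparisons across fields mislead.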
Common Mistakes to Avoid
Most authors lose time here for one reason: they optimize the wrong variable first. They spend hours polishing language while leaving structural issues unresolved. Editors and reviewers evaluate structure before style.
In practice, the recurring mistakes are predictable:
- Using generic claims instead of specifics. Replace vague statements with concrete numbers, study details, and explicit scope boundaries.
- Ignoring fit and audience. A strong manuscript sent to the wrong journal or framed for the wrong reader still fails quickly.
- Treating revision as proofreading. Revision is where argument quality, methodological clarity, and limitation handling should improve meaningfully.
- Skipping process checks. Formatting, references, checklist compliance, and data statements look administrative, but they're part of editorial quality control.
A useful rule is to run one final pre-submission pass that checks only these operational risks: scope fit, claim strength, methods clarity, and policy compliance. That pass catches most avoidable rejection reasons before they become reviewer comments.
If you're deciding between two valid options, pick the one that improves clarity for an external reader who has no context besides your paper. Clearer framing beats denser writing almost every time.
Practical Checklist Before You Act
Use this short checklist right before submission or journal targeting:
- Scope check (2 minutes): Can you explain in one sentence why this exact journal is the right reader audience?
- Claim check (3 minutes): Does each major claim map directly to a result already shown in the manuscript?
- Methods check (3 minutes): Could an external reviewer reproduce your approach from what is written now?
- Limitations check (2 minutes): Are the real constraints stated plainly instead of hidden in soft wording?
- Decision check (2 minutes): If this is rejected at desk, do you already know your next-best journal target?
Most delays in publication come from skipping this simple operational pass. Authors often discover after rejection that the science was acceptable but the framing, scope alignment, or reporting completeness was not. Running this checklist before submission reduces that avoidable risk.
For teams, make one person responsible for this pass. Shared ownership usually means nobody does it thoroughly. A single owner with final sign-off keeps quality control consistent across projects.
Why Some Fields Ignore IF Entirely
A significant minority of researchers work in fields where IF is largely irrelevant to career decisions. Knowing whether your field is one of them saves time and prevents chasing the wrong metrics.
Mathematics is the clearest example. Top mathematics journals like Annals of Mathematics (IF ~4) or Inventiones Mathematicae (IF ~3) are among the most prestigious in the world, and an Annals of Mathematics paper is a career-defining publication. The IF tells you almost nothing about this. It reflects the smaller and slower-citation nature of pure mathematics, not journal quality.
Computer science has a different structure: conference publications (NeurIPS, ICML, CVPR, ICLR) carry more weight than journal publications in many subfields. Papers at top ML conferences are more impactful for hiring and tenure than publications in high-IF journals. IF is essentially irrelevant in those communities.
In engineering, IEEE and ACM publications in specific technical areas often carry more weight than their IFs suggest. A paper in IEEE Transactions on Automatic Control (IF ~6) may matter more for a control systems career than a Nature Communications paper (IF ~16) in the same person's field.
The rule: always ask what your hiring committee specifically looks at. In research medicine and biology, they almost always look at IF. In mathematics, computer science, and many engineering fields, they look at conference acceptance rate, venue reputation, and citation counts instead.
How DORA Changed the Conversation
The San Francisco Declaration on Research Assessment (DORA), published in 2012, argued that using journal IF to evaluate individual researchers is a methodological error: it conflates journal-level averages with paper-level quality.
Over 2,000 organizations have signed DORA, including many research funders (NIH, Wellcome Trust) and universities. In practice, DORA hasn't eliminated IF from evaluation, but it has shifted language. Many institutions now say they "consider the content of papers" rather than "weighting by IF."
For researchers preparing CVs and promotion materials: acknowledge journal prestige where relevant, but also provide citation counts and altmetric scores for individual papers when they're strong. That combination of journal-level and paper-level evidence is more informative and aligns with where evaluation norms are heading.
The Bottom Line
Impact factor is a simple calculation of average citations per paper over a two-year window. It's useful as a rough proxy for journal prestige within a field, and for evaluating your own publication record against field norms. It's not useful for comparing across fields, evaluating individual papers, or as a measure of scientific quality. Use it as one data point among several, not the only one.
Sources
- Clarivate Journal Citation Reports methodology (clarivate.com)
- Scimago Journal Rankings (scimagojr.com)
- San Francisco Declaration on Research Assessment (DORA) (sfdora.org)