Reference notes
Coverage: 6 journal metrics compared
Sources: Clarivate JCR 2024 + Scopus
Last reviewed: February 2026
Prepared by the Manusights editorial team.
Research-assessment guide
Journal Metrics Beyond Impact Factor: CiteScore, SJR, Eigenfactor, h-index, and Altmetrics
Impact factor gets most of the attention, but it's one of at least six journal-level metrics that researchers, librarians, and promotion committees use. Each measures something different. Using the wrong metric for a given purpose leads to bad decisions, and it looks uninformed to a grant reviewer who knows the difference.
This guide explains what each metric measures, how it's calculated, where to find it, and when to use it.
Quick orientation
Use this page when a journal conversation has moved beyond impact factor and the real question is which metric fits the decision.
This guide is for readers who need to compare journal signals without pretending one number can do every job. It is most useful when journal choice, promotion language, grant narratives, or library support work requires metric literacy rather than metric worship.
Best used with
Impact factor explained
Go deeper on the single metric that still dominates submission and promotion conversations.
Open access guide
Pair prestige and ranking signals with APCs, licensing, and funder rules.
Journal submission specs
Move from journal comparison into the operational submission package requirements.
Start here first
The 4-part journal-metrics workflow
If you are choosing a journal, first identify the right metric for the decision, then compare journals within the same field, then check how institutions actually use the metric, and finally confirm where the underlying data is coming from.
Choose the right metric
Use the metric that matches the decision instead of defaulting to impact factor for everything.
Compare within field
Interpret rankings in category context, because raw numbers vary widely across disciplines.
Check the use case
Submission planning, promotion files, and article-level impact all need different evidence.
Verify the source
Know whether the value comes from JCR, Scopus, SCImago, or an all-time index before quoting it.
Methodology
What this journal-metrics guide is built from
This reference synthesizes Clarivate JCR, Scopus, SCImago, Eigenfactor, and DORA-style research-assessment guidance. The “which metric to use when” section is emphasized because the main failure mode is not ignorance of metrics, but using the wrong metric for the wrong decision.
Quick Comparison
| Metric | Source | Free? | Citation Window |
|---|---|---|---|
| Journal Impact Factor (JIF) | Clarivate (Journal Citation Reports) | No (subscription) | 2 years |
| CiteScore | Elsevier (Scopus) | Yes | 4 years |
| SCImago Journal Rank (SJR) | SCImago (Scopus data) | Yes | 3 years |
| Eigenfactor Score | eigenfactor.org (Web of Science data) | Yes | 5 years |
| h-index (journal-level) | Web of Science, Scopus, or Google Scholar | Partial | All-time |
| Altmetrics | Altmetric.com, PlumX (Elsevier) | Partial | Real-time / ongoing |
Interpretation rules
Four rules that keep journal-metrics decisions honest
- Match the metric to the decision; don't default to impact factor for everything.
- Compare journals only within the same field or subject category.
- Never treat a journal-level metric as a proxy for an individual article's quality (the DORA point below).
- Name the data source (JCR, Scopus, SCImago, or Google Scholar) before quoting a number.
Each Metric in Detail
Journal Impact Factor (JIF)
Best used for
Comparing journals within the same field; establishing rough tier hierarchies; grant and promotion documentation
Limitations
- Can't compare across fields (a neuroscience IF of 10 ≠ a clinical medicine IF of 10)
- Skewed by a small number of highly cited articles: the median article's citation count is far lower
- Paywalled (a JCR subscription is required to access full data)
- Doesn't capture very recent citation trends (2-year lag)
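For reference, the calculation behind the headline number, shown here for a hypothetical 2024 JCR year, is:

$$\mathrm{JIF}_{2024} = \frac{\text{citations received in 2024 to items published in 2022–2023}}{\text{citable items (articles and reviews) published in 2022–2023}}$$

Note the asymmetry: the numerator counts citations to all document types, while the denominator counts only citable items, which slightly favors journals with heavy front matter.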
CiteScore
Best used for
Free alternative to JIF; broader coverage than JCR; checking journals not indexed in Web of Science
Limitations
- Uses Scopus data, not Web of Science (different journal coverage)
- 4-year window inflates scores relative to JIF for journals with slow citation uptake
- Not as widely recognized as JIF by grant and promotion committees
- Methodology changed in 2020: earlier versions counted all document types (including editorials and letters) in the denominator, so pre-2020 values are not directly comparable
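The CiteScore calculation, again using 2024 as an example year, is symmetric: the same document set appears in both numerator and denominator.

$$\mathrm{CiteScore}_{2024} = \frac{\text{citations in 2021–2024 to documents published in 2021–2024}}{\text{documents published in 2021–2024}}$$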
SCImago Journal Rank (SJR)
Best used for
Free ranking tool; assessing journal prestige by field quartile (Q1–Q4); comparing journals across Scopus categories
Limitations
- Less intuitive than a simple citation count
- Quartile rankings vary by subject category and change annually
- Uses Scopus coverage, not Web of Science
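The full SJR algorithm adds damping and size normalization, but its core idea reduces to a prestige-weighted citation count:

$$P_j \propto \sum_i \frac{c_{ij}}{C_i}\, P_i$$

where $c_{ij}$ is the citations from journal $i$ to journal $j$ within the 3-year window, $C_i$ is journal $i$'s total outgoing citations, and the prestige values $P$ are computed iteratively, so a citation from a selective, high-prestige journal counts for more than one from a high-volume, low-prestige journal.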
Eigenfactor Score
Best used for
Understanding journal influence in the citation network; Article Influence Score (roughly, Eigenfactor ÷ article count) for per-article comparison
Limitations
- Not normalized for journal size: Nature will always score higher than a specialty journal
- Article Influence Score is more useful for comparisons than raw Eigenfactor
- Less commonly cited in grant/promotion contexts than JIF
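Eigenfactor (like SJR) is built on eigenvector centrality over the citation network. The sketch below is a toy illustration with a hypothetical three-journal network, not the published algorithm (which adds teleportation damping and excludes journal self-citations); it shows how power iteration finds each journal's share of network influence.

```python
import numpy as np

# Toy citation matrix: entry [i][j] = citations FROM journal i TO journal j
# over the window (hypothetical numbers for journals A, B, C).
citations = np.array([
    [0, 8, 2],   # A mostly cites B
    [5, 0, 1],   # B mostly cites A
    [4, 3, 0],   # C cites both
], dtype=float)

# Normalize each journal's outgoing citations to sum to 1, then transpose
# to get a column-stochastic matrix: a citation from a selective journal
# carries more weight per citation than one from a prolific citer.
M = (citations / citations.sum(axis=1, keepdims=True)).T

# Power iteration: repeatedly redistribute influence until it stabilizes.
# The fixed point is the leading eigenvector of M, the shared idea
# behind Eigenfactor and SJR.
influence = np.ones(3) / 3
for _ in range(100):
    influence = M @ influence
    influence /= influence.sum()  # guard against numerical drift

for name, score in zip("ABC", influence):
    print(f"Journal {name}: {score:.3f}")
```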
h-index (journal-level)
Best used for
Assessing the sustained citation record of a journal; identifying journals with consistent high-impact output (not just occasional blockbuster papers)
Limitations
- Favors older, larger journals: newer journals can't accumulate a high h-index yet
- Doesn't capture recent trajectory or emerging journals
- Primarily used as an author-level metric; journal-level h-index is less commonly reported
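The definition is the same at author or journal level. A minimal sketch of the computation:

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical per-paper citation counts: four papers are cited
# at least 4 times each, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3, 0]))  # -> 4
```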
Altmetrics
Best used for
Measuring public engagement and societal impact of individual articles; demonstrating broader impact beyond academic citations for grant narratives
Limitations
- Measures attention, not quality: viral controversy can inflate scores as much as genuine impact
- Inconsistent between Altmetric.com and PlumX
- Not accepted as a quality indicator by most promotion committees
- Easy to game (social media sharing by authors inflates scores)
Which Metric to Use When
📌 Deciding where to submit your manuscript
JIF (for field-calibrated tier comparison) + acceptance rate + review timeline. JIF tells you roughly where the journal sits within your field. Don't use altmetrics or h-index for this.
📌 Promotion and tenure documentation
JIF is the most recognized. SJR quartile (Q1–Q4) is a useful free supplement if your target journals aren't in JCR. CiteScore is sometimes accepted. Check what your institution specifically requires.
📌 Grant applications (NIH, NSF, UKRI)
JIF for publication venue quality. Altmetrics for demonstrating broader impact beyond academia (useful in NSF broader impacts or UKRI public engagement sections). Article-level citation counts for your specific papers.
📌 Evaluating journals NOT indexed in Web of Science
CiteScore (Scopus) or SJR: both cover more journals than JCR. Also check DOAJ listing for OA journals and whether the journal is in MEDLINE.
📌 Comparing journals across different fields
Don't use raw JIF. Use field-normalized metrics: the SJR quartile within the specific subject category, or the JIF percentile within its field as reported in JCR. A JIF of 10 means very different things in clinical medicine vs neuroscience.
📌 Assessing individual article impact
Citation count (Web of Science or Scopus) is most reliable. Altmetric attention score for online/policy engagement. Downloads or views for early-stage articles before citations accumulate.
The DORA Perspective
The San Francisco Declaration on Research Assessment (DORA) has been signed by thousands of researchers and hundreds of institutions worldwide. Its central recommendation: don't use journal-level metrics as proxies for the quality of individual research articles in hiring, promotion, or funding decisions.
The reason is straightforward: JIF is the average across all articles in a journal. Any individual paper could be far above or below that average. Using the journal's JIF as a proxy for the article's quality confuses two completely different things.
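A made-up distribution makes the gap concrete: two blockbuster papers can pull the mean an order of magnitude above the typical article.

```python
from statistics import mean, median

# Hypothetical citation counts for ten articles in one journal:
# two blockbusters, eight typical papers.
counts = [120, 45, 6, 4, 3, 2, 2, 1, 1, 0]

print(f"mean (JIF-style average): {mean(counts):.1f}")    # 18.4
print(f"median (typical article): {median(counts):.1f}")  # 2.5
```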
The practical reality in biomedical research: JIF is still widely used in exactly the way DORA argues against. Knowing its limitations lets you engage with it honestly: as a rough signal of journal standing, not as a quality stamp on your specific work.
Where to Find Each Metric (Free Options)
- JIF: paywalled in Journal Citation Reports; many publishers display the current value on journal homepages
- CiteScore: free on Scopus source pages (no subscription needed for CiteScore values)
- SJR and quartile rankings: free at scimagojr.com
- Eigenfactor and Article Influence Score: free at eigenfactor.org
- Journal-level h-index (h5-index): free via Google Scholar Metrics
- Altmetric attention scores: free via Altmetric badges on participating publishers' article pages
References
- Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA. 2005;102(46):16569-16572. [doi.org/10.1073/pnas.0507655102 ↗]
- San Francisco Declaration on Research Assessment (DORA). (2012). Retrieved February 2026. [sfdora.org ↗]
- Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. Bibliometrics: The Leiden Manifesto for research metrics. Nature. 2015;520(7548):429-431. [doi.org/10.1038/520429a ↗]
- Clarivate. (2024). Journal Citation Reports methodology. Clarivate Analytics. [jcr.clarivate.com ↗]
- Elsevier. CiteScore metrics: Methodology and calculation. Scopus. Retrieved February 2026. [elsevier.com/scopus/metrics ↗]
- Bornmann L, Marx W. The h index as a research performance indicator. Eur Sci Ed. 2011;37(3):77-80. [ease.org.uk ↗]
Ready to apply this to a real draft?
Move from reference guidance to a manuscript-specific check
Use the public submission-readiness path when you already have a manuscript and need a draft-specific signal, not just a general guide.
Best for researchers who want a fast readiness read before deciding whether to revise, retarget, or submit.
Related guides in this collection
Impact Factor Explained
Dive deeper into the most recognized journal metric and its limits.
Open Access Guide
Connect journal prestige decisions to APCs, models, and publishing route.
Predatory Journals
Sanity-check questionable journals before trusting their claimed metrics.
Reference Library
Return to the wider publishing reference system for timelines, acceptance rates, and submission specs.
Frequently Asked Questions
What is the difference between Impact Factor, CiteScore, and h-index?
Impact Factor (IF) measures a journal's average citations per article over the prior two years, published annually by Clarivate in the Journal Citation Reports. CiteScore is Elsevier's competing metric using a four-year citation window, generally producing higher scores than IF for the same journal. The h-index is best known as an author-level metric: a researcher has an h-index of N if they have N papers each cited at least N times, though the same calculation can be applied to a journal's output (see above). All three metrics have known limitations: IF rewards high-citation-volume fields (cell biology, oncology) over slower-citing disciplines (mathematics, clinical medicine), and none capture article-level variation within a journal.
Is a higher Impact Factor always better for my career?
Not necessarily. Impact Factor varies enormously by field: an IF of 4 in mathematics is exceptional, while an IF of 4 in oncology is average. Field-normalized metrics like SCImago Journal Rank (SJR) or Source Normalized Impact per Paper (SNIP) are fairer for cross-field comparisons. Many hiring committees and grant panels now look beyond IF to article-level metrics, journal reputation within the field, and the quality of the specific paper. Publishing one genuinely impactful paper in a respected mid-IF journal often matters more than chasing a top-5 IF journal and landing a desk rejection.
What is Quartile ranking (Q1, Q2, Q3, Q4) and how is it used?
Quartile rankings sort journals within a subject category by Impact Factor (or SJR score) into four equal groups. Q1 journals are in the top 25% of their category, Q2 in the next 25% (top 25-50%), Q3 in the next (top 50-75%), and Q4 in the bottom 25%. Many funding agencies, institutions, and tenure committees use quartile rankings as a proxy for journal quality within a discipline. Check both the category and the quartile: a Q1 journal in a narrow field often carries more weight than a Q2 journal in a broad, high-IF field.
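As a minimal illustration (exact boundary and tie-breaking conventions vary between JCR and SCImago), mapping a journal's within-category rank to a quartile looks like:

```python
def quartile(rank: int, category_size: int) -> str:
    """Map a rank within a subject category (1 = best) to Q1-Q4."""
    share_from_top = rank / category_size
    if share_from_top <= 0.25:
        return "Q1"
    if share_from_top <= 0.50:
        return "Q2"
    if share_from_top <= 0.75:
        return "Q3"
    return "Q4"

print(quartile(12, 200))   # rank 12 of 200  -> Q1
print(quartile(140, 200))  # rank 140 of 200 -> Q3
```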