Computer Science Review Impact Factor
Computer Science Review's 2024 impact factor is 12.7. See the current rank, quartile, and what the number actually means before you submit.
Senior Researcher, Oncology & Cell Biology
Author context
Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
Journal evaluation
Want the full picture on Computer Science Review?
See scope, selectivity, submission context, and what editors actually want before you decide whether Computer Science Review is realistic.
A fuller snapshot for authors
Use Computer Science Review's impact factor as one signal, then stack it against selectivity, editorial speed, and the journal guide before you decide where to submit.
What this metric helps you decide
- Whether Computer Science Review has the citation profile you want for this paper.
- How the journal compares to nearby options when prestige or visibility matters.
- Whether the citation upside is worth the likely selectivity and process tradeoffs.
What you still need besides JIF
- Scope fit and article-type fit, which matter more than a high number.
- Desk-rejection risk, which impact factor does not predict.
- Timeline and cost context.
How authors actually use Computer Science Review's impact factor
Use the number to place the journal in the right tier, then check the harder filters: scope fit, selectivity, and editorial speed.
Use this page to answer
- Is Computer Science Review actually above your next-best alternatives, or just more famous?
- Does the prestige upside justify the likely cost, delay, and selectivity?
- Should this journal stay on the shortlist before you invest in submission prep?
Check next
- Acceptance rate: highly selective for a review-only venue. A high JIF does not tell you how hard triage will be.
- First decision: ~13 days to first decision. Timeline matters if you are under a grant, job, or revision clock.
- Publishing cost and article type, since those constraints can override prestige.
Quick answer: Computer Science Review currently lists a 2024 impact factor of 12.7 and a CiteScore of 38.4 on its official ScienceDirect insights page. That is a strong citation profile for a computer-science review journal. The practical point is that the journal is not just selective because of prestige. It is selective because it only wants true expert surveys and expository overviews of open problems for a broad computer-science readership.
Computer Science Review impact metrics at a glance
| Metric | Value |
|---|---|
| Impact Factor | 12.7 |
| CiteScore | 38.4 |
| Scopus impact score 2024 | 18.39 |
| SJR 2024 | 3.276 |
| h-index | 88 |
| Best quartile | Q1 |
| Official submission to first decision | 13 days |
| Official submission to decision after review | 87 days |
| Official submission to acceptance | 145 days |
| Official APC | USD 4,420 |
| Publisher | Elsevier |
| ISSN | 1574-0137 / 1876-7745 |
That is a strong package for a review-only venue. The combination of citation performance and selective article type is what makes the journal hard to enter.
What 12.7 actually tells you
The first signal is that the journal is influential, not peripheral. A 12.7 impact factor and 38.4 CiteScore place Computer Science Review among the higher-authority review venues in computer science.
The second signal is that review journals work differently. A strong number here does not mean the journal is broad in article type. It means the journal is rewarding a narrow format very well: high-value surveys that help general computer-science readers understand a field.
The third signal is that the article form is doing real work. Because the journal is not built around routine primary research volume, strong citation numbers usually reflect papers that become orientation points for a topic, not just papers that publish incremental results.
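To make the headline number concrete: a two-year Journal Impact Factor is simply citations received in the JCR year to items from the previous two years, divided by the citable items published in those years. The sketch below uses illustrative figures only (the item and citation counts are assumptions, not Clarivate data) to show the arithmetic behind a 12.7.

```python
def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Two-year JIF: citations received in year Y to items published in
    years Y-1 and Y-2, divided by citable items from those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Illustrative only: a review journal publishing ~100 citable items
# over two years would need ~1,270 citations in the JCR year to
# reach a 12.7 impact factor.
print(round(impact_factor(1270, 100), 1))  # 12.7
```

The point of the arithmetic: a review journal with a small, curated output needs comparatively few total citations to post a very high JIF, which is why per-article influence matters more than raw volume here.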
Computer Science Review impact factor trend
The official journal insights page is the authoritative source for the current impact factor and CiteScore cited here. For the longer directional view, the table below uses the open Scopus-based impact-score series as a trend proxy.
| Year | Scopus impact score |
|---|---|
| 2014 | 3.35 |
| 2015 | 4.86 |
| 2016 | 6.76 |
| 2017 | 7.89 |
| 2018 | 8.83 |
| 2019 | 10.59 |
| 2020 | 13.67 |
| 2021 | 16.99 |
| 2022 | 17.99 |
| 2023 | 19.83 |
| 2024 | 18.39 |
Directionally, the open Scopus-based signal is down from 19.83 in 2023 to 18.39 in 2024, but still extremely strong compared with the earlier history of the journal. The healthier interpretation is not that the journal weakened. It is that the venue remains highly influential even after a slight normalization from a recent peak.
Why the number can mislead authors
The mistake is to see a strong impact factor and assume any long review article should aim here.
That is usually wrong. Computer Science Review is restrictive in article type. The official guidance says the journal publishes research surveys and expository overviews of open problems and that the treatment should be more than a catalogue of known results. Expanded versions of primary research papers are generally not acceptable.
So the number can flatter the fit.
A technically strong manuscript can still be wrong here if it is:
- too narrow for a general computer-science audience
- mostly a literature inventory rather than a critical synthesis
- a disguised research paper with a large related-work section
- missing real open-problem framing
How Computer Science Review compares with nearby choices
| Journal | Best fit | When it beats Computer Science Review | When Computer Science Review is stronger |
|---|---|---|---|
| Computer Science Review | Broad expert surveys with deep synthesis and open-problem framing | When the manuscript should teach the wider CS field, not only one specialist group | When the survey has strong breadth and interpretive value |
| ACM Computing Surveys | Canonical large-scale CS surveys | When the article is mature enough to function as a field reference at the highest survey tier | When the fit is slightly narrower or more Elsevier-shaped editorially |
| Foundations and Trends title | Long monograph-style expert synthesis in a defined subfield | When the work is best owned by one clearly bounded specialty lane | When the survey should reach a broader general CS audience |
| Specialist review venue | Narrow technical readership | When the topic is too bounded for a general audience | When the field value travels across computer science |
That comparison matters more here than in many journal families because article type is part of the journal identity, not just a formatting choice.
What pre-submission reviews reveal about Computer Science Review submissions
In our pre-submission review work with manuscripts targeting Computer Science Review, the repeating problem is not low technical quality. It is low survey maturity.
The common misses are:
- The paper has coverage but not judgment. It lists methods, papers, or architectures accurately, but does not compare them critically enough.
- The topic is too narrow. A useful review for a specialist community can still be too bounded for the broad readership the journal names explicitly.
- The article is too attached to the authors' own work. That often makes the manuscript feel like a long positioning document rather than a field guide.
- The open-problem framing is thin. Computer Science Review expects the paper to orient future work, not just summarize completed work.
If that sounds familiar, a survey-readiness check is usually more useful than another formatting pass.
The information gain that matters here
The official journal insights page adds one especially useful signal beyond the impact factor: the journal's publishing timeline is relatively transparent.
| Official timeline signal | Value | Why it matters |
|---|---|---|
| Submission to first decision | 13 days | Editorial fit decisions tend to happen quickly |
| Submission to decision after review | 87 days | Reviewed surveys go through a real substantive process |
| Submission to acceptance | 145 days | Accepted papers still often require substantial editorial and reviewer work |
Those numbers reinforce the article-type reality. The desk can be quick because the journal knows what it wants. The rest of the timeline reflects how much work a true expert survey often still needs.
How to use this number in journal selection
Use the impact factor to place Computer Science Review correctly. It is a serious Q1 review venue, not a soft option.
Then ask the harder question: does the manuscript function as a real survey product for a broad computer-science audience?
That usually means checking whether the article:
- teaches the field architecture clearly
- compares approaches critically
- names unresolved problems honestly
- matters beyond one technical niche
- still works even if the authors' own papers are not the center of gravity
If the answer is yes, the metric supports the target. If the answer is no, the number is making the fit look stronger than it is.
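The five-question screen above can be run as a blunt checklist before investing in submission prep. This is a hypothetical self-audit sketch, not an editorial rubric; the check names and the all-or-nothing threshold are assumptions.

```python
# Illustrative pre-submission checks mirroring the five questions above.
SURVEY_CHECKS = [
    "teaches the field architecture clearly",
    "compares approaches critically",
    "names unresolved problems honestly",
    "matters beyond one technical niche",
    "works without the authors' own papers at the center",
]

def survey_ready(answers: dict) -> bool:
    """All five checks must hold; any missing check counts as a no."""
    return all(answers.get(check, False) for check in SURVEY_CHECKS)

draft = {check: True for check in SURVEY_CHECKS}
draft["compares approaches critically"] = False
print(survey_ready(draft))  # False
```

The all-or-nothing threshold is deliberate: for a review-only venue, a single weak dimension is often enough for a desk rejection, so a partial score is not reassuring.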
What the number does not tell you
The impact factor does not tell you whether the manuscript is broad enough, interpretive enough, or mature enough as a survey. It also does not tell you whether a specialist review venue or a monograph-style venue would be a more honest match.
Those are the real editorial screens.
Submit if / Think twice if
Submit if:
- the manuscript is a true survey or expository open-problems overview
- the article adds deep interpretive value, not just coverage
- the topic matters to a broad computer-science audience
- the paper would still work if the authors' own research were not central
Think twice if:
- the manuscript is mostly a long related-work section
- the topic is too narrow for a general audience
- the article catalogs papers without evaluating them critically
- a specialist review venue better matches the real readership
Bottom line
Computer Science Review has an official impact factor of 12.7 and an official CiteScore of 38.4. The stronger signal is the combination of those numbers with the journal's strict article-type discipline.
If the paper is not a real expert survey, the metric will flatter the target.
Frequently asked questions
What is Computer Science Review's current impact factor?
Computer Science Review currently lists a 2024 JCR impact factor of 12.7 on its official ScienceDirect journal insights page. The same official page lists a CiteScore of 38.4.
Is Computer Science Review a good journal?
Yes, within review-oriented computer-science publishing it sits in a strong Q1 position. The better signal is the combination of high citation performance and a very selective article type: expert surveys for a broad computer-science audience.
Does a high impact factor mean my review is likely to be accepted?
No. The journal still screens for broad readership, deep interpretive value, and a manuscript that behaves like a true survey rather than an expanded research paper.
What gets survey submissions rejected here?
The common misses are narrow specialist reviews, literature maps without critical comparison, and manuscripts that still read like primary research with a long related-work section attached.
What should authors check besides the impact factor?
Authors should also use the official CiteScore, timeline, and article-type rules. For this journal, the article form matters almost as much as the citation metrics.
Reference library
Use the core publishing datasets alongside this guide
This article answers one part of the publishing decision. The reference library covers the recurring questions that usually come next: whether the package is ready, what drives desk rejection, how journals compare, and what the submission requirements look like across journals.
Checklist system / operational asset
Elite Submission Checklist
A flagship pre-submission checklist that turns journal-fit, desk-reject, and package-quality lessons into one operational final-pass audit.
Flagship report / decision support
Desk Rejection Report
A canonical desk-rejection report that organizes the most common editorial failure modes, what they look like, and how to prevent them.
Dataset / reference hub
Journal Intelligence Dataset
A canonical journal dataset that combines selectivity posture, review timing, submission requirements, and Manusights fit signals in one citeable reference asset.
Dataset / reference guide
Peer Review Timelines by Journal
Reference-grade journal timeline data that authors, labs, and writing centers can cite when discussing realistic review timing.