Computer Science Review Submission Guide: What to Know Before You Draft a Survey
Computer Science Review's submission process, first-decision timing, and the editorial checks that matter before peer review begins.
Senior Researcher, Oncology & Cell Biology
Author context
Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
Readiness scan
Before you submit to Computer Science Review, pressure-test the manuscript.
Run the Free Readiness Scan to catch the issues most likely to stop the paper before peer review.
Key numbers before you submit to Computer Science Review
Selectivity, editorial speed, and cost context: the metrics that shape whether and how you submit.
What selectivity actually means here
- Computer Science Review screens hard at the editorial stage, so desk rejection is a real risk even for technically sound work.
- Scope misfit and framing problems drive most early rejections, not weak methodology.
- Papers that reach peer review face a different bar: novelty, rigor, and fit with the journal's editorial identity.
What to check before you upload
- Scope fit — does your paper address the exact problem this journal publishes on?
- Desk decisions are fast; scope problems surface within days.
- Cover letter framing — editors use it to judge fit before reading the manuscript.
How to approach Computer Science Review
Use the submission guide like a working checklist. The goal is to make fit, package completeness, and cover-letter framing obvious before you open the portal.
| Stage | What to check |
|---|---|
| 1. Scope | Confirm the manuscript is a true survey for a broad CS readership |
| 2. Package | Tighten structure, comparison logic, and open-problem framing before upload |
| 3. Cover letter | Submit only when the article already reads like an expert field guide |
Quick answer: This computer science review submission guide answers the real question behind how to submit to Computer Science Review: first confirm that you have written a true survey. Official Elsevier guidance says the journal publishes research surveys and expository overviews of open problems for a general computer-science audience, and that the treatment should be more than a catalogue of known results. If the manuscript is really an expanded research paper, the submission is mistargeted before format even matters.
From our manuscript review practice
The biggest Computer Science Review mistake is submitting a long related-work section and calling it a survey when the journal is actually screening for expert synthesis, comparison, and open-problem framing.
Computer Science Review: Key submission facts
| Requirement | Details |
|---|---|
| Publisher | Elsevier |
| Journal type | Survey and review journal |
| Core readership | General computer-science audience |
| Official article expectation | Research survey or expository overview of open problems |
| Optimal length signal | About 30 printed pages or roughly 20,000 words |
| Public timeline signal | 15 days to first decision, 201 days to acceptance |
| Open access option | Available, listed APC USD 4,420 |
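The length signal in the table above lends itself to a quick self-check before upload. Below is a minimal sketch that flags a draft's word count against the journal's roughly 20,000-word signal; the 15% tolerance band is an illustrative assumption, not journal policy.

```python
# Hypothetical length check against Computer Science Review's stated
# signal of roughly 20,000 words (about 30 printed pages).
# The 15% tolerance band is an illustrative assumption.
def length_flag(word_count, target=20_000, tolerance=0.15):
    """Return a rough verdict on whether a draft's length fits the signal."""
    low, high = target * (1 - tolerance), target * (1 + tolerance)
    if word_count < low:
        return "short for a full survey"
    if word_count > high:
        return "long; expect tightening requests"
    return "within the typical range"

print(length_flag(8_000))   # -> short for a full survey
print(length_flag(21_500))  # -> within the typical range
```

A short draft is not automatically disqualifying, but it usually signals a tutorial or a literature map rather than a full field survey.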
What Computer Science Review is actually screening for
Computer Science Review is selective in a way that many authors underestimate. The issue is not simply whether the topic is interesting. It is whether the manuscript teaches a broad computer-science reader how to understand an active area.
Editors are usually asking:
- is this a genuine survey rather than a disguised research paper
- does the article matter to a broad computer-science audience
- is the treatment interpretive and comparative rather than descriptive
- does the paper identify open problems clearly enough to move the field conversation forward
That is why a technically strong manuscript can still miss here. A niche literature map may be useful to specialists and still be too narrow for the journal. A long related-work section may be accurate and still not function like a survey.
The public guide for authors makes this explicit in a practical way. It says the review should contain deep insight, open problems, and a comprehensive bibliography. That combination tells you the journal wants field architecture, not just field coverage.
Before you submit
Pressure-test these points before upload:
- can you explain why a general computer-science reader should care
- does the manuscript compare approaches critically rather than just listing them
- is there a visible section on open problems or unresolved tensions
- would the paper still work if all your own prior work were removed
- is the survey the natural primary product, not a derivative of a research paper you already wrote
If those answers are weak, the paper is usually early or aimed at the wrong venue.
What the official materials make explicit
The current author guidance is unusually useful because it describes the shape of a successful survey directly.
| Official signal | Why it matters |
|---|---|
| The journal publishes research surveys and expository overviews of open problems | Routine empirical papers are not the right object |
| Articles should be aimed at a general computer-science audience | A narrow subfield review can still be too local |
| The treatment should be more than a catalogue of known results | Comparison and interpretation are mandatory |
| Expanded versions of primary research papers are generally not acceptable | Repackaging an existing paper is a poor fit |
| A typical survey should include open problems and a comprehensive bibliography | The paper should orient the field, not just summarize it |
| ScienceDirect insights list 15 days to first decision | Editors appear to decide fit relatively quickly |
The public guide also describes a practical structure for the manuscript: introduction, outline, basic concepts, review of known results or approaches, comments on relevance and comparison, open problems, and a comprehensive bibliography. That is close to a checklist for whether the article is behaving like a real survey.
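That structural checklist can double as a quick automated screen. The sketch below keyword-matches a draft's section headings against the expected survey sections; the section names follow the structure described in the public guide, while the keyword lists and the substring-matching heuristic are illustrative assumptions.

```python
# Hypothetical self-check: flag which of the expected survey sections
# appear to be missing from a draft, by keyword-matching its headings.
# Section names follow the public guide's structure; the keyword lists
# and matching heuristic are illustrative assumptions.
EXPECTED_SECTIONS = {
    "introduction": ["introduction"],
    "basic concepts": ["basic concepts", "background", "preliminaries"],
    "review of known results": ["review", "known results", "approaches", "survey"],
    "comparison": ["comparison", "discussion", "relevance"],
    "open problems": ["open problems", "future directions", "challenges"],
    "bibliography": ["references", "bibliography"],
}

def missing_sections(headings):
    """Return expected sections with no matching heading in the draft."""
    lowered = [h.lower() for h in headings]
    missing = []
    for section, keywords in EXPECTED_SECTIONS.items():
        if not any(k in h for h in lowered for k in keywords):
            missing.append(section)
    return missing

draft_headings = ["Introduction", "Background", "Survey of Approaches", "References"]
print(missing_sections(draft_headings))  # -> ['comparison', 'open problems']
```

A draft that fails this crude screen almost certainly fails the editor's version of it: the two sections most often flagged in practice, comparison and open problems, are exactly the ones the journal names as mandatory.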
Common failure patterns at this journal
1. The manuscript is really a research paper in disguise
The clearest mismatch is an article built around the author's own method or dataset with a large related-work section attached.
2. The survey is too narrow for the readership
Some surveys are useful inside one technical niche but have too little value for the broader computer-science audience the journal names directly.
3. The review has coverage but not judgment
A literature map without critical comparison, synthesis, and open-problem framing reads incomplete here.
4. The paper explains what exists but not what the field still cannot do
Computer Science Review expects open problems to be part of the product, not an afterthought.
Before you invest in the wrong draft shape, a computer-science survey fit check can tell you whether the main problem is scope, article type, or analytical depth.
Readiness check
Run the scan while Computer Science Review's requirements are in front of you.
See how this manuscript scores against Computer Science Review's requirements before you submit.
Cover letter and submission checklist
Before you enter the submission portal, make sure the package can answer these questions directly:
- what exact area of computer science does the survey organize
- why does the topic deserve a broad review now
- what comparative judgment does the article add beyond existing surveys
- where are the open problems surfaced clearly
- why is this a survey for Computer Science Review rather than a specialist venue
At this journal, the cover letter should make the readership and article-type case quickly. It should not read like a generic prestige note.
The most useful cover-letter sentence is often the one that explains why the topic now needs a broad, expert synthesis. If that sentence is vague, the article usually has not yet justified itself at the journal level.
From our pre-submission review work
In our pre-submission review work with manuscripts targeting Computer Science Review, four repeat patterns show up before peer review starts.
- The article is a strong tutorial but not a strong survey. It teaches the basics well, but it does not compare approaches or frame the field's unresolved problems sharply enough.
- The manuscript is too attached to the author's own program of work. That often makes the paper read like an expanded self-positioning document instead of a balanced field guide.
- The topic is important but too narrow for the journal's named readership. This happens often in specialized machine learning, security, or systems subareas.
- The bibliography is large but the editorial logic is thin. A survey-readiness check is useful here because the real weakness is often article architecture, not expertise.
Those patterns matter because Computer Science Review is one of the venues where a good manuscript can still be a wrong manuscript. Authors sometimes interpret that as harshness when it is really just article-type discipline.
Computer Science Review versus nearby alternatives
| Journal | Best fit | Think twice if |
|---|---|---|
| Computer Science Review | Broad expert surveys that add deep insight and open-problem framing | The manuscript is narrow, tutorial-only, or still behaves like primary research |
| ACM Computing Surveys | Major authoritative CS surveys, often with even broader canonical reach | The article does not yet feel mature enough to serve as a field reference |
| Foundations and Trends title | Long monograph-style expert synthesis in a defined subfield | The manuscript needs a more general computer-science readership |
| Specialist review venue | High-quality survey for one technical community | The broader CS audience is not the real owner |
The best target depends on who the survey is really trying to teach. If the answer is one subcommunity, the journal may be too broad. If the answer is the wider field, the survey needs corresponding authority and balance.
Submit If
- the manuscript is a real survey, not a disguised research paper
- the topic matters to a broad computer-science audience
- the article compares approaches critically and clearly
- open problems are a meaningful part of the paper
- the survey adds deep insight rather than just collecting references
Think Twice If
- the manuscript is mainly a long literature review attached to a primary contribution
- the topic is too narrow for a general computer-science readership
- the article catalogs methods without interpreting them
- the open-problem section is thin or missing
Before upload, run a computer-science first-read check to see whether the manuscript is truly shaped like a survey journal submission.
Frequently asked questions
How do you submit to Computer Science Review?
Computer Science Review uses Elsevier's submission workflow, but the real gate is editorial fit. The journal publishes research surveys and expository overviews of open problems in computer science, so authors should first confirm that the manuscript is a true expert survey rather than an expanded research paper.
What kind of articles does Computer Science Review publish?
The journal publishes research surveys and expository overviews for a general computer-science audience. Official author guidance says the treatment should be more than a catalogue of known results and should add deep insight to the topic under review.
Can you submit an expanded version of a published research paper?
In general, no. The public guide for authors says expanded versions of primary research papers are generally not acceptable. Authors should treat the venue as a survey journal, not as a repackaging lane.
Why do submissions get rejected before peer review?
Common problems include a manuscript that is too narrow for a broad computer-science audience, a literature map without enough critical comparison, and a paper that behaves like a research article with a long related-work section rather than a genuine survey.
Final step
Submitting to Computer Science Review?
Run the Free Readiness Scan to see score, top issues, and journal-fit signals before you submit.
Anthropic Privacy Partner. Zero-retention manuscript processing.