Pre-Submission Review for Implementation Science Papers
Implementation science papers need pre-submission review that checks framework use, strategy reporting, outcomes, context, and journal fit.
Author context
Senior Researcher, Oncology & Cell Biology
Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
Readiness scan
Before you submit to Science, pressure-test the manuscript.
Run the Free Readiness Scan to catch the issues most likely to stop the paper before peer review.
Science at a glance
Key metrics that situate the journal before you decide whether it fits your manuscript and career goals.
What makes this journal worth targeting
- IF 45.8 puts Science in a visible tier — citations from papers here carry real weight.
- Scope specificity matters more than impact factor for most manuscript decisions.
- An acceptance rate below ~7% means fit determines most outcomes.
When to look elsewhere
- When your paper sits at the edge of the journal's stated scope — borderline fit rarely improves after submission.
- If timeline matters: Science takes ~14 days to first decision. A faster-turnaround journal may suit a grant or job deadline better.
- If open access is required by your funder, verify the journal's OA agreements before submitting.
How to use this page well
These pages work best when they behave like tools, not essays. Use the quick structure first, then apply it to the exact journal and manuscript situation.
| Question | What to do |
|---|---|
| Use this page for | Getting the structure, tone, and decision logic right before you send anything out. |
| Most important move | Make the reviewer-facing or editor-facing ask obvious early rather than burying it in prose. |
| Common mistake | Turning a practical page into a long explanation instead of a working template or checklist. |
| Next step | Use the page as a tool, then adjust it to the exact manuscript and journal situation. |
Quick answer: Pre-submission review for implementation science papers should test whether the implementation problem, framework, strategy, context, outcomes, reporting checklist, stakeholder logic, and journal fit support the manuscript's claim. Implementation reviewers are quick to reject manuscripts where the intervention is described but the implementation strategy, mechanism, or context is too thin to teach the field anything reusable.
If you need a manuscript-specific readiness diagnosis, start with the AI manuscript review. If the paper is mainly about care delivery, utilization, access, or claims/EHR data, see pre-submission review for health services research.
Method note: this page uses Implementation Science submission guidance, Implementation Science reporting guidance, CFIR user-guide materials, EQUATOR reporting principles, and Manusights implementation-review patterns reviewed in April 2026.
What This Page Owns
This page owns implementation-science-specific pre-submission review. It applies to implementation trials, hybrid effectiveness-implementation studies, de-implementation, process evaluations, implementation strategies, framework-guided qualitative studies, implementation outcomes, audit-and-feedback studies, guideline uptake, and health-system change interventions.
| Intent | Best owner |
|---|---|
| Implementation science manuscript needs field critique | This page |
| Healthcare delivery data dominates | Health services research review |
| Public health intervention dominates | Public health review |
| Trial statistics dominate | Statistical review |
| General clinical manuscript | Medical manuscript review |
The boundary is implementation knowledge. The manuscript should explain not only whether something worked, but how, why, where, and for whom implementation occurred.
What Implementation Reviewers Check First
Implementation reviewers often ask:
- what implementation problem is being solved?
- which implementation strategy was used, and is it described well enough to replicate?
- which framework, theory, or model guides the work?
- are implementation outcomes distinct from clinical or service outcomes?
- is context described at the setting, team, workflow, organization, and policy levels?
- are fidelity, adaptation, reach, feasibility, acceptability, adoption, and sustainment handled?
- is the right reporting checklist included?
- does the target journal expect implementation science rather than quality improvement or effectiveness research?
The paper has to generate transferable implementation knowledge, not just report a local improvement project.
In Our Pre-Submission Review Work
In our pre-submission review work, implementation science manuscripts most often fail when the authors know the field vocabulary but do not use it to structure the evidence.
- Framework decoration: CFIR, RE-AIM, EPIS, Normalization Process Theory, or another framework is named, but aims, data collection, analysis, and interpretation do not actually use it.
- Strategy blur: the intervention is described while the implementation strategy remains vague.
- Outcome mixing: clinical outcomes, service outcomes, and implementation outcomes are combined as if they answer the same question.
- Context gap: workflow, staffing, leadership, incentives, resources, and local adaptation are too thin for readers to judge transferability.
- Reporting mismatch: TIDieR, CONSORT, StaRI, SQUIRE, COREQ, or another design-specific checklist is missing or incomplete.
A useful review should identify whether the manuscript is truly implementation science or an effectiveness paper with implementation language added late.
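To make the outcome-mixing failure concrete, here is a minimal sketch, assuming a Python self-check, that sorts a manuscript's outcome list into the three categories. The category sets draw on Proctor's implementation outcomes plus RE-AIM's reach, and the `classify_outcome` helper is our own illustrative name; none of this is a validated taxonomy.

```python
# Illustrative sketch: sort a manuscript's outcomes so implementation,
# service, and clinical outcomes stay separate. The category sets draw on
# Proctor's taxonomy (plus RE-AIM's reach) but are not a validated list.

IMPLEMENTATION_OUTCOMES = {
    "acceptability", "adoption", "appropriateness", "cost", "feasibility",
    "fidelity", "penetration", "reach", "sustainability",
}
SERVICE_OUTCOMES = {
    "efficiency", "safety", "timeliness", "equity", "patient-centeredness",
}

def classify_outcome(name: str) -> str:
    """Return the category a reviewer would expect this outcome reported under."""
    key = name.lower()
    if key in IMPLEMENTATION_OUTCOMES:
        return "implementation"
    if key in SERVICE_OUTCOMES:
        return "service"
    return "clinical"  # default bucket; verify against your own protocol

# Example: a mixed outcome list from a hypothetical hybrid trial
for outcome in ["fidelity", "reach", "timeliness", "HbA1c change"]:
    print(f"{outcome}: {classify_outcome(outcome)} outcome")
```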
Public Field Signals
Implementation Science guidance tells authors to use design-appropriate reporting checklists and has published explicit reporting guidance for implementation science articles. Its reporting work points to EQUATOR and notes that TIDieR can improve intervention description across evaluative study designs, not only trials. CFIR user-guide materials emphasize matching constructs, questions, coding, and interpretation to the implementation project.
Those signals matter before submission. A paper that treats the framework as a label rather than an analytic tool is easy for reviewers to downgrade.
Implementation Science Review Matrix
| Review layer | What it checks | Early failure signal |
|---|---|---|
| Implementation problem | Adoption, fidelity, reach, feasibility, sustainment, de-implementation | Problem is clinical only |
| Framework | CFIR, RE-AIM, EPIS, NPT, Proctor outcomes | Framework appears after methods |
| Strategy | Actor, action, target, dose, timing, materials | Strategy cannot be replicated |
| Context | Setting, workflow, organization, policy, resources | Context is background only |
| Outcomes | Implementation, service, clinical separation | Outcome categories are mixed |
| Reporting | TIDieR, CONSORT, StaRI, SQUIRE, COREQ | Checklist missing |
| Journal fit | Implementation Science, Implementation Science Communications, quality, HSR | Audience mismatch |
This matrix keeps the page distinct from health services research.
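If it helps to treat the matrix literally as a tool, here is a minimal sketch in Python that turns the seven layers into a self-check. The layer keys, question wording, and the `readiness_gaps` helper are our own illustrative names, not part of any published checklist.

```python
# A sketch that turns the review matrix above into a pre-submission self-check.
# Layer names follow the table; the yes/no questions paraphrase each row.

REVIEW_LAYERS = {
    "implementation_problem": "Is the problem framed as adoption, fidelity, reach, feasibility, sustainment, or de-implementation, not clinical only?",
    "framework": "Did CFIR, RE-AIM, EPIS, NPT, or Proctor outcomes shape aims, data collection, and analysis?",
    "strategy": "Are actor, action, target, dose, timing, and materials described well enough to replicate?",
    "context": "Is setting, workflow, organization, policy, and resource context more than background?",
    "outcomes": "Are implementation, service, and clinical outcomes kept separate?",
    "reporting": "Is a design-appropriate checklist (TIDieR, CONSORT, StaRI, SQUIRE, COREQ) complete?",
    "journal_fit": "Does the target journal expect implementation science rather than QI or effectiveness work?",
}

def readiness_gaps(answers: dict[str, bool]) -> list[str]:
    """Return the questions still answered 'no' -- each is an early failure signal."""
    return [q for layer, q in REVIEW_LAYERS.items() if not answers.get(layer, False)]

# Hypothetical self-assessment for a draft manuscript
answers = {"implementation_problem": True, "framework": False, "strategy": True}
for gap in readiness_gaps(answers):
    print("Fix before submission:", gap)
```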
What To Send
Send the manuscript, target journal, protocol, framework diagram, implementation strategy description, reporting checklist, intervention materials, stakeholder roles, fidelity or adaptation logs, outcome definitions, qualitative codebook if relevant, statistical analysis plan, and prior reviewer comments if available.
If the work is a hybrid trial, include the clinical and implementation outcome hierarchy. If it is qualitative, include sampling, coding, reflexivity, and framework-use details.
What A Useful Review Should Deliver
A useful implementation science pre-submission review should include:
- implementation-contribution verdict
- framework-use critique
- implementation strategy and TIDieR check
- context and adaptation review
- outcome-classification critique
- reporting-guideline check
- journal-lane recommendation
- submit, revise, retarget, or diagnose deeper call
The review should not only say "add context." It should identify which context determines whether the finding transfers.
Common Fixes Before Submission
Before submission, authors often need to:
- distinguish the intervention from the implementation strategy
- integrate the framework into aims, methods, analysis, and discussion
- separate implementation, service, and clinical outcomes
- add detail about actors, actions, dose, timing, and materials
- report fidelity, adaptation, and reach more clearly
- add a design-appropriate checklist
- narrow effectiveness claims
- retarget from an implementation journal to health services, quality improvement, public health, or clinical venues when the contribution is not implementation knowledge
These fixes can prevent a paper from sounding like an ordinary intervention study.
Reviewer Lens By Paper Type
An implementation trial needs strategy description, cluster or site logic, implementation outcomes, fidelity, adaptation, and context. A hybrid study needs a clear hierarchy between effectiveness and implementation aims. A de-implementation paper needs the low-value practice, stakeholder incentives, and replacement behavior to be clear. A qualitative implementation paper needs framework-driven sampling, coding, and interpretation. A process evaluation needs a theory of change, mechanism evidence, and implementation detail.
The AI manuscript review can flag whether the blocking risk is framework use, strategy reporting, outcomes, context, or journal fit.
How To Avoid Cannibalizing Health Services Research Pages
Use this page when the submission risk depends on implementation frameworks, strategy reporting, implementation outcomes, fidelity, reach, adoption, acceptability, feasibility, adaptation, or sustainment. Use health services research review when the manuscript is mainly about healthcare delivery, utilization, access, quality, EHR or claims data, workforce, or system management.
That distinction keeps the page focused on the implementation-science buyer's actual problem.
What Not To Submit Yet
Do not submit an implementation science paper if the implementation strategy cannot be described as who did what, to change whose behavior, at what dose, through which materials, in which context. If that sentence is not answerable, reviewers will not see a transferable strategy.
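One way to run that sentence test is to write the strategy down as structured data and see which fields stay empty. The sketch below does that with a hypothetical `ImplementationStrategy` dataclass; the field names follow the TIDieR-style elements named above, and the example values are invented.

```python
from dataclasses import dataclass, fields

# Hypothetical sketch of the "who did what, at what dose" sentence test.
# Field names mirror TIDieR-style strategy elements; values are invented.

@dataclass
class ImplementationStrategy:
    actor: str       # who delivered the strategy
    action: str      # what they did
    target: str      # whose behavior it was meant to change
    dose: str        # how much, how often
    timing: str      # when in the workflow
    materials: str   # through which materials
    context: str     # in which setting

    def missing_elements(self) -> list[str]:
        """List elements a reviewer could not reconstruct from the manuscript."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

strategy = ImplementationStrategy(
    actor="practice facilitator",
    action="monthly audit-and-feedback sessions",
    target="primary care prescribers",
    dose="60 minutes, monthly, for 12 months",
    timing="",  # unreported -- exactly what the sentence test catches
    materials="run charts and a feedback template",
    context="three rural primary care clinics",
)
print("Under-specified elements:", strategy.missing_elements())
```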
Also pause if the framework is only decorative. The manuscript should show how the framework shaped the research question, data collection, coding, analysis, or interpretation. A named framework that does no work can hurt credibility more than no framework at all.
For hybrid studies, pause if readers cannot tell which finding is clinical effectiveness and which finding is implementation evidence. The manuscript can include both, but the abstract and tables must keep the logic separated.
For de-implementation work, pause if the low-value practice and replacement behavior are not explicit. Reviewers need to know what behavior should stop, what should happen instead, and which implementation strategy is expected to move that behavior.
Submit If / Think Twice If
Submit if:
- the implementation problem is explicit
- strategy reporting is replicable
- framework use shapes the analysis
- implementation outcomes are distinct
- context and adaptation are clear
- target journal matches the contribution
Think twice if:
- the paper is mostly local quality improvement
- framework language is decorative
- implementation and clinical outcomes are mixed
- reporting checklists are incomplete
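Read together, the two lists amount to a small decision rule. Here is a hedged sketch, with flag names mirroring the lists above and an invented `submission_call` helper; the thresholds are illustrative, not a formal rubric.

```python
# Illustrative decision rule built from the submit / think-twice lists above.

def submission_call(
    problem_explicit: bool,
    strategy_replicable: bool,
    framework_shapes_analysis: bool,
    outcomes_distinct: bool,
    context_clear: bool,
    journal_matches: bool,
) -> str:
    flags = [problem_explicit, strategy_replicable, framework_shapes_analysis,
             outcomes_distinct, context_clear, journal_matches]
    if all(flags):
        return "submit"
    if not journal_matches:
        return "retarget"  # contribution may belong in HSR, QI, or a clinical venue
    return "revise"  # fix the failing layers before sending anything out

print(submission_call(True, True, False, True, True, True))  # -> "revise"
```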
Readiness check
Run the scan while Science's requirements are in front of you, and see how the manuscript scores against them before you submit.
Bottom Line
Pre-submission review for implementation science papers should protect the link between implementation evidence and implementation claim. The manuscript needs framework discipline, strategy detail, context, outcome separation, and a journal target that rewards implementation knowledge.
Use the AI manuscript review if you need a fast readiness diagnosis before submitting an implementation science paper.
Sources
- https://implementationscience.biomedcentral.com/submission-guidelines/preparing-your-manuscript/research
- https://implementationscience.biomedcentral.com/articles/10.1186/s13012-017-0546-3
- https://implementationscience.biomedcentral.com/articles/10.1186/s13012-025-01450-7
- https://www.equator-network.org/reporting-guidelines/
Frequently asked questions
What is a pre-submission review for implementation science papers?
It is a field-specific review that checks whether an implementation science manuscript is ready for journal submission, including framework use, implementation strategy description, context, outcomes, reporting checklists, study design, and journal fit.
What do implementation science reviewers criticize most often?
They often attack decorative framework use, thin strategy reporting, unclear implementation outcomes, missing TIDieR or CONSORT materials, poor context description, weak stakeholder logic, and conclusions that sound like effectiveness claims rather than implementation evidence.
How does this review differ from health services research review?
Health services research review focuses on care delivery, utilization, access, quality, and system data. Implementation science review focuses on adoption, fidelity, reach, acceptability, feasibility, sustainment, implementation strategies, and theory or framework use.
When should you use an implementation science pre-submission review?
Use it before submitting implementation trials, hybrid studies, de-implementation studies, process evaluations, framework-guided qualitative studies, or implementation strategy papers where context and reporting could decide the review.
Final step
Submitting to Science?
Run the Free Readiness Scan to see score, top issues, and journal-fit signals before you submit.
Anthropic Privacy Partner. Zero-retention manuscript processing.
Where to go next
- Science Submission Guide
- How to Avoid Desk Rejection at Science (2026)
- Science Journal Review Time 2026: Time to First Decision and Full Timeline
- q.e.d Science Review 2026: Strong on Claim Logic, More Nuanced on Data Rights
- Rejected from Science? The 7 Best Journals to Submit Next
- Science 'Under Review': What Each Status Means and Realistic Timelines