How to Choose the Right Journal for Your Paper (A Practical Guide)
Research Scientist, Neuroscience & Cell Biology
Works across neuroscience and cell biology, with direct expertise in preparing manuscripts for PNAS, Nature Neuroscience, Neuron, eLife, and Nature Communications.
I want to tell you about two papers.
Both were solid studies. Good data, clear methods, interesting findings. Paper A was a mechanistic study showing how a particular immune cell drives inflammation in the gut. Paper B was a large cohort study linking a blood biomarker to cardiovascular outcomes.
Paper A got sent to a clinical gastroenterology journal. Desk rejected in four days. "Does not meet the clinical focus of our journal."
Paper B got sent to a basic science immunology journal. Desk rejected in three days. "Lacks mechanistic depth."
Both papers eventually got published. Paper A went to a molecular immunology journal, got reviewed, accepted with minor revisions. Paper B went to a clinical cardiology journal, same thing. Both ended up in journals with similar impact factors to the ones that rejected them.
The science didn't change. The journal choice did.
I see this pattern constantly. Researchers lose months, sometimes half a year, bouncing between journals because they chose the wrong target. Not because their work wasn't good enough. Because they didn't understand what the journal actually wanted.
Journal selection isn't a formality. It's a strategic decision. And most people approach it backwards.
The impact factor trap
Let's get this out of the way first because it's the most common mistake.
Most people pick a journal the same way: they rank journals by impact factor, aim for the highest one they think they have a shot at, and work down the list after each rejection.
This is a terrible strategy.
Impact factor tells you how often the average paper in a journal gets cited. It tells you nothing about whether your paper fits that journal. A Cell paper and a Nature paper might have similar impact factors, but they want fundamentally different things. Cell wants mechanistic depth and completeness. Nature wants broad significance and conceptual advance. Same IF neighborhood, completely different editorial taste.
I've watched researchers spend 18 months cascading down an impact factor ladder: Nature, then Cell, then EMBO Journal, then PLOS Biology, then a specialty journal, with each rejection costing 2-4 months. At the end, they publish somewhere they could have targeted from the start.
The time cost is brutal. Every month your paper sits in review at the wrong journal is a month your competitors are publishing, a month your findings aren't out there being cited, a month closer to someone else scooping you.
Impact factor should be one factor in your decision. It shouldn't be the first one. And it definitely shouldn't be the only one.
How editors actually think about fit
To choose the right journal, you need to understand how editors think. And editors think differently than you might expect.
When a new submission arrives, the editor isn't asking "is this good science?" That comes later. The first question is: "is this right for us?"
That means:
Does it match our scope? Not the broad scope statement on the website. The actual scope, as reflected by what they've published recently. Journals drift over time. A journal that published your type of work three years ago might have shifted focus since then.
Will our readers care? Every journal has a specific audience. An ecology journal's readers care about different things than a molecular biology journal's readers, even when the underlying science overlaps. The editor is thinking about whether your paper will get read, cited, and discussed by the people who subscribe to their journal.
Does it fit our current portfolio? Editors think about their journal like a magazine editor thinks about an issue. They want variety. If they just published three papers on CRISPR screens, they might not want a fourth right now, even if yours is excellent. Conversely, if they're trying to build strength in a new area, they might be more receptive to work they'd normally pass on.
Is the story complete enough for us? Different journals have different bars for completeness. Some want a complete, multi-technique study. Others are happy with a focused result. Sending a preliminary finding to a journal that expects completeness guarantees a rejection.
None of these questions are about whether your science is good. They're all about fit. And fit is something you can assess before you submit, if you know what to look for.
The 5-step journal selection process
This is the process I recommend. It takes a few hours upfront but saves months of wasted time.
Step 1: Define what your paper actually is
Before you look at any journal, get clear on what you have. Answer these questions honestly:
- What's the main finding? One sentence.
- What type of paper is it? Mechanistic study? Clinical trial? Methods paper? Descriptive/observational? Review? Commentary?
- How complete is the story? Is it a definitive answer or a suggestive finding? One technique or multiple validations?
- Who needs to know about this? Specialists in your subfield? The broader discipline? Clinicians? Policy makers?
- What's the practical impact? Does it change how people think, or how they act?
Be honest here. "This should be in Nature" is not an assessment. It's a wish. If your paper uses one technique to show a correlation in a specific model system, that's a focused contribution. It might be a great paper. But it's not the same type of paper as a multi-year, multi-technique study that rewrites a textbook chapter.
Knowing what you have helps you find journals that want exactly that.
Step 2: Build your list from the bibliography, not from rankings
The best way to find the right journal is to look at where similar work gets published. Not similar in topic, but similar in scope, technique, and significance level.
Pull up the 10-15 most relevant papers in your reference list. Where did they get published? That's your starting pool. These journals have already demonstrated that they're interested in this type of work, at this level of depth, for this audience.
Pay special attention to papers published in the last two years. Older references might have gone to journals that have since changed scope or editorial team.
Now expand the list. For each journal on your initial list, check what else they've published recently in your area. Do you see a pattern? Some journals might have published one paper on your topic (an outlier). Others might have a steady stream (a real interest area). Prioritize the ones with a steady stream.
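If your reference manager can export journal names, the pooling step above can be sketched in a few lines. This is a minimal illustration, not a tool; the journal names here are made-up example values:

```python
from collections import Counter

# Journal of each paper in your reference list (illustrative values --
# substitute an export from your own reference manager).
reference_journals = [
    "Nature Neuroscience", "Neuron", "eLife", "Neuron",
    "Journal of Neuroscience", "Neuron", "eLife",
    "Nature Communications", "Journal of Neuroscience", "eLife",
]

# Count how often each journal appears: a steady stream of relevant
# papers signals a real interest area; a single hit may be an outlier.
pool = Counter(reference_journals)

# Rank the starting pool by frequency of relevant work, not by impact factor.
for journal, n in pool.most_common():
    label = "steady stream" if n >= 2 else "possible outlier"
    print(f"{journal}: {n} papers ({label})")
```

The point of the sketch is the ranking criterion: frequency of closely related papers, which approximates demonstrated editorial interest.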
Your goal is a list of 5-8 journals, roughly ranked by fit. Not by impact factor. By fit.
Step 3: Read the journals, not the guidelines
Author guidelines tell you formatting requirements. They don't tell you what the journal actually values. For that, you need to read the papers.
For each journal on your shortlist, read 3-5 recent papers in your area. Pay attention to:
Depth of mechanism. Do the papers typically have one key experiment, or are they multi-figure, multi-technique studies? If every paper in the journal has 8 figures and supplementary data that's longer than the main text, and your paper has 4 figures, you might be underpowered for this venue.
Narrative style. Some journals want papers that tell a big story with broad framing. Others want focused, technical reports. Read the introductions. Read the discussions. How do they frame significance? Match your framing to theirs.
Completeness bar. In some journals, every claim has three supporting experiments. In others, a clean result with one solid technique is enough. Know what you're walking into.
The types of model systems they publish. If a journal primarily publishes human patient data and mouse models, and your work is entirely in cell lines, that's a signal. Not a dealbreaker, but a signal.
This homework takes time. But it's the highest-return time investment in the entire publication process. One afternoon of reading could save you six months of cascading rejections.
Step 4: Check practical factors
Once you have your shortlist narrowed to 3-4 journals based on fit, now consider the practical stuff:
Review timeline. Some journals are fast (decision in 3-4 weeks). Others are slow (3-4 months). You can usually find this information on the journal's website or in databases like SciRev where authors report their experiences. If you're in a competitive area, speed matters.
Open access options and cost. If your funder requires open access, check whether the journal offers it and what it costs. APCs at top journals can run $3,000-$11,000. Some institutions have agreements that cover these costs. Check before you submit, not after you're accepted with a bill.
Acceptance rate. This is useful context but don't overweight it. A 5% acceptance rate at Nature is a different game than a 30% acceptance rate at a specialty journal, but both have published great science. The question is whether your paper is competitive at this specific journal, not whether the odds are generally good.
Transfer options. Many publisher families (Nature, Cell, Elsevier, Wiley) offer manuscript transfer between their journals. If you submit to Nature and get rejected, they might offer to transfer your paper (with reviews) to Nature Communications or another sister journal. This saves time. It's worth knowing which journals offer this before you submit.
Step 5: Rank and decide
You should now have 2-3 strong candidates. Rank them by:
- Fit (is this exactly the kind of paper they publish?)
- Audience (will the people who need to see this work find it here?)
- Realistic shot (given your paper's scope and completeness, is this journal achievable?)
- Timeline (can you afford to wait for this journal's review process?)
Your top choice should be the journal where fit is strongest, not where the impact factor is highest. A paper that's a perfect fit for a specialty journal will have a better outcome than the same paper as a marginal submission to a general interest journal.
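One way to make this ranking concrete is a simple weighted score per candidate, with fit weighted heaviest. The weights, journal names, and 1-5 scores below are illustrative assumptions, not a validated formula:

```python
# Criteria weights: fit dominates, per the ranking above (assumed values).
WEIGHTS = {"fit": 0.4, "audience": 0.3, "realistic_shot": 0.2, "timeline": 0.1}

# Score each candidate 1-5 on each criterion (hypothetical journals and scores).
candidates = {
    "Specialty Journal A": {"fit": 5, "audience": 5, "realistic_shot": 4, "timeline": 4},
    "General Journal B": {"fit": 3, "audience": 4, "realistic_shot": 2, "timeline": 2},
}

def weighted_score(scores):
    """Combine per-criterion scores into one fit-weighted total."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Rank candidates by weighted score, best first.
ranked = sorted(candidates, key=lambda j: weighted_score(candidates[j]), reverse=True)
for journal in ranked:
    print(f"{journal}: {weighted_score(candidates[journal]):.2f}")
```

In this toy example the well-fitting specialty journal outscores the marginal general-interest submission, which is exactly the decision rule the step describes.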
When to aim high (and when not to)
I'm not saying never submit to Nature or Cell. Sometimes you should aim high. The question is whether you're doing it for the right reasons.
Aim high when:
- Your finding genuinely changes how people think about a problem
- The implications extend well beyond your subfield
- The evidence is complete and multi-dimensional
- You can articulate why a broad audience needs to see this
Don't aim high when:
- Your main reason is "it would be great for my career"
- The finding is solid but incremental
- The evidence supports the claim but just barely
- You'd need to oversell the significance to justify the venue
There's no shame in publishing in a specialty journal. Some of the most-cited papers in any field are in mid-tier specialty journals, because that's where the right readers are. A paper that gets cited 200 times in a focused journal has more impact than a paper that gets cited 20 times in a glamour journal.
The best researchers I know think about journal selection strategically, not aspirationally. They match the paper to the venue. And they publish faster and get cited more because of it.
Journal red flags
A few things to watch for that signal a journal might not be right for you:
They haven't published your type of work in 2+ years. Even if the scope statement says they cover your area, the editorial team might have moved on. Recent publication history is more reliable than mission statements.
The editorial board has no one in your field. If nobody on the board does work related to yours, who's going to champion your paper? Who's going to pick qualified reviewers?
Review times are consistently terrible. Check SciRev or ask colleagues. If a journal routinely takes 6+ months for a first decision, think hard about whether you can afford that wait. Especially in competitive fields.
They charge for submission. Most reputable journals don't charge submission fees. Publication fees (APCs) after acceptance are different. But paying to submit is unusual and sometimes a sign of a predatory or low-quality journal.
You've never heard of anyone in your field publishing there. If the journal doesn't come up in your literature searches and nobody at conferences mentions it, it's probably not reaching your target audience.
The cascade strategy (done right)
Most people cascade wrong. They start at the top of an impact factor list and work down. Each rejection feels like a demotion.
A better approach: before you submit anywhere, plan your first three targets.
Target 1: Your best-fit, highest-impact option. This is where the paper belongs if everything goes well.
Target 2: A strong alternative with a slightly different audience or a faster turnaround. Not a "lesser" journal. A different journal that's also a good fit.
Target 3: Your reliable option. A journal where you're confident the paper fits and the review process is efficient.
Having all three picked in advance does two things. First, when you get rejected from Target 1, you don't spiral into "where do I send this?" panic. You already know. Second, it forces you to think about fit for multiple journals upfront, which often reveals weaknesses in your framing that you can fix before the first submission.
When you move from Target 1 to Target 2, don't just resubmit the same paper. Adjust the framing. Different journals want different things emphasized. The introduction that frames your work as a broad biological discovery for Nature needs to be rewritten as a focused mechanistic contribution for a specialty journal. Same data, different story.
What about preprints?
Posting to a preprint server (bioRxiv, medRxiv, arXiv) before journal submission is increasingly common and, in most fields, totally fine. Almost all major journals now accept papers that have been posted as preprints.
Preprints can actually help your journal selection process:
- You get early feedback. Comments on your preprint can reveal framing problems or missed connections before you submit to a journal.
- You establish priority. If you're worried about being scooped, a preprint timestamps your work.
- Editors notice good preprints. Some journals actively scout preprints. If your work gets attention, an editor might invite you to submit. That's the best possible starting position.
The main exception is a handful of journals (some medical journals, for instance) that still have restrictions on preprints. Check the policy before posting.
Special situations
Your paper crosses multiple fields
Interdisciplinary work is hard to place because it doesn't fit neatly into any one journal's scope. The immunology part isn't deep enough for an immunology journal. The clinical part isn't extensive enough for a clinical journal. The computational part isn't novel enough for a bioinformatics journal.
For these papers, look for journals that explicitly publish interdisciplinary work. Journals like PNAS, eLife, Science Advances, and Nature Communications are designed for work that bridges fields. Alternatively, think about who the primary audience is. If clinicians need to see this, go clinical. If the method is the contribution, go to a methods journal. Pick the axis that matters most.
Negative results
Negative results are genuinely hard to publish, but there are homes for them. Some specialty journals have specific sections for negative results. Journals like PLOS ONE evaluate methodological rigor rather than novelty, making them more receptive to well-conducted negative studies. If your negative result directly contradicts a published positive result, some journals will find that very interesting.
Replication studies
Similar to negative results. Journals like eLife, PLOS Biology, and some specialty journals are increasingly interested in replication, especially of high-profile findings. Frame it as a replication study with additional insights, not as "we tried to do what they did and it didn't work."
You're a first-time author
Don't let anyone tell you that journals care who's submitting. Editors evaluate papers, not CVs. A first-author paper from an unknown lab at a small university will get the same screening as one from a famous lab at Harvard.
That said, your cover letter matters more when you don't have name recognition. Make the significance obvious. Don't assume the editor will connect the dots.
The cost of getting it wrong
Let me put some numbers on this.
Average time from submission to first decision at a typical journal: 6-12 weeks. If you get desk rejected, it's faster (1-2 weeks), but you still lose time reformatting and resubmitting.
Say you submit to three wrong journals before finding the right one. At 6 weeks each, that's 4-5 months wasted. During which your paper isn't published, isn't being cited, and isn't on your CV.
Now imagine you spent 3 hours upfront doing the journal selection process I described above and got it right on the first or second try. You'd have your paper published 3-4 months earlier.
In academic careers, 3-4 months matters. For job applications, grant deadlines, and tenure reviews, the difference between "published" and "under review" is enormous.
The time you invest in choosing the right journal is the most valuable time you can spend in the entire publication process. It's not glamorous work. It won't feel as productive as running one more experiment. But it determines how much of your other work actually sees the light of day.
A checklist before you submit
Run through this before you hit the submit button:
- Have I read 3-5 recent papers from this journal in my area?
- Does my paper match the scope, depth, and style of what they publish?
- Is the audience of this journal the audience that needs to see my work?
- Is my paper competitive at this specific journal (not just "good enough")?
- Do I have a Target 2 and Target 3 already identified?
- Is my cover letter specific to this journal (not a generic template)?
- Have I checked review timelines and open access policies?
If you can check all of these, you've done more homework than 90% of submitting authors. And you'll see the results in faster decisions, fewer rejections, and more time doing science instead of reformatting manuscripts.
Related reading
- Desk Rejected? Here's Why - wrong journal choice is the number one cause
- 5 Famous Papers That Were Desk Rejected - even Nobel winners picked the wrong journal
- How to Write a Cover Letter for Journal Submission - the cover letter has to explain why this journal
- 10 Signs Your Paper Isn't Ready to Submit - make sure the paper is ready before you pick the target
- Journal Acceptance Rates by Field - know the odds before you submit
The Bottom Line
Journal selection is the upstream decision that shapes everything else. A paper submitted to the right journal with the right framing gets through faster and with fewer revision cycles. If you're still uncertain about your target after working through this framework, a pre-submission diagnostic can give you an outside perspective.