Is Your Paper Ready for Bioinformatics? The Computational Biology Tool Standard
Bioinformatics (Oxford) requires novel algorithms with reproducible, freely available code. Learn about the ~25-30% acceptance rate, the Application Note format, and how the journal compares to BMC Bioinformatics.
Senior Researcher, Oncology & Cell Biology
Author context
Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
If you've built a computational biology tool, written an algorithm for sequence analysis, or developed a new method for structural prediction, Bioinformatics (Oxford) is almost certainly on your shortlist. Published by Oxford University Press since 1985 (originally as CABIOS, renamed Bioinformatics in 1998), it's the standard venue where the computational biology community goes to find working software and tested algorithms. SAMtools, BWA, TopHat, and hundreds of other tools that define daily bioinformatics workflows were first described in its pages. That track record isn't accidental. The journal has a specific editorial identity: it publishes methods that people actually use.
But publishing there isn't just about having a working tool. You'll need to clear an algorithmic novelty bar, meet strict code availability standards, and pick the right manuscript format. Here's what that looks like in practice.
Bioinformatics at a glance
Bioinformatics accepts roughly 25-30% of submissions and publishes around 1,500 papers per year across Original Papers and Application Notes. The impact factor sits at approximately 5.8, which doesn't sound dramatic until you realize it's the top-tier venue for computational method papers. Review turnaround runs 4-8 weeks for most submissions.
| Metric | Value |
|---|---|
| Impact Factor (2024 JCR) | ~5.8 |
| Annual published papers | ~1,500 |
| Acceptance rate | ~25-30% |
| Time to first decision | 4-8 weeks |
| Peer review type | Single-blind |
| Open access option | Yes (hybrid), APC ~$3,500 |
| Publisher | Oxford University Press |
| Indexed in | PubMed, Web of Science, Scopus |
| Self-archiving | Accepted manuscript after 12-month embargo (green OA) |
That acceptance rate is deceptive. It doesn't capture how many papers are returned because the code won't install, the GitHub link is dead, or the method is really just an existing pipeline with a new name and a web interface. Those aren't formal rejections in some tracking systems, but they're functionally the same thing.
What the editors are screening for
Bioinformatics editors aren't reading your paper the way a biology journal editor would. They don't care if your results are biologically exciting. They care whether your method is new, whether it works, and whether someone else can run it.
Algorithmic novelty is non-negotiable. This is the single biggest source of rejections. If your tool wraps existing algorithms in a new interface, that's not enough. If you've built a pipeline that chains together BLAST, MUSCLE, and a phylogenetic tree builder with some glue scripts, that's a workflow, not a method. The editors want to see a genuinely new computational approach, or at the very minimum, a substantial improvement to an existing algorithm with formal analysis of why it's better.
Here's a test: can you describe what your algorithm does differently in one paragraph without mentioning any existing tool by name? If you can't, you probably haven't introduced enough novelty for this journal.
Benchmarking must be rigorous and fair. You can't just show your tool is faster than one competitor on one dataset. Editors expect comparisons against multiple established methods, on multiple datasets, using metrics appropriate to the problem. And they'll notice if you've cherry-picked datasets where your method happens to shine. Use standard benchmarks when they exist. If you're proposing a new benchmark, justify why the existing ones aren't adequate.
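As a concrete sketch of what "multiple methods, multiple datasets" looks like in practice, here is a minimal benchmarking harness. Everything in it is a placeholder: the method names, the toy datasets, and the simple accuracy metric stand in for real tools, standard benchmarks, and problem-appropriate metrics.

```python
import time

def accuracy(pred, truth):
    """Fraction of positions where the prediction matches the ground truth."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def run_benchmark(methods, datasets):
    """Run every method on every dataset; record accuracy and runtime."""
    results = []
    for ds_name, (data, truth) in datasets.items():
        for m_name, method in methods.items():
            start = time.perf_counter()
            pred = method(data)
            elapsed = time.perf_counter() - start
            results.append({
                "dataset": ds_name,
                "method": m_name,
                "accuracy": accuracy(pred, truth),
                "runtime_s": elapsed,
            })
    return results

# Stand-ins for real tools: each "method" maps input data to predictions.
methods = {
    "our_tool": lambda data: [x > 0.5 for x in data],
    "baseline": lambda data: [x > 0.7 for x in data],
}
datasets = {
    "simulated": ([0.1, 0.6, 0.8, 0.4], [False, True, True, False]),
    "real":      ([0.2, 0.9, 0.55, 0.3], [False, True, True, False]),
}
for row in run_benchmark(methods, datasets):
    print(row)
```

The point of the full cross-product loop is that it makes cherry-picking visible: every method runs on every dataset, and the table that comes out is the table that goes in the paper.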
The code has to work. Period. This sounds obvious, but it's where a shocking number of submissions fall apart. Bioinformatics editors and reviewers will actually try to install your software. If your GitHub repo has a broken build, missing dependencies, or documentation that assumes the user has your exact computing environment, you're getting sent back. More on this below.
Application Notes: the format most people underestimate
Application Notes are Bioinformatics' most distinctive format, and they're worth understanding even if you don't think they apply to you.
An Application Note is a 2-page paper (roughly 1,300 words plus one figure) describing a new software tool or database. That's it. Two pages. No room for extensive benchmarking, no space for biological case studies, no elaborate introduction. Just: here's the tool, here's what it does, here's how to get it.
Why would anyone want to publish a 2-page paper? Because Application Notes in Bioinformatics are some of the most cited papers in all of computational biology. SAMtools, BWA, Trimmomatic, and dozens of other standard tools were published as Application Notes. They're short, but they're the papers people actually cite when they use your software.
The format works well when your contribution is primarily a well-engineered piece of software rather than a new algorithm. You've built something that solves a practical problem, it works reliably, and the community needs it. You don't need to prove you've invented a new algorithmic concept. You need to prove your tool fills a gap and is ready for others to use.
Application Note requirements are strict:
- The software must be freely available for academic use
- Source code must be accessible (GitHub, Bitbucket, or similar)
- A working URL must be provided and verified before publication
- Documentation must be sufficient for independent installation and use
- One figure maximum, one table maximum
My advice: if you're on the fence between an Original Paper and an Application Note, ask yourself whether the story is really about the algorithm or about the software. If a reviewer stripped out everything except the computational approach, would there be enough novelty for 8 pages? If not, the Application Note format is probably the honest choice, and it's not a lesser one.
Code availability: what Bioinformatics actually checks
Most journals have code availability policies. Bioinformatics enforces theirs. This isn't a suggestion buried in the author guidelines that everyone ignores. Reviewers receive specific instructions to test whether software can be installed and run.
Here's what you need before submitting:
A public repository with source code. GitHub is the standard, but GitLab and Bitbucket work too. The repo shouldn't be empty or contain only compiled binaries. Source code means source code.
Installation instructions that actually work. Have someone outside your lab try to install your tool from scratch, on a clean machine, following only your README. If they can't do it in under 30 minutes, your documentation isn't ready. This is the most common failure point I see. Developers test installation on their own machines, where all the dependencies are already in place, and assume it'll work everywhere.
Dependencies that are pinned and documented. Don't just list "Python 3" as a requirement. Specify the version. List every package. Better yet, provide a conda environment file, a Docker container, or both. Reviewers who hit dependency conflicts won't spend an hour debugging your build system. They'll recommend rejection.
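As one way to pin an environment, here is a sketch of a conda `environment.yml` for a hypothetical tool called `mytool`; all package names and versions are illustrative, not a recommendation for any real project.

```yaml
# environment.yml -- pinned environment for a hypothetical tool "mytool"
name: mytool
channels:
  - conda-forge
  - bioconda
dependencies:
  - python=3.11
  - numpy=1.26.4
  - pysam=0.22.0
  - pip
  - pip:
      - mytool==1.2.0
```

A reviewer can then reproduce your environment with `conda env create -f environment.yml`; a Dockerfile that builds on the same file gives them a second, even more reproducible path.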
Test data and expected outputs. Include a small test dataset and the expected results. This lets reviewers verify the software runs correctly without needing your full dataset or deep domain knowledge of the specific biological problem.
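A minimal smoke test along these lines can live in the repository itself. Everything here is illustrative: `run_tool` stands in for the real executable, and the checksum comparison assumes the tool's output is deterministic (tools with nondeterministic output need tolerance-based comparisons instead).

```python
import hashlib
import tempfile
from pathlib import Path

def run_tool(input_path: Path, output_path: Path) -> None:
    """Stand-in for the real tool: uppercases each sequence line."""
    lines = input_path.read_text().splitlines()
    output_path.write_text("\n".join(line.upper() for line in lines) + "\n")

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Ship a tiny input and the expected output with the repo; the smoke
# test reruns the tool on the shipped input and compares checksums.
workdir = Path(tempfile.mkdtemp())
(workdir / "test_input.txt").write_text("acgt\nttga\n")
(workdir / "expected_output.txt").write_text("ACGT\nTTGA\n")

run_tool(workdir / "test_input.txt", workdir / "actual_output.txt")
match = sha256(workdir / "actual_output.txt") == sha256(workdir / "expected_output.txt")
print("smoke test passed" if match else "smoke test FAILED")
```

A test like this takes a reviewer seconds to run and answers the question they care about most: does the software produce the output the authors say it does.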
A working URL. This sounds trivial, but papers have been delayed or rejected because the web server hosting the tool went down during review. If you're running a web service, make sure it's on stable infrastructure, not your graduate student's laptop.
Original Papers: the full algorithmic story
Original Papers are the longer format (typically 7-8 pages) and carry higher expectations for both novelty and validation. This is where you'd submit a new alignment algorithm, a novel approach to variant calling, or a statistical method for differential expression analysis.
The bar is fundamentally different from an Application Note. You aren't just showing that software works. You're arguing that a computational idea is better than what existed before, and you need formal evidence.
What reviewers expect in an Original Paper:
- A clear problem statement that the community recognizes as important
- A description of your method with enough detail that someone could reimplement it
- Benchmarking against at least 2-3 established methods
- Results on at least 2-3 datasets, ideally including both simulated and real data
- Runtime and memory analysis (not just accuracy)
- An honest discussion of limitations
That last point matters more than people realize. If your method doesn't work well on small sample sizes, say so. If it's slower than a competitor but more accurate, quantify the trade-off. Reviewers trust authors who are upfront about limitations far more than authors who claim their method is better in every dimension.
How Bioinformatics compares to competing journals
The computational biology publishing landscape has several strong venues, and picking the right one matters.
| Factor | Bioinformatics (OUP) | BMC Bioinformatics | PLOS Computational Biology | Genome Biology | NAR |
|---|---|---|---|---|---|
| Impact Factor (2024) | ~5.8 | ~3.0 | ~4.3 | ~12.3 | ~16.6 |
| Acceptance rate | ~25-30% | ~40-50% | ~20-25% | ~15-20% | ~30-40% |
| Best for | Novel algorithms and tools | Application-focused tools | Methods with biological insight | Genomics methods with broad impact | Databases and web servers |
| Code requirements | Strict, tested by reviewers | Required but less rigorously tested | Required | Required | Required for Web Server Issue |
| APC (OA) | ~$3,500 | ~$2,490 | ~$2,900 | ~$4,490 | ~$3,100 |
Bioinformatics vs. BMC Bioinformatics. This is the most common comparison. BMC Bioinformatics is more receptive to tool papers that apply existing methods in new combinations. It doesn't demand the same level of algorithmic novelty. If your contribution is primarily engineering rather than algorithmic, BMC Bioinformatics is a realistic alternative. But the prestige gap is real, and most computational biologists notice where a tool was published.
Bioinformatics vs. PLOS Computational Biology. PLOS Comp Biol wants the biology to matter, not just the computation. If your paper is really about a biological discovery enabled by a new method, PLOS Comp Biol might be a better fit. If the paper is about the method itself, Bioinformatics is the natural home.
Bioinformatics vs. Genome Biology. Genome Biology (IF ~12.3) sits a tier above in impact and selectivity. It's the right target if your method addresses a problem in genomics and has broad enough applications to interest experimentalists, not just other tool developers. Don't submit a niche algorithm to Genome Biology. But if you've built the next GATK or the next DESeq2, it's worth the shot.
Bioinformatics vs. NAR. Nucleic Acids Research has its annual Web Server Issue and Database Issue, which are dedicated venues for exactly those types of tools. If you've built a web-accessible resource or database, NAR's specialized issues might actually be a better fit than Bioinformatics.
Common rejection patterns
After reading enough reviewer reports and talking to colleagues, certain failure patterns come up again and again at Bioinformatics.
The "pipeline paper." You've strung together five existing tools, added a config file, and called it a method. There's nothing wrong with building pipelines, but Bioinformatics isn't where they get published. If there's no new computational idea underneath the pipeline, editors will catch it at triage.
Benchmarking against straw men. You compare your deep learning model against a 15-year-old BLAST-based approach and declare victory. Reviewers will ask why you didn't compare against the current state of the art. They'll also check whether you used default parameters for competitors or actually tuned them fairly.
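Tuning competitors fairly doesn't require much machinery. This sketch (all names and numbers hypothetical) grid-searches a competitor's threshold on a held-out tuning set before the comparison, rather than running it at an unfavorable default:

```python
def f1(pred, truth):
    """F1 score for binary predictions against ground truth."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def competitor(data, threshold=0.9):  # 0.9 is its (unfavorable) default
    return [x > threshold for x in data]

# Tune the competitor on a held-out tuning set before the head-to-head,
# so the comparison reflects its best configuration, not its worst.
tune_data = [0.2, 0.45, 0.6, 0.8]
tune_truth = [False, False, True, True]

best_threshold, best_score = None, -1.0
for threshold in [0.3, 0.5, 0.7, 0.9]:
    score = f1(competitor(tune_data, threshold), tune_truth)
    if score > best_score:
        best_threshold, best_score = threshold, score
print(f"best competitor threshold: {best_threshold} (F1={best_score:.2f})")
```

Reporting the tuning protocol in the paper (what grid, what tuning data) is what convinces reviewers the comparison wasn't rigged.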
The "it's faster" paper without accuracy analysis. Speed is great, but not at the cost of correctness. If your tool is 10x faster but produces worse results on edge cases, reviewers want to see that trade-off quantified.
Dead links and broken installs. I can't overstate this. A reviewer who spends 45 minutes trying to get your software running and fails isn't going to write a kind review. Test your installation process on a fresh machine before submitting.
Biological validation as an afterthought. You've shown your method works on simulated data, which is necessary. But you haven't shown it works on real biological data, which is also necessary. Bioinformatics reviewers want to see both.
Pre-submission self-assessment
Before you submit, run through these honestly:
- Can you describe your algorithmic contribution without referencing existing tools? If your novelty is "like BLAST but for proteins" or "like DESeq2 but with a different prior," you need to articulate what's genuinely new about the computational approach.
- Can someone outside your lab install and run your software in 30 minutes? If you haven't tested this, you aren't ready to submit. Full stop.
- Have you benchmarked against the current state of the art? Not last decade's state of the art. The current one. On standard datasets.
- Does your benchmarking include both simulated and real data? Simulated data lets you measure accuracy when you know the ground truth. Real data proves the method works in practice.
- Is your source code publicly available right now? Not "will be available upon publication." Now. Bioinformatics wants the code accessible at submission time.
- Have you included runtime and memory benchmarks? A method that's 2% more accurate but requires 100x more memory might not be useful in practice. Quantify the resource requirements.
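One lightweight way to gather runtime and memory numbers for a Python tool is sketched below. Note the caveat: `tracemalloc` tracks only Python-heap allocations, so tools built on C extensions or external processes are better measured with `/usr/bin/time -v` or similar. The workload here is a stand-in.

```python
import time
import tracemalloc

def profile(func, *args):
    """Return (result, wall-clock seconds, peak Python-heap bytes) for one call."""
    tracemalloc.start()
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

def toy_method(n):
    """Stand-in workload: sums the squares of the first n integers."""
    return sum(x * x for x in range(n))

result, seconds, peak_bytes = profile(toy_method, 100_000)
print(f"result={result}, runtime={seconds:.4f}s, peak memory={peak_bytes / 1024:.1f} KiB")
```

Run the same harness across input sizes and you get the scaling curve reviewers actually want to see, not a single timing on one dataset.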
A pre-submission manuscript review can help catch framing issues, missing benchmarks, and clarity problems before your paper reaches reviewers who'll test your code.
Bottom line
Bioinformatics isn't looking for the most biologically interesting result. It's looking for the best computational methods. Your algorithm needs to be genuinely new, not just a repackaging of existing approaches. Your software needs to actually work when someone else tries to install it. And your benchmarking needs to be honest, thorough, and fair to competitors. If you can check those three boxes, you're in good shape. If you can't, spend the time fixing the gaps rather than hoping reviewers won't notice. They will.
Sources
- Bioinformatics author guidelines, Oxford University Press: https://academic.oup.com/bioinformatics/pages/instructions-for-authors
- Clarivate Journal Citation Reports, 2024 edition
- Oxford University Press open access pricing: https://academic.oup.com/bioinformatics/pages/open-access