Sensors submission guide
Sensors's submission process, first-decision timing, and the editorial checks that matter before peer review begins.
Senior Researcher, Oncology & Cell Biology
Author context
Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.
Readiness scan
Before you submit to Sensors, pressure-test the manuscript.
Run the Free Readiness Scan to catch the issues most likely to stop the paper before peer review.
Key numbers before you submit to Sensors
Acceptance rate, editorial speed, and cost context — the metrics that shape whether and how you submit.
What acceptance rate actually means here
- Sensors accepts roughly ~50-60% of submissions — but desk rejection runs higher.
- Scope misfit and framing problems drive most early rejections, not weak methodology.
- Papers that reach peer review face a different bar: novelty, rigor, and fit with the journal's editorial identity.
What to check before you upload
- Scope fit — does your paper address the exact problem this journal publishes on?
- Desk decisions are fast; scope problems surface within days.
- Open access publishing costs ~$2,100 CHF if you choose gold OA.
- Cover letter framing — editors use it to judge fit before reading the manuscript.
How to approach Sensors
Use the submission guide like a working checklist. The goal is to make fit, package completeness, and cover-letter framing obvious before you open the portal.
Stage | What to check |
|---|---|
1. Scope | Manuscript preparation |
2. Package | Submission via MDPI system |
3. Cover letter | Editorial assessment |
4. Final check | Peer review |
Quick answer: Submitting to Sensors is manageable at the portal level and harder to get right at the editorial level. The MDPI platform is straightforward. The real filter is whether the paper presents a real sensing advance with credible evidence the platform works in conditions close to actual use. Papers tested only in idealized buffer conditions or lacking selectivity and real-sample validation consistently fail at the first editorial pass.
From our manuscript review practice
Of manuscripts we've reviewed for Sensors, sensor-design papers where device performance meets specifications but real-world deployment context is untested receive the most consistent desk rejections. The characterization in controlled conditions is thorough, but when the sensor has not been validated in field conditions, against interference sources, or with actual users, editors see lab validation without practical proof.
Sensors: Key Metrics
Metric | Value |
|---|---|
Impact Factor (per Clarivate JCR 2024) | 3.4 |
Acceptance rate | ~45% |
Publisher | MDPI |
Source: Clarivate Journal Citation Reports 2024; MDPI journal information
Sensors is one of the largest sensor-focused open-access journals, covering chemical, biosensor, physical, and environmental sensing across a broad range of materials, platforms, and detection strategies.
Sensors Key Submission Requirements
Requirement | Details |
|---|---|
Submission system | |
Article types | Article, Review, Communication, Letter |
Word limit | No strict limit; Articles typically 5,000-8,000 words |
Figures | High-resolution files; calibration, selectivity, and real-sample figures required |
Cover letter | Required; must explain the sensing advance and why it exceeds an incremental variant |
APC | Required for all accepted articles (open access journal) |
Quick answer: how to submit to Sensors
Submitting to Sensors is easy at the portal level and easy to underestimate at the editorial level. The submission system itself is straightforward: choose the right section, prepare the manuscript and metadata cleanly, upload the figures and supporting files, and complete the author disclosures through the MDPI platform. The real friction is not the website. It is whether the paper looks complete enough, application-aware enough, and experimentally credible enough to survive the first editorial pass.
Sensors is broad and comparatively accessible, but the editors still screen for one practical thing: does this paper present a real sensing advance with evidence that the platform works in conditions close to actual use? A manuscript that looks elegant only in idealized buffer conditions or only on one narrow proof-of-concept setup usually loses momentum fast.
Before you open the submission portal
Work through this checklist before you upload:
- Confirm the manuscript belongs in the right section and article type. A strong sensor paper can still look misplaced if it is routed into the wrong thematic area.
- Make sure the title and abstract state the sensing problem, the platform, and the practical value without hype.
- Check whether the paper includes selectivity, stability, reproducibility, and realistic-sample evidence where the claim requires it.
- Verify that calibration plots, limit-of-detection logic, and figure labels are internally consistent.
- Prepare a cover letter that explains why the paper is more than an incremental sensor variant.
- Make sure ethics, conflicts, funding, data availability, and author metadata are complete before entering the system.
The fastest way to create avoidable delay at Sensors is to treat the manuscript like a rough preprint package and hope the portal handles the rest. It will not.
Step-by-step submission flow
Step | What to do | What often goes wrong |
|---|---|---|
1. Choose article type and section | Match the paper to the most accurate scope bucket. | Authors pick a broad label that hides the manuscript's actual audience. |
2. Finalize title, abstract, and keywords | Make the sensing advance and use case visible early. | The abstract sounds technical but never explains why the sensor matters. |
3. Prepare figures and supplementary files | Label calibration, selectivity, and real-sample figures clearly. | Supplement files become a data dump instead of a useful support package. |
4. Upload manuscript and metadata | Enter author details, affiliations, funding, conflicts, and data statements carefully. | Small metadata mistakes create admin back-and-forth later. |
5. Review the generated proof | Check equation formatting, symbol rendering, figure order, and table placement. | Sensor papers often rely on notation and units that break quietly in system proofs. |
6. Submit and monitor editorial follow-up | Respond quickly if the office requests clarification or file cleanup. | Slow responses signal a package that was not truly ready. |
The portal is simple enough that most authors can finish the mechanics quickly. What slows the process is usually that the manuscript still has unresolved scientific packaging issues.
Common mistakes and avoidable delays
The same avoidable problems appear repeatedly in Sensors submissions:
- Reporting impressive sensitivity without adequate selectivity testing.
- Demonstrating analyte detection only in idealized solutions, not in realistic matrices.
- Describing a sensor architecture without showing practical operating stability.
- Presenting one device or one batch as if it proves reproducibility.
- Framing a small technical tweak as a field-level sensing advance.
- Uploading figures whose legends do not make the operating conditions or sample context obvious.
These are not just reviewer problems. Editors can usually spot them at the initial screen because they signal that the paper is not submission-ready yet.
Readiness check
Run the scan while Sensors's requirements are in front of you.
See how this manuscript scores against Sensors's requirements before you submit.
What editors and reviewers will notice first
Editors at Sensors are effectively reading the paper with a practical engineering and application filter.
What editors are actually screening for
Editorial criterion | What passes | Desk-rejection trigger |
|---|---|---|
Clear sensing problem | The manuscript identifies a specific sensing target, environment, and application from the first page; the use case is obvious without requiring the editor to infer it from the methods section | The paper describes a detection mechanism without explaining what real measurement problem it solves; the application is deferred to the discussion or described only as "potential future use" |
Characterization completeness | Selectivity, stability, repeatability, and real-sample evidence are present at the level the performance claim requires; the editor can assess whether the sensor works in conditions close to actual use | The paper reports sensitivity or limit of detection without selectivity testing, real-sample spiking, or matrix validation; the characterization is complete only under idealized buffer conditions |
Mechanism or signal logic | The paper explains why the signal behaves as reported and how it scales with analyte concentration; the sensing rationale is physically plausible and experimentally supported | Performance data is reported without a mechanistic explanation for the selectivity or sensitivity observed; the sensor appears to work but the paper cannot explain why it works |
Reproducibility | Performance data is shown across multiple independently fabricated devices, sample batches, or measurement sessions; variability between devices is reported explicitly | Sensor performance is shown for a single fabricated device or single measurement session; reproducibility is assumed rather than demonstrated across the platform |
What a stronger Sensors package looks like
A stronger submission usually has these traits:
- the abstract states both the sensor advance and the practical context
- the main figures separate calibration, selectivity, and real-sample performance clearly
- the methods make replication look possible rather than mysterious
- the discussion explains limitations honestly instead of inflating the application
- the cover letter explains why the paper matters to a broad sensor audience rather than only to one materials niche
That is important because many Sensors papers are not rejected for lack of technical effort. They are rejected because the package does not persuade the editor that the sensor is complete, useful, and reproducible enough to merit review.
What to put in the cover letter
The best cover letters for Sensors do not simply repeat the abstract. They make the editorial case in practical terms. State the sensing problem directly: explain what measurement or detection problem the paper improves and why that problem matters outside your own laboratory setup. If the work modifies an existing sensing strategy, explain why the change matters in terms of capability, deployment, reproducibility, or application range rather than making a generic novelty claim. Point to the most important validation evidence, especially real-sample testing, stability data, or deployment-relevant results, because those details make the package feel credible rather than proof-of-concept. If the manuscript could also fit in Biosensors and Bioelectronics, Analytica Chimica Acta, Small, or Journal of Materials Chemistry A, explain why Sensors is the right venue for the particular audience and contribution.
How to decide whether the paper is ready now
Before submitting, run through this editorial check. The paper is probably ready if the main figures already answer the obvious selectivity and reproducibility questions, the practical application is visible without a long explanation, the supplement supports the paper instead of carrying the real story, and another group could plausibly replicate the device or assay from the methods as written. The paper probably needs more work if the strongest evidence is still one calibration curve or one limit-of-detection number, if you are relying on discussion text to create application relevance the data do not directly show, or if reviewers would need to infer stability, repeatability, or matrix tolerance rather than see it clearly in the figures. Also ask whether the paper would feel important to readers outside your immediate sensor niche; if the answer is no, the manuscript may need broader application framing or a more specialized journal target before submission.
Where authors usually overestimate readiness
Most authors overestimate readiness when the package has one visually strong result and several weaker validation layers around it. Sensors editors see that pattern often. A manuscript can look compelling to the lab that built the platform and still look premature to an editor who wants to know whether the device, assay, or architecture is robust enough to justify external review.
That is why readiness should be judged by the weakest necessary support element, not by the strongest chart in the paper.
Submit If
- the sensor solves a recognizable measurement problem
- the manuscript includes realistic performance evidence, not only idealized tests
- selectivity, stability, and reproducibility are addressed at the level your claims require
- the figures tell a coherent story without forcing the reader to hunt through the supplement
- you can explain why the paper is more than another material-plus-signal combination
Fix first if
- the paper depends mainly on a striking sensitivity number
- the real-world application is still mostly aspirational
- the study has one strong calibration plot but weak validation elsewhere
- the mechanism discussion is too thin for the confidence of the claims
- the package would frustrate a reviewer trying to reproduce the work
What to read next
- Sensors submission process
- Sensors impact factor
- Is Sensors a good journal?
- How to avoid desk rejection at Sensors
Before you upload, run your manuscript through a Sensors submission readiness check to catch the issues editors filter for on first read.
Submit If
- the manuscript identifies a specific sensing target, environment, and application from the first page so the measurement problem is obvious
- selectivity, stability, repeatability, and real-sample evidence are present at the level the performance claim requires: not only idealized buffer conditions
- reproducibility is demonstrated across multiple independently fabricated devices or measurement sessions with explicit variability reporting
- the paper explains why the signal behaves as reported and how it scales with analyte concentration with physical plausibility and experimental support
Think Twice If
- sensitivity or limit-of-detection numbers are impressive but selectivity testing against interferents or real-sample spiking is missing or thin
- sensor performance is demonstrated only in idealized aqueous buffer conditions without validation in realistic sample matrices
- sensor performance is shown for a single fabricated device or single measurement session without reproducibility statistics across independently fabricated sensors
- the sensing mechanism or signal origin is described vaguely without experimental or theoretical support for why the signal works as claimed
In our pre-submission review work
In our pre-submission review work with manuscripts targeting Sensors, five patterns generate the most consistent desk rejections worth knowing before submission.
- Sensitivity reported without selectivity or real-sample evidence (roughly 35%). The Sensors instructions for authors position the journal as publishing work that addresses real sensing problems with evidence the platform works under conditions relevant to practical use, requiring that submissions demonstrate not only that a sensor can detect a target analyte but that the detection holds in the presence of interferents and in sample matrices representative of the intended application. In our experience, roughly 35% of desk rejections involve manuscripts where impressive sensitivity or low limits of detection are the primary result but the submission does not include selectivity testing against common interferents, real-sample spiking, or matrix validation that would tell the editor whether the sensor is practically useful beyond a clean buffer control. Editors specifically screen for manuscripts where the sensing claim is backed by the kind of evidence a deployment-ready sensor would need, not only the kind of evidence that demonstrates the working principle.
- Sensor performance demonstrated only in idealized buffer conditions (roughly 25%). In our experience, we find that roughly 25% of submissions characterize sensor response exclusively in idealized aqueous buffer conditions, phosphate-buffered saline, or similarly clean laboratory media without testing the platform in real or realistic sample matrices such as serum, urine, environmental water, or food extracts that represent the stated application target. In practice, Sensors editors assess whether the practical performance claim is supported by evidence that goes beyond the proof-of-concept stage, and manuscripts where the entire characterization is performed under laboratory-optimal conditions without any attempt to test the platform in a realistic sample context are consistently identified as failing to meet the journal's applied sensing standard regardless of how technically clean the buffer-phase results are.
- Reproducibility not shown across multiple devices or sample batches (roughly 20%). In our experience, roughly 20% of submissions present sensor performance data derived from a single fabricated device, a single measurement session, or a single synthesis batch without providing reproducibility statistics across multiple independently fabricated sensors, multiple operators, or multiple sample preparations. Sensors editors are specifically looking for manuscripts where the reproducibility of the sensing platform is demonstrated rather than assumed, and submissions where the variability between devices, between batches, or between measurement sessions is not reported or where performance is presented as if a single device result is representative of the platform are consistently identified as failing to support the claim that the sensor is a reliable and deployable sensing technology.
- Sensing mechanism or signal origin not explained adequately (roughly 15%). In our experience, roughly 15% of submissions report strong analytical performance without providing a physically plausible and experimentally supported explanation for why the sensor produces the observed signal, how the signal scales with analyte concentration, or what interaction or transduction mechanism accounts for the selectivity and sensitivity claimed. Sensors reviewers and editors expect to understand not only that the signal changes but why it changes in the way the data show, and manuscripts where the sensing mechanism is described only vaguely or where the signal origin is treated as self-evident without experimental or theoretical support are consistently identified as editorially incomplete for a journal that expects sensing work to be mechanistically grounded.
- Cover letter asserts novelty without explaining the sensing advance (roughly 10%). In our experience, roughly 10% of submissions arrive with cover letters that claim the sensor represents a novel advance in sensitivity, selectivity, or fabrication without clearly explaining what specific measurement problem the platform solves, why the performance improvement matters in the context of the intended application, and how the submission differs from the closest comparable sensors already published in Sensors or nearby journals. Editors use the cover letter to assess whether the manuscript has a compelling sensing identity beyond incremental material or method variation, and letters that assert novelty at a general level without making the application case and the comparative advantage concrete consistently correlate with manuscripts that also fail to articulate why the sensor matters beyond a technical demonstration.
Before submitting to Sensors, a Sensors submission readiness check identifies whether your sensing evidence, real-sample validation, and platform reproducibility meet the editorial bar before you commit to the submission.
Frequently asked questions
Sensors uses the MDPI submission platform. Choose the right section and article type, prepare the manuscript and metadata cleanly, upload figures and supporting files, and complete author disclosures. A cover letter should explain why the paper is more than an incremental sensor variant.
Sensors screens for one practical thing: does the paper present a real sensing advance with evidence the platform works in conditions close to actual use? Papers need selectivity, stability, reproducibility, and realistic-sample evidence. Manuscripts that look elegant only in idealized buffer conditions usually lose momentum fast.
Yes, Sensors is an open-access journal published by MDPI. Accepted articles require an article processing charge (APC). The journal is broad and comparatively accessible but still screens submissions for completeness and application-awareness.
Common mistakes include treating the manuscript like a rough preprint package, submitting sensor work tested only in idealized conditions without realistic-sample evidence, inconsistent calibration plots and limit-of-detection logic, routing the paper into the wrong thematic section, and missing selectivity, stability, or reproducibility data where the claim requires it.
Sources
Final step
Submitting to Sensors?
Run the Free Readiness Scan to see score, top issues, and journal-fit signals before you submit.
Anthropic Privacy Partner. Zero-retention manuscript processing.
Where to go next
Start here
Same journal, next question
- How to Avoid Desk Rejection at Sensors
- Sensors submission process
- Is Your Paper Ready for Sensors? MDPI's Cross-Disciplinary Sensing Journal
- Sensors Impact Factor 2026: 3.5, Q2, Rank 24/79
- Is Sensors a Good Journal? Fit Verdict
- Sensors APC and Open Access: CHF 2,600, Discounts, and Whether the Fee Makes Sense
Supporting reads
Conversion step
Submitting to Sensors?
Anthropic Privacy Partner. Zero-retention manuscript processing.