Journal Guides · 7 min read · Updated Apr 2, 2026

Remote Sensing Submission Guide

Remote Sensing's submission process, first-decision timing, and the editorial checks that matter before peer review begins.

Senior Researcher, Oncology & Cell Biology

Author context

Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.

Readiness scan

Before you submit to Remote Sensing, pressure-test the manuscript.

Run the Free Readiness Scan to catch the issues most likely to stop the paper before peer review.

Check my readiness · See sample report · Or find your best-fit journal
Anthropic Privacy Partner. Zero-retention manuscript processing.
Submission at a glance

Key numbers before you submit to Remote Sensing

Acceptance rate, editorial speed, and cost context — the metrics that shape whether and how you submit.

Full journal profile
  • Impact factor: 4.1 (Clarivate JCR)
  • Acceptance rate: ~50-60% (overall selectivity)
  • Time to first decision: ~60-90 days median
  • Open access APC: ~$1,900-2,200 (gold OA option)

What acceptance rate actually means here

  • Remote Sensing accepts roughly 50-60% of submissions, but desk rejection runs higher.
  • Scope misfit and framing problems drive most early rejections, not weak methodology.
  • Papers that reach peer review face a different bar: novelty, rigor, and fit with the journal's editorial identity.

What to check before you upload

  • Scope fit — does your paper address the exact problem this journal publishes on?
  • Desk decisions are fast; scope problems surface within days.
  • Open access publishing costs ~$1,900-2,200 if you choose gold OA.
  • Cover letter framing — editors use it to judge fit before reading the manuscript.

Submission map

How to approach Remote Sensing

Use the submission guide like a working checklist. The goal is to make fit, package completeness, and cover-letter framing obvious before you open the portal.

Stage by stage, what to check:

  • 1. Scope: Manuscript preparation
  • 2. Package: Submission via MDPI system
  • 3. Cover letter: Editorial assessment
  • 4. Final check: Peer review

Quick answer: A strong Remote Sensing submission is not just a workable model on one case study. It is a paper with clear methodological or practical value, strong validation, and a reason readers outside one narrow setting should care.

This Remote Sensing submission guide focuses on the real pre-submit question: whether the manuscript is broad enough, rigorous enough, and remote-sensing-centered enough to survive the first editorial read.

From our manuscript review practice

Of manuscripts we've reviewed for Remote Sensing, satellite or airborne data papers where the algorithm is novel but validation uses only coarse-resolution reference datasets receive the most consistent desk rejections. The methodology is sound, but when ground truth comes from lower-resolution satellite data rather than high-resolution aerial imagery or field measurements, editors cannot verify accuracy claims.

Remote Sensing: Key Metrics

  • Impact Factor (Clarivate JCR 2024): 4.2
  • Acceptance rate: ~40%
  • Publisher: MDPI

Source: Clarivate Journal Citation Reports 2024; MDPI journal information

Remote Sensing is a broad-scope open-access MDPI journal covering all areas of remote sensing science. Its relatively high acceptance rate compared to Nature-family journals reflects the broad scope, but the editorial bar for validation rigor and methodological breadth is still applied consistently on first read.

What this page is for

This page is about package readiness, not post-upload workflow.

Use it when you are still deciding:

  • whether the validation package is strong enough
  • whether the paper is reproducible enough for editorial trust
  • whether the remote-sensing contribution is central enough
  • whether the package is stable enough to survive the first editorial read now

If you want the upload flow, early statuses, and where the process usually slows after submission, that belongs on the submission-process page.

If you are preparing a Remote Sensing submission, the main risk is not the portal. The main risk is sending a paper that is technically competent but too local, too thinly validated, or too weakly connected to the broader remote-sensing conversation.

Remote Sensing is realistic when four things are already true:

  • the paper has enough validation to look credible quickly
  • the contribution extends beyond one local application
  • remote sensing is central to the manuscript, not just the data source
  • the result is broad enough for a mixed remote-sensing audience

If one of those conditions is weak, the process often becomes much harder at editorial screening.

What should already be in the package

Before the formal submission starts, the package should already contain:

  • a clear statement of the remote-sensing contribution
  • validation that is easy to audit, not buried
  • fair baselines and appropriate metrics
  • a paper that explains what readers beyond the exact study area can reuse
  • methods and supplements that make the workflow reproducible enough to trust
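One lightweight way to keep those reproducibility pieces from staying loose is a manifest check over the submission package. This is an illustrative sketch only; the field names are hypothetical and are not an MDPI requirement.

```python
# Hypothetical minimal reproducibility manifest for a submission package;
# the required field names below are illustrative, not a journal policy.
REQUIRED = {"sensor", "acquisition_dates", "preprocessing", "code_availability",
            "reference_data", "evaluation_metrics"}

def missing_items(manifest):
    """Return required reproducibility fields that are absent or left empty."""
    return sorted(k for k in REQUIRED if not manifest.get(k))

manifest = {
    "sensor": "Sentinel-2 MSI",
    "acquisition_dates": "2023-04 to 2023-09",
    "preprocessing": "atmospheric correction, cloud mask",
    "code_availability": "",          # empty: flagged as missing
    "reference_data": "field plots, n=120",
    "evaluation_metrics": "RMSE, MAE, bias",
}
print(missing_items(manifest))  # ['code_availability']
```

Anything the script flags is exactly what an editor has to guess about on first read.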

When those pieces are still loose, the problem is not the portal. It is that the package is not ready for Remote Sensing yet.

What the journal is actually screening for

Remote Sensing covers many topics, but editors are still making a focused early judgment:

  • does the paper belong in remote sensing?
  • is the validation strong enough?
  • is the contribution reusable or at least broadly informative?
  • does the manuscript read like a complete article rather than a one-off application?

The broad scope helps if your paper connects method, data, and interpretation clearly. It hurts if the manuscript relies on the breadth of the journal to excuse weak positioning.

Strong fit shape

The strongest submissions usually have:

  • a clear remote-sensing or geospatial contribution
  • a validation strategy readers can trust
  • enough methodological or practical insight to matter beyond one site
  • a paper structure that helps a broad audience understand the significance

This does not require a universal model or a global study. It requires the result to travel beyond the exact local use case.

Weak fit shape

The most common shape problem is a manuscript that feels like:

  • a routine application of an existing workflow
  • an under-benchmarked model paper
  • an environmental case study where remote sensing is only incidental
  • a narrow demonstration without broader methodological consequence

Those papers may still be useful, but they are harder to defend in this journal.

1. The validation logic

Validation is one of the fastest screening questions in Remote Sensing.

Editors want to see:

  • fair baselines
  • clear error analysis
  • sensible benchmark choices
  • enough data or evaluation detail to make the claim believable

If the validation is shallow, the paper feels weaker immediately, no matter how interesting the result sounds.
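Those validation questions can be made concrete with a quick self-check. The sketch below (plain Python; the reference and retrieval values are hypothetical) computes the bias, MAE, and RMSE figures editors expect to see reported for both the proposed method and a fair baseline; none of the names or numbers come from the journal's requirements.

```python
import math

def error_report(pred, ref):
    """Basic accuracy metrics for paired prediction/reference values."""
    resid = [p - r for p, r in zip(pred, ref)]
    n = len(resid)
    bias = sum(resid) / n                      # mean error (systematic offset)
    mae = sum(abs(e) for e in resid) / n       # mean absolute error
    rmse = math.sqrt(sum(e * e for e in resid) / n)
    return {"bias": round(bias, 3), "mae": round(mae, 3), "rmse": round(rmse, 3)}

# Hypothetical field-measured reference vs. two candidate retrievals
reference = [10.0, 12.5, 9.8, 14.2, 11.1]
proposed  = [10.3, 12.1, 10.0, 14.8, 11.0]
baseline  = [11.2, 13.9, 8.5, 15.9, 10.0]

for name, pred in [("proposed", proposed), ("baseline", baseline)]:
    print(name, error_report(pred, reference))
```

Reporting bias separately from RMSE matters here: a method can look accurate on RMSE while hiding a systematic offset that a reviewer will find immediately.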

2. The remote-sensing relevance

The manuscript should make it obvious why remote sensing is central to the story.

That can mean:

  • the sensing method is the contribution
  • the data-processing pipeline is the contribution
  • the interpretation of sensed information changes how the target problem is understood

If the paper would still be the same paper without the remote-sensing context, the fit is usually weaker.

3. The breadth of the payoff

Editors are also asking whether readers outside the exact case study will care. This is why transferability, methodological clarity, and discussion of broader use matter so much. A result that only works in one setting, for one sensor configuration, or on one particular dataset is harder to defend in a broad-scope journal even when the local execution is technically competent. The manuscript should explain what the lesson is, who can apply it, and under what conditions the approach is likely to remain valid.
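One concrete way to test whether the payoff travels is to report error per study site rather than only pooled across all samples. This is a minimal sketch, assuming paired prediction/reference values tagged by site; the site labels and numbers are hypothetical.

```python
import math
from collections import defaultdict

def rmse(resid):
    return math.sqrt(sum(e * e for e in resid) / len(resid))

def per_site_rmse(records):
    """records: (site, predicted, reference) tuples. Returns pooled and per-site RMSE."""
    by_site = defaultdict(list)
    for site, pred, ref in records:
        by_site[site].append(pred - ref)
    pooled = rmse([e for resid in by_site.values() for e in resid])
    return pooled, {site: round(rmse(r), 3) for site, r in by_site.items()}

# Hypothetical validation records from three study sites
records = [
    ("A", 10.2, 10.0), ("A", 12.9, 12.5),
    ("B", 9.6, 9.8),   ("B", 14.1, 14.2),
    ("C", 13.0, 11.1), ("C", 16.4, 14.6),   # errors concentrated at site C
]
pooled, per_site = per_site_rmse(records)
# A pooled RMSE that looks fine can hide one site where the method breaks down
print(round(pooled, 3), per_site)
```

If one site dominates the error budget, the transferability claim needs to be narrowed or the failure mode explained before submission.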

What editors notice quickly

  • The paper is local, but the framing pretends it is broad: editors notice quickly when the broader lesson is not actually earned by the evidence. Claiming broad relevance without transfer evidence weakens the package on first read and is one of the most common reasons a technically competent paper still stalls.
  • The validation exists, but it is too hard to audit: if the strongest comparisons and limitations are buried in supplements rather than visible in the main manuscript, the package looks weaker than it should. Accessible validation is easier to trust than validation that requires assembly from scattered materials.
  • Remote sensing is only the data source: if the paper would read the same way with another data source, the fit argument is usually weak. Remote sensing should be central to the contribution, not incidental to the analysis of a domain-science question.
  • Reproducibility is too thin: black-box modeling, vague pipeline description, or weak benchmark detail make trust harder on first read. Editors expect enough methodological transparency to evaluate whether the workflow can be reused or extended by others.

Common pre-submit mistakes

The most common avoidable mistakes are:

  • writing a local case study as if the local setting alone proves broader value
  • using standard methods with weak benchmarking
  • failing to explain why the paper belongs in remote sensing rather than an adjacent domain journal
  • hiding validation limits
  • assuming a broad journal means broad editorial tolerance

These mistakes often slow the process before reviewers even enter the picture.

Readiness check

Run the scan while Remote Sensing's requirements are in front of you.

See how this manuscript scores against Remote Sensing's requirements before you submit.


What editors want to believe before review

Before the paper goes out, the editor usually wants to believe:

  • the remote-sensing contribution is central, not incidental
  • the validation package will stand up to a critical read
  • the lesson of the paper travels beyond one site or project
  • the manuscript already knows its limits and has framed them honestly

That combination is what makes a broad-scope remote-sensing paper feel reviewer-ready rather than merely interesting.

Make the transferable lesson explicit

Even if the study is local, the manuscript should make clear what others can reuse: a workflow, a validation approach, a comparative result, or a generalizable interpretation. State the transferable element explicitly rather than leaving it implied. If the main takeaway only works for readers studying the exact same region, crop type, or sensor configuration, the manuscript needs another layer of framing that extracts the broader methodological lesson and makes it easy for a remote-sensing reader from a different application domain to see why the paper is relevant to their own work.

Stress-test the benchmark logic

Before submission, ask:

  • are the baselines fair?
  • are the evaluation metrics appropriate?
  • is the comparison with prior work explicit?
  • would a skeptical reviewer say the validation is underpowered?

If those answers are not reassuring, the paper usually needs more work first.
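Part of that stress test can be done mechanically before submission. The sketch below, with hypothetical residuals, compares the proposed method against a baseline on the same test samples and reports the mean paired gain and the fraction of samples on which the proposed method actually wins; an improvement carried by a handful of samples shows up as a high mean gain with a mediocre win rate.

```python
def paired_comparison(proposed_err, baseline_err):
    """Per-sample absolute-error comparison on the SAME test samples."""
    diffs = [abs(b) - abs(p) for p, b in zip(proposed_err, baseline_err)]
    mean_gain = sum(diffs) / len(diffs)           # > 0 means proposed is better on average
    win_rate = sum(d > 0 for d in diffs) / len(diffs)
    return round(mean_gain, 3), round(win_rate, 3)

# Hypothetical residuals from the proposed method and a published baseline
proposed_err = [0.3, -0.4, 0.2, 0.6, -0.1, 0.5]
baseline_err = [1.2, -1.4, 0.1, 1.7, -1.1, 0.4]

gain, wins = paired_comparison(proposed_err, baseline_err)
print(gain, wins)
```

The paired framing is the point: comparing aggregate scores computed on different test splits is exactly the kind of unfair baseline a skeptical reviewer will flag.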

Make the remote-sensing contribution central

The introduction, methods framing, and conclusion should all point to the same thing: why this is a remote-sensing paper and why that matters. That means the abstract should not bury the remote-sensing element beneath domain-science context, the methods should explain what the use of remotely sensed data enables rather than just listing sensor specifications, and the conclusion should frame the contribution as a remote-sensing advance rather than only a result for one study area. Editors screening across many submission types can tell quickly when remote sensing is the real contribution and when it is the data infrastructure for a different scientific question.

Make the validation readable, not just present

Many remote-sensing manuscripts technically contain enough benchmarking but still make the editor work too hard to see it. Put the strongest comparisons, error logic, and fairness of baselines where they are easy to find. A good validation package only helps if the editorial read can recognize it quickly.

A quick submission table

  • Is validation convincing? Stronger: fair baselines, clear error logic, enough evidence. Weaker: thin benchmarks and weak comparisons.
  • Does the result travel? Stronger: readers can reuse the method or lesson. Weaker: the value ends at one local case.
  • Is remote sensing central? Stronger: sensing and interpretation drive the contribution. Weaker: remote sensing is only incidental.
  • Is the audience broad enough? Stronger: the paper speaks to many remote-sensing readers. Weaker: the manuscript stays too narrow.

What to check in the submission package itself

Even broad-scope journals still read the package for confidence. Before you submit, make sure the package itself shows discipline:

  • the title makes the remote-sensing contribution obvious
  • the abstract says what is reusable, not only what happened in one study area
  • the first figure or table proves the validation logic early
  • the cover letter explains why the paper belongs in Remote Sensing rather than a neighboring environmental or engineering journal

Those signals matter because a broad-scope journal does not want to guess why your paper belongs there.

How to judge whether the paper is broad enough

One of the most useful pre-submit checks for Remote Sensing is to ask what a reader outside your exact topic will take from the paper.

That answer should be concrete:

  • a transferable workflow
  • a benchmarking lesson
  • a validation standard other teams can reuse
  • a domain insight that changes how sensed information should be interpreted

If the answer is only “this worked in our study area,” the manuscript is still too narrow for a strong editorial first read.

When Remote Sensing is the wrong target even if the study is publishable

The paper is often a weak fit when:

  • remote sensing is only the data source and not the scientific contribution
  • the analysis is mostly a local case study with little methodological carryover
  • the benchmarking is too light to support a broad audience claim
  • the manuscript would make more sense in a domain-specific application journal

That does not mean the work is weak. It means the editorial fit argument is weak, which is often enough to slow the submission immediately.

Final checklist before upload

  • the remote-sensing contribution is obvious on page one
  • the validation package is strong and easy to audit
  • the manuscript explains what readers can reuse
  • the work belongs in remote sensing, not just an adjacent application area
  • the introduction and conclusion make the same argument about why the paper matters

If all five are true, the submission is much more likely to look review-ready.

That checklist is especially useful for this journal because broad scope can hide weak positioning. A paper that passes those five tests usually looks deliberate enough to survive the first editorial screen.

Where to go next

Submit If

  • the validation logic is strong and easy to audit: fair baselines, clear error analysis, sensible benchmark choices, and enough evidence to make the claim believable
  • remote sensing is central to the contribution, not incidental: the sensing method or data-processing pipeline is the advance, not just the data source
  • readers outside the exact case study will understand what they can reuse: a transferable workflow, benchmarking lesson, or decision principle
  • the paper goes beyond routine monitoring to deliver methodological or practical insight that differs from simply documenting what was measured at one site

Think Twice If

  • the validation package is too local to credibly benchmark the method: evidence comes only from one location, dataset, or set of operating conditions
  • remote sensing is only the data infrastructure for a domain-science question; the paper would read the same with another data source
  • the lesson is too narrowly tied to one local context, crop type, or sensor configuration without establishing broader methodological carryover
  • the paper is mostly a descriptive case study without a broader scientific consequence that transforms how the target problem should be understood

In our pre-submission review work

In our pre-submission review work with manuscripts targeting Remote Sensing, five patterns generate the most consistent desk rejections; all five are worth knowing before you submit.

  • Validation package too local to credibly benchmark the method (roughly 35%). The Remote Sensing instructions for authors position the journal as a venue for research with clear methodological or practical value that extends beyond one study setting, requiring that the validation package be strong enough for a broad remote-sensing audience to evaluate the claim rather than accepting it on the basis of a single case study. In our experience, roughly 35% of desk rejections involve manuscripts where the method or workflow is technically competent but the validation is restricted to one location, one dataset, or one set of conditions: baselines are not comparable to what the field uses elsewhere, error analysis is thin or site-specific, and there is no evidence that the result would survive evaluation against a different study area or a different dataset representative of the broader class of problems the paper addresses. Remote Sensing editors evaluate whether the claim is believable for a mixed readership that includes researchers with very different application contexts, and manuscripts where the validation is convincing only for the authors' own use case consistently fail the editorial standard the journal applies before sending a manuscript for review.
  • Remote sensing used as data source rather than as the contribution (roughly 25%). In our experience, roughly 25% of submissions present environmental, ecological, or domain-science results in which the sensing data are the input to the analysis but the scientific contribution is entirely within the domain science rather than within remote sensing: the paper would read the same way if the data came from field measurements or model output, the remote-sensing methodology adds no scientific insight beyond providing the input dataset, and the framing of the contribution does not explain what the use of remotely sensed data enables that could not have been achieved otherwise. Remote Sensing expects manuscripts in which the sensing method, data pipeline, or interpretation of sensed information is central to the scientific contribution rather than incidental to a domain-science study, and submissions where remote sensing appears in the title and data section but not in the scientific advance are consistently identified as mismatched to the journal's editorial scope.
  • Transferable lesson absent from a technically competent application (roughly 20%). In our experience, roughly 20% of submissions demonstrate that a method or workflow performs well in one study area but do not provide a transferable lesson that readers in other application contexts can act on: the paper shows results for one site but does not identify what conditions make the approach work, what failure modes the user needs to avoid, or what parameters need to be recalibrated for a different environment. Remote Sensing publishes across many application domains and expects papers to provide value to readers who are not studying the same geographic area or the same application context, and manuscripts that are technically complete within their own study but do not generalize beyond it consistently face skepticism about whether the contribution is broad enough to merit space in a broad-scope journal.
  • Benchmarking data buried or missing fair baselines for comparison (roughly 15%). In our experience, roughly 15% of submissions contain the benchmarking data needed to evaluate the claim but present it in a way that makes the evaluation difficult: comparisons are buried in supplementary material rather than presented in the main figures, the choice of baselines is not explicitly justified against what the relevant literature would consider appropriate, or the error analysis describes absolute performance without contextualizing it against what alternative methods achieve on the same data. Remote Sensing editors and reviewers evaluate whether the advance over existing methods is credible and easy to verify on a fast read of the main manuscript, and papers where the evidence for the claimed improvement requires careful assembly from supplementary tables and figures consistently look less convincing than papers where the benchmarking story is visible in the main display items.
  • Cover letter treats the application as the core contribution (roughly 10%). In our experience, roughly 10% of submissions include cover letters that describe the environmental or application context of the study, the domain-science importance of the topic, and the relevance of the results to a specific management or policy question without explaining what specifically the manuscript contributes to remote sensing as a scientific field. Remote Sensing editors assess whether the paper advances the ability of the remote-sensing community to sense, process, or interpret remotely acquired data more effectively, and cover letters that describe application value without articulating the remote-sensing scientific advance consistently correlate with manuscripts where the central contribution has not been clearly defined even within the paper itself.

SciRev community data on author-reported review times provide additional benchmarks when planning your submission timeline.

Before you submit to Remote Sensing, a submission readiness check identifies whether your validation package, methodological contribution, and breadth of relevance meet the editorial bar.

Editors consistently screen submissions against these patterns before sending to peer review, so addressing them before upload reduces desk-rejection risk.

Frequently asked questions

How do I submit to Remote Sensing?

Remote Sensing uses the MDPI online submission portal. Prepare a manuscript with clear methodological or practical value, strong validation, and a reason readers outside one narrow setting should care. Choose the right section and article type, upload the manuscript with supporting files, and complete the author metadata.

What kind of paper does Remote Sensing want?

Remote Sensing wants papers with clear methodological or practical value, strong validation, and broad relevance beyond one narrow case study. A workable model on one case study alone is insufficient. The journal requires manuscripts that are remote-sensing-centered and rigorous enough to survive the first editorial read.

Is Remote Sensing open access?

Yes, Remote Sensing is an open-access journal published by MDPI. Accepted articles require an article processing charge (APC). The journal publishes across all areas of remote sensing science and technology.

What are the most common submission mistakes?

Common mistakes include presenting a single case study without broader validation, weak methodological novelty, insufficient reproducibility information, manuscripts that are not genuinely remote-sensing-centered, and papers where readers outside one narrow setting would not find value.

References

  1. Remote Sensing journal homepage, MDPI.
  2. Remote Sensing instructions for authors, MDPI.
  3. MDPI ethics and publication policies, MDPI.

Final step

Submitting to Remote Sensing?

Run the Free Readiness Scan to see score, top issues, and journal-fit signals before you submit.
