Manuscript Preparation · 11 min read · Updated Apr 27, 2026

Pre-Submission Review for Robotics Papers

Robotics papers need pre-submission review that checks hardware, experiments, baselines, videos, safety, reproducibility, and journal fit.

Senior Researcher, Oncology & Cell Biology

Author context

Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.

Readiness scan

Find out if this manuscript is ready to submit.

Run the Free Readiness Scan before you submit. Catch the issues editors reject on first read.

Check my manuscript · See sample report · Or find your best-fit journal
Anthropic Privacy Partner. Zero-retention manuscript processing.
Working map

How to use this page well

These pages work best when they behave like tools, not essays. Use the quick structure first, then apply it to the exact journal and manuscript situation.

  • Use this page for: getting the structure, tone, and decision logic right before you send anything out.
  • Most important move: make the reviewer-facing or editor-facing ask obvious early rather than burying it in prose.
  • Common mistake: turning a practical page into a long explanation instead of a working template or checklist.
  • Next step: use the page as a tool, then adjust it to the exact manuscript and journal situation.

Quick answer: Pre-submission review for robotics papers should test whether the robot platform, task definition, control or learning method, hardware assumptions, baselines, experiments, ablations, videos, safety analysis, reproducibility package, and target journal fit support the manuscript's robotics claim. Robotics reviewers are skeptical when an algorithm result is not proven on the embodied system it claims to improve.

If you need a manuscript-specific readiness diagnosis, start with the AI manuscript review. If the paper is mainly a model or benchmark without embodied validation, see pre-submission review for machine learning or pre-submission review for artificial intelligence.

Method note: this page uses IEEE Transactions on Robotics author information, International Journal of Robotics Research submission guidance, ACM Transactions on Human-Robot Interaction scope signals, IEEE robotics multimedia expectations, and Manusights robotics review patterns reviewed in April 2026.

What This Page Owns

This page owns robotics-specific pre-submission review. It applies to robot learning, manipulation, locomotion, navigation, control, planning, perception for robotics, field robotics, autonomous vehicles, human-robot interaction when the robot system dominates, surgical robotics methods, multi-robot systems, and embodied AI papers where hardware, sensing, actuation, or task execution is central.

Intent → best owner:

  • Robotics manuscript needs field critique → this page
  • General ML method dominates → machine learning review
  • Human study and interaction design dominate → HCI or HRI-focused review
  • Medical procedure dominates → surgery review
  • Computer vision benchmark dominates → computer vision review

The boundary is embodied system evidence.

What Robotics Reviewers Check First

Robotics reviewers often ask:

  • what robot, sensors, actuators, controller, and environment were used?
  • is the task defined tightly enough to evaluate success?
  • are baselines tuned fairly and matched to the robot setup?
  • do experiments test real hardware, simulation, or both?
  • are ablations, failure cases, and safety limits shown?
  • do videos or multimedia evidence support the claims?
  • can another lab reproduce the setup, policy, controller, or evaluation?
  • does the paper fit IEEE Transactions on Robotics, IJRR, Science Robotics, ACM THRI, ICRA or IROS extensions, or an AI venue?

The manuscript has to prove the method works as robotics, not only as code.
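The reviewer questions above are, in effect, a checklist. As a minimal sketch, they can be turned into a gap report; the check strings and the `readiness_gaps` helper are invented for this page and are not part of any journal's actual submission tooling.

```python
# Illustrative sketch only: the reviewer questions above as a checklist,
# with a helper that reports which checks a draft does not yet satisfy.

REVIEWER_CHECKS = [
    "robot, sensors, actuators, controller, and environment described",
    "task defined tightly enough to evaluate success",
    "baselines tuned fairly and matched to the robot setup",
    "experiments cover real hardware, simulation, or both (and say which)",
    "ablations, failure cases, and safety limits shown",
    "videos or multimedia evidence support the claims",
    "setup, policy, controller, and evaluation are reproducible",
    "target venue fits the contribution (T-RO, IJRR, Science Robotics, THRI, ICRA/IROS, AI)",
]

def readiness_gaps(answers):
    """Return the checks the manuscript does not yet satisfy.

    `answers` maps a check string to True/False; unanswered checks
    count as gaps, mirroring how reviewers treat missing detail.
    """
    return [check for check in REVIEWER_CHECKS if not answers.get(check, False)]

# Example: a draft that has everything except multimedia evidence.
draft = {check: True for check in REVIEWER_CHECKS}
draft["videos or multimedia evidence support the claims"] = False
print(readiness_gaps(draft))
```

The point of the sketch is the default: anything the manuscript does not explicitly answer counts against it, which matches how reviewers read missing hardware or task detail.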

In Our Pre-Submission Review Work

In our experience, robotics manuscripts most often fail when the real-world system evidence is too thin for the strength of the claim.

Task-definition gap: success is reported, but the task, constraints, objects, environment, and failure conditions are not precise enough.

Hardware opacity: sensor calibration, robot limits, controller settings, payload, latency, friction, or safety constraints are underreported.

Simulation overreach: simulation results are written as if they demonstrate hardware robustness.

Baseline weakness: the new method beats a weak, undertuned, or poorly adapted comparison.

Video-evidence gap: the paper describes behavior that reviewers need to see, but multimedia evidence is missing or too polished to reveal failures.

A useful review should identify the first robot-specific objection that would make reviewers doubt the result.

Public Field Signals

IEEE Transactions on Robotics warns that failure to follow author guidelines can result in return without review. IJRR asks authors to address data availability and supports multimedia extensions, which matters in robotics because behavior, failure modes, and task execution are often easier to judge in video than in prose. ACM Transactions on Human-Robot Interaction explicitly spans robotics, computer science, engineering, design, and behavioral and social sciences, so HRI submissions need both system and human-evidence discipline.

These signals point to the same readiness standard: robotics manuscripts need experimental traceability, not only formal method description.

Robotics Review Matrix

Review layer | What it checks | Early failure signal
Robot system | Platform, sensors, actuators, controller, latency | Hardware details are thin
Task | Environment, objects, constraints, success, failure | Evaluation target keeps moving
Experiments | Real robot, simulation, field, ablations, repetitions | Result is too narrow
Baselines | Tuning, fairness, implementation, classical controls | Comparison is easy to beat
Safety | Human risk, collision, failure recovery, limits | Real-world use is underdiscussed
Reproducibility | Code, parameters, logs, videos, datasets | Another lab could not rebuild it
Journal fit | T-RO, IJRR, Science Robotics, THRI, ICRA, AI | Audience mismatch

This matrix keeps the page distinct from AI and machine learning pages.

What To Send

Send the manuscript, target journal, robot platform specs, hardware setup photos, controller details, sensor calibration notes, task definition, simulation details, real-world experiment logs, baseline implementations, ablation plan, failure videos, safety notes, code and data availability plan, figures, supplement, and prior reviewer comments.

For HRI robotics papers, include participant protocol, consent, risk mitigation, task environment, interaction script, and analysis plan. For field robotics, include site conditions, weather, terrain, sensor failures, and recovery procedures.

What A Useful Review Should Deliver

A useful robotics pre-submission review should include:

  • robotics contribution verdict
  • task and system-definition critique
  • hardware, controller, and sensor reporting review
  • baseline, ablation, and experiment-design check
  • safety, failure, and multimedia-evidence review
  • reproducibility and data/code readiness note
  • journal-lane recommendation
  • submit, revise, retarget, or diagnose deeper call

The review should not only say "run more experiments." It should name the experiment that would change reviewer confidence.

Common Fixes Before Submission

Before submission, authors often need to:

  • define the robot task and success criteria more tightly
  • add hardware, controller, and sensor details
  • separate simulation evidence from real-world evidence
  • add failure analysis and ablations
  • strengthen or retune baselines
  • add videos that show representative successes and failures
  • clarify safety limits and human-risk controls
  • retarget from robotics to ML, HCI, automation, medical devices, or domain venues when the embodied contribution is secondary

These fixes make the robotics claim easier to trust.

Reviewer Lens By Paper Type

A manipulation paper needs object diversity, grasp or contact failure analysis, and repeatability. A navigation paper needs environment variation, localization failure handling, and path-planning comparison. A control paper needs stability, robustness, and hardware limits. A robot-learning paper needs simulation-to-real boundaries, baseline discipline, and ablation. An HRI robotics paper needs participant evidence and ethical safeguards. A field robotics paper needs site conditions and failure recovery. A surgical robotics methods paper needs safety and clinical-task framing without pretending it is already a clinical outcome study.

The AI manuscript review can flag whether the blocking risk is hardware detail, experiments, baselines, video evidence, or journal fit.

How To Avoid Cannibalizing AI Or HCI Pages

Use this page when the manuscript's submission risk depends on robot hardware, embodied validation, control, sensing, actuation, task execution, failure behavior, field deployment, or robotics venue fit. Use AI or machine learning review when the main contribution is a general algorithm. Use HCI review when the main claim is about user experience, interaction design, or human study evidence rather than the robot system.

That distinction keeps the page focused on the robotics buyer's actual problem.

What Not To Submit Yet

Do not submit a robotics paper if the task definition is loose. Reviewers need to know exactly what the robot had to do, under what constraints, and what counted as failure.

Also pause if the strongest result exists only in simulation. Simulation can be useful, but the manuscript should not imply embodied robustness unless the evidence supports that claim.

For robot-learning papers, pause if baselines are weak. A new policy that beats an undertuned controller or a poorly adapted prior model may not survive review.

For HRI or safety-relevant work, pause if participant risk, robot failure, or operator intervention is hidden. Robotics reviewers are often more forgiving of honest failures than of sanitized demonstrations.

Submit If / Think Twice If

Submit if:

  • robot system and task are clear
  • experiments match the claim
  • baselines and ablations are fair
  • videos support the behavior claims
  • safety and failure cases are visible
  • target journal matches the robotics contribution

Think twice if:

  • simulation is doing the main proof work
  • task success is underspecified
  • hardware assumptions are hidden
  • videos show only the best cases
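As a minimal sketch, the two lists above can be folded into one decision helper. The flag names and the three-way verdict are assumptions made for this example, not part of any actual submission workflow.

```python
# Illustrative sketch: the submit / think-twice criteria above as a tiny
# decision helper. Flag names are invented for this example.

SUBMIT_IF = [
    "system_and_task_clear",
    "experiments_match_claim",
    "baselines_and_ablations_fair",
    "videos_support_claims",
    "safety_and_failures_visible",
    "journal_matches_contribution",
]

THINK_TWICE_IF = [
    "simulation_does_main_proof",
    "task_success_underspecified",
    "hardware_assumptions_hidden",
    "videos_show_only_best_cases",
]

def recommendation(flags):
    """Return 'submit', 'revise', or 'pause' from boolean flags."""
    if any(flags.get(f, False) for f in THINK_TWICE_IF):
        return "pause"      # any red flag outranks the green ones
    if all(flags.get(f, False) for f in SUBMIT_IF):
        return "submit"
    return "revise"         # no red flags, but criteria incomplete

flags = {f: True for f in SUBMIT_IF}
flags["simulation_does_main_proof"] = True
print(recommendation(flags))   # a single red flag forces a pause
```

The design choice worth copying is the asymmetry: one "think twice" condition overrides every "submit if" condition, which is how robotics reviewers actually weigh simulation-only proof or sanitized videos.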

Readiness check

Run the scan to see how your manuscript scores on these criteria.

See score, top issues, and what to fix before you submit.


Bottom Line

Pre-submission review for robotics papers should protect the link between embodied evidence and robotics claim. The manuscript needs task clarity, system detail, fair experiments, failure analysis, reproducibility, and a journal target that fits the robot contribution.

Use the AI manuscript review if you need a fast readiness diagnosis before submitting a robotics paper.

  • https://www.ieee-ras.org/publications/t-ro/t-ro-information-for-authors/
  • https://journals.sagepub.com/author-instructions/ijr
  • https://www.acm.org/media-center/2017/july/thri-new-journal
  • https://www.ieee-ras.org/publications/t-mrb/information-for-authors

Frequently asked questions

What is a pre-submission review for robotics papers?

It is a field-specific review that checks whether a robotics manuscript is ready for journal submission, including robot platform, task definition, control method, experimental design, baselines, hardware limits, videos, safety, reproducibility, and journal fit.

What do robotics reviewers criticize most often?

They often attack weak real-world experiments, unclear task definition, undertuned baselines, missing ablations, insufficient hardware detail, poor failure analysis, unsupported generalization, missing multimedia evidence, and mismatch between robotics, HRI, automation, and AI venues.

How does robotics review differ from AI or machine learning review?

AI and machine learning review focus on models, datasets, benchmarks, and general algorithmic claims. Robotics review focuses on embodied systems, hardware constraints, control, sensing, actuation, task execution, safety, real-world validation, and robot-specific reproducibility.

When should I use a robotics pre-submission review?

Use it before submitting robot learning, manipulation, navigation, control, HRI, field robotics, surgical robotics, autonomous systems, or robot perception papers where experiments and journal fit could decide review.

Final step

Find out if this manuscript is ready to submit.

Run the Free Readiness Scan. See score, top issues, and journal-fit signals before you submit.

