Product Comparisons · 9 min read · Updated Mar 13, 2026

Rigorous AI Review 2026: Interesting ETH Project, But Read the Terms Carefully

Rigorous is interesting because it is explicit about being an ETH Zurich project exploring AI-supported review, but the terms make clear it is not formal peer review.

Senior Researcher, Oncology & Cell Biology

Author context

Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.


Quick answer: Rigorous is worth paying attention to if you want AI-generated methodological feedback from a project with serious academic roots. It is less compelling if you want a mature, privacy-maximal, accountable pre-submission review service.

Method note: This page was updated in March 2026 using Rigorous's public home, about, and terms pages. We did not upload a manuscript to the service for this update.

What Rigorous actually is

Rigorous publicly positions itself as an ETH Zurich project exploring how LLMs and AI agents can support scientific review and publication workflows.

That framing matters.

The site is not just selling "AI review." It explicitly presents itself as:

  • a project from ETH Zurich
  • a research-driven attempt to rethink scientific review in the AI era
  • a methodology-focused manuscript feedback tool

That makes Rigorous feel closer to a serious academic tooling project than a polished end-state commercial service.

What Rigorous does well

1. The methodology focus is clear

The home page emphasizes actionable feedback on methodology, clarity, and impact.

That is a good choice. Methodological weakness is one of the most common reasons a draft is not ready for reviewer scrutiny.

2. The academic provenance is real

Rigorous is explicit about the ETH Zurich origin and names the project founders on the About page. That makes the product feel less like anonymous AI marketing and more like a genuine research initiative.

3. The product claims are relatively restrained

Rigorous does not appear to promise magical reviewer replacement. The public language is closer to AI-supported review assistance than a claim that peer review has been solved.

That is a positive sign.

What buyers should watch carefully

1. The terms are more permissive than the homepage privacy line suggests

The home page says your manuscript is never shared and is processed securely by AI review modules.

The public terms add important detail:

  • manuscripts are temporarily stored on Backblaze
  • content is processed by third-party LLMs such as OpenAI APIs
  • use of the service means consenting to that third-party processing

That is not unusual for AI tools, but it is more specific and more permissive than a simple "fully private" reading would suggest.

2. The service explicitly says the output is not formal peer review

This point matters.

Rigorous's terms say the feedback is generated automatically by AI and does not constitute formal peer review.

That is the correct legal and practical framing. It also means buyers should not mistake the output for journal-calibrated external review.

3. The terms are cautious about confidentiality

The terms also say users are responsible for ensuring they do not share confidential, sensitive, or proprietary information.

That should make any lab with strict confidentiality needs pause and decide whether this tool fits their data-handling requirements.

Where Rigorous is strongest

Rigorous is a sensible fit if:

  • you want AI methodological feedback before submission
  • you are comfortable using a research-preview-style tool
  • you value the academic provenance more than polished commercial packaging
  • the manuscript is not so sensitive that third-party processing is a blocker

Where Rigorous falls short

1. It is still AI-only

Rigorous may be useful for surfacing issues. It does not give you accountable human judgment on novelty, journal fit, or likely reviewer objections.

2. It feels more like a project than a mature review service

That is not necessarily a flaw, but it changes expectations. Labs looking for predictable service operations, simple pricing, and clear review deliverables may find it less commercialized than alternatives.

3. The privacy position is mixed

The site has a reassuring top-line privacy message, but the underlying terms make clear the manuscript is processed by third-party infrastructure and models. That is a meaningful distinction.

Rigorous vs Manusights

This is the practical split:

  • "Can AI quickly flag methodology and structure issues here?" → Rigorous
  • "Is this manuscript actually ready for journal submission?" → Manusights

Rigorous is better for exploratory AI-assisted feedback.

Manusights is better when the decision needs accountability, field judgment, and a clearer service model.

For the direct comparison, read Manusights vs Rigorous AI Review.

Bottom line

Rigorous is one of the more credible AI review experiments because it is explicit about being a research-driven project and does not oversell itself as formal peer review.

That makes it interesting.

But the terms also make clear that third-party processing is involved and that the output is not formal peer review. For exploratory use, that can be fine. For high-stakes submissions, it is usually not enough on its own.


Sources

  1. Rigorous — home page
  2. Rigorous — about page
  3. Rigorous — terms of service
