Product Comparisons · 4 min read · Updated Apr 20, 2026

Rigorous AI Review 2026: Interesting ETH Project, But Read the Terms Carefully

Rigorous is interesting because it is explicit about being an ETH Zurich project exploring AI-supported review, but the terms make clear it is not formal peer review.

Senior Researcher, Oncology & Cell Biology

Author context

Specializes in manuscript preparation and peer review strategy for oncology and cell biology, with deep experience evaluating submissions to Nature Medicine, JCO, Cancer Cell, and Cell-family journals.

Readiness scan

Find out what this manuscript actually needs before you pay for a larger service.

Run the Free Readiness Scan to see whether the real issue is scientific readiness, journal fit, figures, citations, or language support before you buy editing or expert review.

Diagnose my paper · See sample report · Or find your best-fit journal

Anthropic Privacy Partner. Zero-retention manuscript processing.

Quick answer: Rigorous is worth paying attention to if you want AI-generated methodological feedback from a project with serious academic roots. It is less compelling if you want a mature, privacy-maximal, accountable pre-submission review service.

Method note: This page was updated in March 2026 using Rigorous's public home, about, and terms pages. We did not upload a manuscript to the service for this update.

In our pre-submission review work

In our pre-submission review work, Rigorous is one of the more credible AI-review experiments because the current public site still reads like a research project rather than an overclaimed commercial shortcut. The live homepage emphasizes methodology and impact, and the public terms are unusually direct about third-party LLM processing and the fact that the output is not formal peer review.

That honesty is useful, but it also narrows the right use case. We would treat Rigorous as an exploratory logic and methodology pass, not as the final submission gate for a sensitive or career-critical manuscript.

What Rigorous actually is

Rigorous publicly positions itself as an ETH Zurich project exploring how LLMs and AI agents can support scientific review and publication workflows.

That framing matters.

The site is not just selling "AI review." It explicitly presents itself as:

  • a project from ETH Zurich
  • a research-driven attempt to rethink scientific review in the AI era
  • a methodology-focused manuscript feedback tool

That makes Rigorous feel closer to a serious academic tooling project than a polished end-state commercial service.

1. The methodology focus is clear

The home page emphasizes actionable feedback on methodology, clarity, and impact.

That is a good choice. Methodological weakness is one of the most common reasons rough drafts are not ready for reviewer scrutiny.

2. The academic provenance is real

Rigorous is explicit about the ETH Zurich origin and names the project founders on the About page. That makes the product feel less like anonymous AI marketing and more like a genuine research initiative.

3. The product claims are relatively restrained

Rigorous does not appear to promise magical reviewer replacement. The public language is closer to AI-supported review assistance than a claim that peer review has been solved.

That is a positive sign.

Where the terms deserve a closer read

1. The terms are more permissive than the homepage privacy line suggests

The home page says your manuscript is never shared and is processed securely by AI review modules.

The public terms add important detail:

  • manuscripts are temporarily stored on Backblaze
  • content is processed by third-party LLMs such as OpenAI APIs
  • use of the service means consenting to that third-party processing

That is not unusual for AI tools, but it is more specific and more permissive than a simple "fully private" reading would suggest.

2. The service explicitly says the output is not formal peer review

This point matters.

Rigorous's terms say the feedback is generated automatically by AI and does not constitute formal peer review.

That is the correct legal and practical framing. It also means buyers should not mistake the output for journal-calibrated external review.

3. The terms are cautious about confidentiality

The terms also say users are responsible for ensuring they do not share confidential, sensitive, or proprietary information.

That should make any lab with strict confidentiality needs pause and decide whether this tool fits their data-handling requirements.

Where Rigorous is strongest

Rigorous is a sensible fit if:

  • you want AI methodological feedback before submission
  • you are comfortable using a research-preview-style tool
  • you value the academic provenance more than polished commercial packaging
  • the manuscript is not so sensitive that third-party processing is a blocker

Where Rigorous falls short

1. It is still AI-only

Rigorous may be useful for surfacing issues. It does not give you accountable human judgment on novelty, journal fit, or likely reviewer objections.

2. It feels more like a project than a mature review service

That is not necessarily a flaw, but it changes expectations. Labs looking for predictable service operations, simple pricing, and clear review deliverables may find it less commercialized than alternatives.

3. The privacy position is mixed

The site has a reassuring top-line privacy message, but the underlying terms make clear the manuscript is processed by third-party infrastructure and models. That is a meaningful distinction.

Rigorous vs Manusights

This is the practical split:

Question | Better fit
"Can AI quickly flag methodology and structure issues here?" | Rigorous
"Is this manuscript actually ready for journal submission?" | Manusights

Rigorous is better for exploratory AI-assisted feedback.

Manusights is better when the decision needs accountability, field judgment, and a clearer service model.

Capability comparison

Capability | Rigorous | Manusights
Methodology and logic feedback | Stronger | Present, but not the main pitch
Formal managed review service posture | No | Yes
Citation verification against live databases | No | Yes
Figure-level analysis | No | Yes
Journal-specific readiness scoring | No | Yes
Privacy posture for sensitive manuscripts | Mixed | Stronger

For the direct comparison, read Manusights vs Rigorous AI Review.

Before choosing any service, run the free manuscript readiness check; it takes 1-2 minutes, scores desk-reject risk for your target journal, and identifies the top issues at no cost. The $29 Manusights diagnostic adds citation verification against 500M+ papers (CrossRef, PubMed, arXiv), vision-based figure analysis of every panel, section-by-section scoring on a 1-5 scale, journal-fit ranking with alternatives, and a prioritized A/B/C experiment fix list. For career-critical submissions, Manusights expert review ($1,000+) provides a named, field-matched scientist with 12-18 specific revision recommendations and cover letter strategy.

Choose Rigorous if:

  • you want AI-generated methodological feedback from a project with genuine academic roots
  • you are comfortable with third-party LLM processing of your manuscript text
  • you are using it as an exploratory check, not as your final pre-submission review
  • you value the ETH Zurich research framing over commercial polish

Think twice if:

  • your manuscript contains unpublished findings you need to protect before submission
  • you need formal pre-submission peer review with accountability (Rigorous is explicit that it is not that)
  • privacy is non-negotiable and you need zero-retention manuscript processing
  • you want journal-specific fit analysis rather than general methodology feedback

Readiness check

Find out what this manuscript actually needs before you choose a service.

Run the free scan to see whether the issue is scientific readiness, journal fit, or citation support before paying for more help.

Diagnose my paper · See sample report · Or find your best-fit journal

Bottom line

Rigorous is one of the more credible AI review experiments because it is explicit about being a research-driven project and does not oversell itself as formal peer review.

That makes it interesting.

But the terms also make clear that third-party processing is involved and that the output is not formal peer review. For exploratory use, that can be fine. For high-stakes submissions, it is usually not enough on its own.

  • Manusights vs Rigorous AI Review
  • Best pre-submission manuscript review service
  • AI peer review vs human expert review

Before you submit

A desk-rejection risk and journal-fit check identifies the specific framing and scope issues most likely to trigger a desk rejection, before you submit.

Frequently asked questions

What is Rigorous?

Rigorous is an ETH Zurich-linked AI manuscript review project that emphasizes methodology, clarity, and impact feedback before submission. Its public positioning reads more like a research-driven tool than a mature managed review service.

Is Rigorous formal peer review?

No. Rigorous's public terms explicitly say the feedback is generated automatically by AI and does not constitute formal peer review. It is better understood as AI-assisted pre-submission feedback.

How does Rigorous handle manuscript privacy?

Rigorous states that manuscripts are processed with third-party LLMs such as OpenAI APIs and may be temporarily stored on infrastructure such as Backblaze. That is a meaningful distinction from zero-retention or reviewer-only handling models.

Who should use Rigorous?

Rigorous is most useful as an exploratory methodology and argument check before submission, especially for teams comfortable with a research-preview-style tool. It is less suitable as the final gate for confidential or career-critical submissions.

Sources

  1. Rigorous home
  2. Rigorous about
  3. Rigorous terms

Final step

Run the scan before you spend more on editing or external review.

Use the Free Readiness Scan to get a manuscript-specific signal on readiness, fit, figures, and citation risk before choosing the next paid service.


