Manusights vs Rigorous AI Review: Production Platform vs Research Project
Rigorous AI Review is a free ETH Zurich research project that sends manuscripts to OpenAI for processing. Manusights is a production platform with SOC 2 Type II certification and Anthropic zero-retention. Here is what each delivers, and the privacy difference that matters.
Founder, Manusights
Author context
Founder of Manusights. Writes on the pre-submission review landscape — what services actually deliver, how they compare, and where each one fits in a realistic manuscript workflow.
Readiness scan
Find out what this manuscript actually needs before you pay for a larger service.
Run the Free Readiness Scan to see whether the real issue is scientific readiness, journal fit, figures, citations, or language support before you buy editing or expert review.
Quick answer: Rigorous AI Review is a free academic research project from ETH Zurich that provides AI-generated methodology feedback. Manusights is a production review platform with citation verification, figure analysis, and journal-specific scoring. The most consequential difference isn't features; it's what happens to your manuscript data. Rigorous sends manuscripts to OpenAI for processing and stores them on Backblaze with no specified deletion timeline. Manusights uses Anthropic zero-retention with SOC 2 Type II certification.
Run the free manuscript readiness check in 60 seconds. It provides a readiness score and desk-reject risk, with SOC 2 Type II privacy.
Method note: This comparison was updated April 2026 using Rigorous's official website, terms of service, about page, and GitHub repository. We did not upload a manuscript for this update.
What Rigorous actually is
Rigorous AI Review was created by Robert Jakob and Kevin O'Sullivan as a research project at ETH Zurich, funded by ETH SPH (Swiss School of Public Health). The project is open-source under an MIT license, with all code available on GitHub.
The tool provides AI-generated feedback on methodology, clarity, and impact. Users upload a manuscript, specify a target journal, and receive a structured report. The web version (v0.2) launched at rigorous.review with an interactive interface and progress tracking.
Rigorous is genuinely interesting as an academic experiment. Researchers from ETH Zurich, Stanford, Harvard, MIT, Bern, and St. Gallen have tested the service. The focus on methodology feedback addresses a real gap in the drafting process.
But Rigorous is a research project, not a production service. Its own terms of service make this distinction explicit, and that distinction matters for two reasons: data handling and output reliability.
The privacy architecture difference
This is the most consequential distinction between the two platforms. Read Rigorous's terms carefully before uploading any manuscript.
What Rigorous's terms of service say:
- Manuscripts are "temporarily stored on secure servers provided by Backblaze, a third-party cloud storage service, to facilitate processing"
- Manuscript content is "processed using large language models (LLMs) such as OpenAI APIs to generate feedback", meaning data is "sent to and processed by OpenAI's systems"
- The service "may utilize anonymized and aggregated data from our service for research purposes" with findings potentially published in academic journals
- No deletion timeline is specified. The terms say "temporarily stored" but do not define how long manuscripts remain on Backblaze
- Users are told to ensure they don't submit "confidential, sensitive, or proprietary information"
That last point is worth pausing on. Rigorous itself tells you not to upload confidential manuscripts. If you're working with unpublished clinical trial data, proprietary pharmaceutical methods, patentable inventions, embargoed results, or institutional data under compliance requirements, Rigorous's own terms advise against using the service.
What Manusights's data handling looks like:
- Anthropic zero-retention partnership: manuscripts are processed once, then permanently deleted
- SOC 2 Type II certified infrastructure
- TLS 1.2+ encryption in transit, AES-256 encryption at rest
- Manuscripts are never used for model training
- No third-party storage; processing and deletion happen within the same certified pipeline
This isn't about which AI model is smarter. It's about what happens to your unpublished work after processing. For a researcher uploading a draft 6 weeks before submitting to Nature Medicine, the difference between "stored on Backblaze, processed by OpenAI, no deletion timeline" and "processed once via Anthropic zero-retention, permanently deleted, SOC 2 Type II certified" may be the only comparison that matters.
What each service actually delivers
Beyond privacy, the analytical scope differs substantially.
Rigorous provides:
- AI-generated comments on methodology, clarity, and impact
- Structured feedback report
- Free access with no stated limitations
Rigorous does not provide:
- Citation verification against any database
- Vision-based figure analysis
- Journal-specific readiness scoring or desk-reject risk
- Quantified readiness score
- Prioritized revision checklist (A/B/C by impact)
- Human expert escalation
- Any formal guarantee about output quality (their terms state the output "does not constitute formal peer review")
Manusights provides (at each tier):
- Free scan ($0): Readiness score (0-100), desk-reject risk, top issues, journal-fit verdict, 60 seconds
- AI diagnostic ($29): Citation verification against 500M+ papers (CrossRef, PubMed, OpenAlex, Semantic Scholar, bioRxiv, medRxiv), vision-based figure analysis, journal-specific scoring across 5 dimensions, prioritized A/B/C revision checklist, 30 minutes
- Expert review ($1,000+): Named field expert or CNS-level editor, full NDA, 3-7 day turnaround
Comparison table
| Capability | Manusights | Rigorous AI Review |
|---|---|---|
| Price | $0 free scan / $29 diagnostic / $1,000+ expert | Free |
| Citation verification (500M+ papers) | Yes ($29) | No |
| Figure analysis (vision-based) | Yes ($29) | No |
| Journal-specific desk-reject risk | Yes ($0 free scan) | No |
| Readiness score (0-100) | Yes ($0) | No |
| Methodology feedback | Yes (included in diagnostic) | Yes (primary focus) |
| Prioritized fix list (A/B/C) | Yes | No |
| Human expert escalation | Yes ($1,000+) | No |
| Data handling | SOC 2 Type II, Anthropic zero-retention, permanent deletion | Backblaze storage, OpenAI processing, no deletion timeline |
| Output disclaimer | Delivered as an actionable review | "Does not constitute formal peer review" |
| Open-source code | No | Yes (MIT license, GitHub) |
| Maturity | Production platform | Research project (v0.2) |
| Institutional backing | Commercial (SOC 2 audited) | Academic (ETH SPH funded) |
Which difference matters most by manuscript type
| If your manuscript looks like this | Better first tool | Why |
|---|---|---|
| Early draft, low confidentiality risk, mostly methodology questions | Rigorous | Free feedback on design, clarity, and impact is enough for a first pass |
| Submission-ready draft with citation, figure, and journal-fit risk | Manusights | You need a broader readiness check, not just methodology comments |
| Patentable, embargoed, or institutionally sensitive work | Manusights | Rigorous's own terms say not to submit confidential or proprietary information |
| Curiosity-driven comparison of AI review tools | Rigorous first, then Manusights | The pairing lets you compare free methodology feedback with a production readiness score |
What we see in pre-submission review work
In our pre-submission review work, the authors who are tempted by Rigorous usually split into two buckets.
The first bucket is early-stage academic drafting, where the team wants a free second opinion on methods and is not especially worried about confidentiality. Rigorous can make sense there.
The second bucket is much riskier: authors with selective-journal targets, unpublished data, or commercial sensitivity who are attracted by the free price but have not matched that choice to the manuscript's risk profile. That is where the decision usually gets made on the wrong variable. The real question is not whether zero dollars beats twenty-nine dollars. It is whether a research project with OpenAI processing and unspecified deletion is the right place to upload this particular draft.
Readiness check
Find out what this manuscript actually needs before you choose a service.
Run the free scan to see whether the issue is scientific readiness, journal fit, or citation support before paying for more help.
Where Rigorous is useful
Free methodology feedback on non-sensitive manuscripts. For a PhD student checking whether their experimental design has obvious flaws before showing the paper to their advisor, free is hard to beat. If the manuscript doesn't contain anything confidential and you're OK with the data handling, Rigorous provides useful methodology comments at zero cost.
Academic credibility. ETH Zurich is a world-class research institution. The project reflects genuine interest in how AI can support peer review, and the open-source approach (MIT license) promotes transparency.
Exploratory use. For researchers curious about what AI peer review looks like, Rigorous is a low-stakes way to experiment, provided the manuscript isn't sensitive.
Upcoming features. Rigorous's GitHub shows Agent2_Outlet_Fit (journal/conference fit evaluation) in development. If shipped, this would address one of the current gaps. Worth watching, though it's not available yet.
Where Manusights is the better choice
When privacy matters. Any manuscript with unpublished data, proprietary methods, patentable material, or institutional compliance requirements should use a service with certified privacy guarantees. Rigorous's own terms tell users not to submit confidential information. Manusights' SOC 2 Type II and Anthropic zero-retention are designed for exactly these manuscripts.
When you need more than methodology feedback. Rigorous provides methodology, clarity, and impact comments. That's one layer of review. Manusights adds citation verification, figure analysis, journal-specific scoring, and a prioritized revision checklist. For a manuscript 4 weeks from submission, you need all of these.
When you need a production-grade assessment. Rigorous explicitly says its output is not formal peer review. Manusights delivers a structured diagnostic report designed to inform submission decisions with quantified risk scoring.
When the stakes are high. For career-defining submissions where judgment calls about novelty and positioning can determine acceptance, human expert review matters. Manusights provides a path to named field experts ($1,000+) and CNS editors ($1,500-$2,000) with full NDA protection. Rigorous is AI-only.
Quick decision framework
Choose Rigorous if:
- The manuscript is not privacy-sensitive
- You want free AI methodology feedback
- You're comfortable with OpenAI processing and Backblaze storage
- You don't need journal-specific scoring, citation verification, or figure analysis
- You're in the early drafting stage and want a second opinion on experimental design
Choose Manusights if:
- The manuscript contains unpublished or proprietary data
- You need citation verification, figure analysis, or journal-specific scoring
- You need a quantified readiness assessment for a selective journal
- You want the option to escalate to human expert review
- You need production-grade privacy certifications for institutional compliance
The recommended approach
If your manuscript is not privacy-sensitive, try Rigorous for free methodology feedback. Then run the free Manusights readiness check (60 seconds) for readiness scoring and journal-fit assessment. If you need citation verification, figure analysis, or a prioritized fix list, add the $29 Manusights diagnostic.
If your manuscript is privacy-sensitive, skip Rigorous entirely. Start with Manusights. The free scan and $29 diagnostic both operate under SOC 2 Type II and Anthropic zero-retention protections.
Frequently asked questions
What is the difference between Rigorous AI Review and Manusights?
Rigorous AI Review is a free academic research project from ETH Zurich that provides AI-generated methodology feedback via OpenAI APIs. Manusights is a production review platform with citation verification against 500M+ papers, vision-based figure analysis, journal-specific readiness scoring, and SOC 2 Type II privacy. They occupy different categories: research experiment vs production tool.
Is it safe to upload a manuscript to Rigorous?
Rigorous's terms of service explicitly state that manuscripts are stored on Backblaze and processed via OpenAI APIs. The terms also state users should not submit confidential, sensitive, or proprietary information. No deletion timeline is specified. Manusights uses Anthropic zero-retention with SOC 2 Type II: manuscripts are processed once and permanently deleted.
Is Rigorous AI Review really free?
Yes. Rigorous AI Review is completely free. It is funded by ETH SPH (Swiss School of Public Health) as an academic research project. Manusights also offers a free scan for readiness scoring and journal-fit assessment.
What does Rigorous AI Review actually provide?
Rigorous provides AI-generated feedback on methodology, clarity, and impact. It does not verify citations, analyze figures, score journal-specific readiness, or provide human expert review. The output disclaimer states it does not constitute formal peer review.
Can I use both services?
If your manuscript is not privacy-sensitive, you can use Rigorous for free methodology feedback, then run the free Manusights scan for readiness scoring and journal-fit assessment. If your manuscript contains unpublished data or proprietary methods, skip Rigorous and start with Manusights.
Who created Rigorous AI Review?
Rigorous was created by Robert Jakob and Kevin O'Sullivan as a research project at ETH Zurich, funded by ETH SPH (Swiss School of Public Health). The project is open-source under an MIT license. Researchers from ETH Zurich, Stanford, Harvard, MIT, Bern, and St. Gallen have tested the service.
Reference library
Use the core publishing datasets alongside this guide
This article answers one part of the publishing decision. The reference library covers the recurring questions that usually come next: whether the package is ready, what drives desk rejection, how journals compare, and what the submission requirements look like across journals.
Checklist system / operational asset
Elite Submission Checklist
A flagship pre-submission checklist that turns journal-fit, desk-reject, and package-quality lessons into one operational final-pass audit.
Flagship report / decision support
Desk Rejection Report
A canonical desk-rejection report that organizes the most common editorial failure modes, what they look like, and how to prevent them.
Dataset / reference hub
Journal Intelligence Dataset
A canonical journal dataset that combines selectivity posture, review timing, submission requirements, and Manusights fit signals in one citeable reference asset.
Dataset / reference guide
Peer Review Timelines by Journal
Reference-grade journal timeline data that authors, labs, and writing centers can cite when discussing realistic review timing.
Final step
Run the scan before you spend more on editing or external review.
Use the Free Readiness Scan to get a manuscript-specific signal on readiness, fit, figures, and citation risk before choosing the next paid service.
Anthropic Privacy Partner. Zero-retention manuscript processing.