Our Story
Great science shouldn't take years to publish because nobody told you what was wrong.
My father is a scientist. A good one. And I watched him struggle to get published for years.
It wasn't the science. His research was solid. The problem was everything around it: framing that didn't land, a discussion section that buried the main finding, statistical presentation that gave reviewers easy targets. He'd submit, wait three months, get a vague rejection, and start over. Months of work, sitting in a queue, going nowhere.
I didn't fully understand what was happening until later. I studied computer science, went into healthcare consulting, then private equity. I spent years evaluating companies and technologies built on scientific research. I saw the system from the outside: which discoveries got funded, which papers shaped clinical decisions, which research changed how people thought about problems. And I started to notice a pattern.
The science that got published wasn't always the best science. It was the best-packaged science.
Then I moved to Boston, and I heard the same story everywhere.
A postdoc who spent 14 months bouncing between journals before publishing a finding that ended up highly cited. A PI whose student's paper got desk rejected three times because the cover letter didn't explain why the journal should care. A researcher who lost a grant renewal because their best work was still “under review” when the deadline hit.
The researchers I met in the Harvard and MIT ecosystem told me these stories constantly. And then they'd say something like:
“I reviewed a paper last week that had a great finding. Would have been desk rejected at any top journal. The framing was all wrong. If someone had just told them to fix the introduction, it would have sailed through.”
These were scientists who'd published in Nature, Cell, Science. People who sat on editorial boards and reviewed dozens of papers a year. They could spot the problems in a manuscript in minutes. They knew exactly what editors would flag, what reviewers would push back on, which claims would trigger skepticism.
But that expertise was locked behind informal networks.
The feedback you get depends on where you are.
Well-connected lab
- Senior colleague tears your draft apart before you submit
- PI knows which journals want what, and tells you
- Lab meeting catches the weak figure and the overclaimed discussion
- Paper goes out polished. Gets accepted faster.
Everyone else
- Submit your best guess and hope for the best
- Wait 3 months for a vague rejection letter
- Resubmit elsewhere, wait another 3 months
- A year gone. Same paper. Fixable problems nobody mentioned.
Your paper's fate shouldn't depend on whether you happen to know someone who reviewed for Nature last month.
So we built the feedback system that should have existed all along.
I went to the source.
Not to language editors. Not to AI companies. To the scientists who actually make publication decisions: researchers publishing in Cell, Nature, and Science. People who sit on editorial boards, review dozens of manuscripts a year, and know from direct experience what gets a paper accepted and what gets it desk rejected in nine days.
Manusights was built by this group. Not advised by them, not inspired by them. Built by them. The reviewers in our network co-developed the diagnostic criteria, trained the AI on their own peer review documents alongside published research, and are the same people who conduct the expert reviews today. When our system flags a methodology concern or a framing problem, it's applying judgment that came from scientists currently publishing at the highest level of their fields.
That's not something that can be replicated by building better software. It requires the scientists themselves.
Every researcher should have access to the kind of honest, expert feedback that well-connected labs get informally. Not language editing. Not formatting checks. Real scientific critique from people who have already reviewed for your target journal.
Our reviewers are paid, accountable, and bring the same rigor they'd bring to a journal review. The difference is their feedback goes to you before submission, when you can still fix things.
The verification standard: why we exist in a different category.
In 2025, 21% of peer reviews at ICLR were fully AI-generated. 100 hallucinated citations passed review at NeurIPS. The market responded with cheap AI tools that generate plausible-sounding feedback for $5, and they all share the same core problem: they don't verify their own output.
Manusights starts from a different premise: verify first, then analyze.
Every citation in every report is checked against CrossRef and PubMed before delivery. If a reference can't be confirmed, it doesn't appear. No hallucinated DOIs. No fabricated author names. No citations to papers that don't exist. Other AI tools hallucinate citations freely. Ours won't.
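For readers who want the mechanics, here is a minimal sketch of what verify-first can look like. This is an illustration, not our production pipeline: the function names and the citation structure are hypothetical, though the CrossRef REST API and NCBI E-utilities endpoints it calls are real public services.

```python
import requests

def doi_exists(doi: str) -> bool:
    """Check a DOI against the public CrossRef REST API.

    CrossRef returns HTTP 200 for a real record and 404 for one that
    does not exist -- which is exactly what a hallucinated citation
    looks like.
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

def pmid_exists(pmid: str) -> bool:
    """Check a PubMed ID via the NCBI E-utilities esummary endpoint."""
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi",
        params={"db": "pubmed", "id": pmid, "retmode": "json"},
        timeout=10,
    )
    record = resp.json().get("result", {}).get(pmid, {})
    # Nonexistent PMIDs come back with an "error" field instead of a summary.
    return bool(record) and "error" not in record

def keep_verified(citations: list[dict]) -> list[dict]:
    """Drop any citation whose identifier cannot be confirmed.

    Verify first, then analyze: a reference that fails both lookups
    never appears in the report.
    """
    return [
        c for c in citations
        if (c.get("doi") and doi_exists(c["doi"]))
        or (c.get("pmid") and pmid_exists(c["pmid"]))
    ]
```

A production system adds retries, rate limiting, and metadata cross-checks (do the authors and title actually match the record?), but the principle is the same: if a lookup fails, the citation is dropped, not rephrased.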
Beyond verification, we trained on something no public model has access to: the actual peer review documents our network has written for Cell, Nature, and Science submissions. Not published papers. The reviews themselves. The ones that say “the n of 3 in Figure 2 is insufficient” or “this discussion overclaims causation from correlational data.”
That training data doesn't exist anywhere else. You can't scrape it from the internet. You can't build it without the reviewers. It took years of actual review work to accumulate.
The result is a diagnostic that verifies its own output and reflects how active reviewers think. For the author trying to get accepted, that combination is the whole game.
Ready to see your manuscript through a reviewer's eyes?
Start with the free Pre-Submission Diagnostic. It flags the weak points before you submit.
The Team
35+ scientists who've been on both sides of peer review
Our reviewers don't just give feedback. They publish in top journals, serve on editorial boards, and know from experience what makes the difference between acceptance and rejection.
- 500+ papers published
- 35+ in Cell, Nature, Science
- 10+ disciplines covered
“I kept reviewing papers that had good science but obvious presentation problems. The kind of thing I could fix in a 20-minute conversation. I joined Manusights because I wanted that conversation to happen before the rejection letter.”
Manusights reviewer, former editorial board member
Why we don't publish reviewer names
Our reviewers are active scientists with review assignments at the same journals our clients are submitting to. Naming them publicly would compromise their ability to serve as anonymous peer reviewers for those journals. It's the same reason journals don't publish their reviewer lists.
What we can say: they're PIs, associate professors, and senior postdocs at institutions including Harvard Medical School, Stanford, MIT, Johns Hopkins, UCSF, University of Cambridge, Karolinska Institutet, and Max Planck. They hold active review assignments at journals including Cell, Nature, Science, NEJM, and The Lancet. They co-developed the diagnostic criteria the AI uses, and they're the same people who conduct the expert reviews.
Several of our reviewers came through research collaborations in the Harvard/MIT/Stanford ecosystem. Others were referred by colleagues already in the network. We don't recruit through job boards or freelancing platforms. Every reviewer has a track record of CNS-level publications and active editorial involvement in their field.
If you need to verify our reviewer quality before purchasing, email Erik directly and we'll share relevant credentials for your field under NDA.
What we believe
Your science deserves honest feedback.
Not encouragement. Not grammar corrections. Honest critique from someone who knows your field, has no reason to be polite, and will tell you exactly what a journal reviewer would say.
Expert judgment can't be automated.
Knowing that a particular antibody doesn't work at low concentrations, or that Reviewer 2 will flag this statistical approach. That's experience. You can't shortcut it.
Feedback should come before rejection.
The current system makes you wait months to learn what was wrong with your paper. We think you should know before you submit, when you can still do something about it.
Access shouldn't depend on connections.
If you're at a top lab, you get this feedback informally over coffee. If you're not, you're on your own. That's the gap we exist to close.
If you can't verify it, don't publish it.
AI tools that hallucinate citations are making science worse, not better. We verify every reference against CrossRef and PubMed. If an AI review tool doesn't check its own work, it's part of the problem.
Transparency builds trust.
We validated our system against real peer review and published the results. We think every AI tool making claims about scientific accuracy should do the same.
Your work stays yours
You're trusting us with unpublished research. We take that seriously.
Formal NDAs
Every reviewer signs a confidentiality agreement
Anthropic zero-retention
Our AI runs on enterprise infrastructure that retains nothing you submit
Single reviewer access
Only your assigned reviewer sees your work
100% IP retention
Your ideas and data remain entirely yours
My dad eventually got his papers published. It just took longer than it should have. Manusights exists so the next researcher doesn't have to figure it out alone.

Erik Jia, Founder
Ready to give your manuscript a real shot?
Two ways to get started, depending on where you are.
Stage 1 · free
AI Pre-Submission Diagnostic
Six-section cited report with live literature search. Flags what would get your paper desk-rejected. In your inbox in 30 minutes.
Get Free Readiness Scan
Stage 2 · $1,000+
Human Expert Review
Field-matched scientist who has published in and reviewed for your target journal. Written report in 3-7 days, under NDA.
See expert review
Not sure which? Send us a message and we'll tell you honestly.