Peer Review · 9 min read

AI-Generated Peer Reviews: How Common Are They and What Researchers Can Do

Research Scientist, Neuroscience & Cell Biology

Works across neuroscience and cell biology, with direct expertise in preparing manuscripts for PNAS, Nature Neuroscience, Neuron, eLife, and Nature Communications.


Quick answer

Yes, AI tools can review manuscripts before submission. Options range from automated checkers to professional human-expert review services. Key distinction: AI tools identify structural and language issues; human expert review assesses whether your specific claims are supported by your data and whether your target journal will accept the work. Manusights offers both an AI diagnostic and human expert review from researchers who publish in Cell, Nature, and Science.

At ICLR 2025, researchers analyzed 70,000 peer reviews submitted to one of AI's most prestigious conferences and found that roughly 21% were fully AI-generated.

Not AI-assisted. Not "I used ChatGPT to polish my grammar." Fully written by a large language model, start to finish. The analysis, published by Pangram Labs, covered every review submitted to the conference in a single year.

Around the same time, GPTZero found over 100 hallucinated citations in papers that had already been accepted at NeurIPS. Fake references that don't exist, invented by an LLM, waved through by reviewers who apparently never checked.

And in January 2026, OpenAI launched Prism, a free, LaTeX-native workspace that bakes GPT-5.2 directly into the scientific writing process. Their pitch: "In 2025, AI changed software development forever. In 2026, we expect a comparable shift in science."

AI peer review isn't a future concern. It's here. It's on both sides of the process. And if you're a researcher submitting your work to a journal or conference right now, you need to understand what that means for your paper.

AI peer review by the numbers

Let's be specific about the scale.

A February 2026 study published on arXiv analyzed AI-generated content across ICLR and Nature Communications reviews. Their findings: approximately 20% of ICLR reviews and 12% of Nature Communications reviews in 2025 were classified as AI-generated. The trend line goes in one direction. Up, and steeply.

Meanwhile, on the paper-writing side, Pangram's analysis of ICLR submissions found that over 10% of submitted papers were majority AI-generated. The more AI-generated text in a submission, the worse the reviews it received. But plenty of these papers still made it through.

Here's the uncomfortable arithmetic: if 10-20% of papers are AI-written and 20% of reviews are AI-generated, a non-trivial chunk of the scientific literature is now AI writing about AI writing, reviewed by AI, with no human meaningfully touching any of it.

How to tell if your paper was reviewed by AI

When you get your reviews back, read them with fresh eyes. Not for what they say about your paper, but for how they say it.

Generic praise that could apply to any paper. "The authors present an interesting study on an important topic." "The methodology is generally sound." If you could swap the review onto a completely different manuscript and it would still make sense, that's a red flag.

Cookie-cutter structure with no real engagement. AI reviews love templates. Summary paragraph, then "Strengths" (three bullet points), then "Weaknesses" (three bullet points), then "Minor Comments." The problem is when every section feels like it was generated from the abstract alone, without evidence that the reviewer read the actual paper.

Hallucinated citations in the review itself. This is the smoking gun. If a reviewer says "as shown in Smith et al. (2023)" or "consistent with the findings of Chen and colleagues (Nature, 2024)," check those references. AI-generated reviews sometimes cite papers that don't exist. A human reviewer might misremember a citation. Only an AI invents one from scratch.

Suspiciously fast turnaround. If a journal's average review time is 4-6 weeks and you get a detailed review back in 48 hours, someone might have outsourced the work.

Absence of field-specific judgment. Human reviewers bring their own experience. They'll say things like "This approach has been tried before by the Johnson lab with mixed results" or "The authors should consider whether batch effects could explain Figure 3." AI reviews rarely contain this kind of domain-grounded specificity.

Weird hedging patterns. LLMs default to diplomatic nothingness. "While the authors present some evidence, it would be beneficial to further elaborate on..." Human reviewers tend to be more direct, even blunt.
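If you want a quick first pass over a suspicious review before escalating to the editor, the generic-praise and template-structure signals above can be roughed out in a few lines of code. This is a heuristic sketch, not a real detector: the phrase list and the two-hit threshold are illustrative assumptions, and a flagged review still needs human judgment.

```python
import re

# Illustrative stock phrases that recur in template-style reviews.
# This list is a made-up example, not a validated detector.
STOCK_PHRASES = [
    "interesting study on an important topic",
    "the methodology is generally sound",
    "it would be beneficial to further elaborate",
]

def generic_phrase_hits(review_text: str) -> list[str]:
    """Return the stock phrases found in the review (case-insensitive)."""
    text = review_text.lower()
    return [p for p in STOCK_PHRASES if p in text]

def looks_templated(review_text: str) -> bool:
    # Crude signal: two or more stock phrases, or the bare
    # Strengths / Weaknesses / Minor Comments scaffold.
    scaffold = re.search(
        r"strengths\b.*\bweaknesses\b.*minor comments",
        review_text, re.S | re.I,
    ) is not None
    return len(generic_phrase_hits(review_text)) >= 2 or scaffold
```

A review that trips this check isn't proof of AI authorship, but it tells you where to start checking citations and looking for domain-grounded specifics.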

Why AI reviews are often worse

An AI can summarize your paper. It can check whether your references are formatted correctly. It can probably flag obvious statistical errors. But that's not what peer review is supposed to do.

Peer review exists to answer questions that require actual expertise:

Is this actually new, and does the methodology work in practice? An LLM can search its training data for similar-sounding abstracts, but it can't tell you whether the specific combination of approach, model system, and finding genuinely advances the field. A reviewer who has personally run Western blots on this protein knows that it's notoriously difficult to detect at low expression levels. A reviewer who has done single-cell RNA-seq knows that the clustering algorithm the authors used tends to oversplit populations. This practical knowledge doesn't live in papers. It lives in researchers' hands and heads.

Is the interpretation reasonable, and does the framing match the evidence? The data might be real and the statistics correct, but the conclusion might still be wrong. A human reviewer can say "I've seen this artifact before" or "there's a simpler explanation you haven't considered." Authors oversell, and a good reviewer pushes back: "You're claiming this drives metastasis, but you've only shown it in vitro." An AI might parrot a generic "consider toning down the claims" but can't tell you which claims are overreaching and why.

The result is that AI reviews tend to be generically correct but specifically useless. They'll tell you to "strengthen the statistical analysis" without telling you what's wrong with it. They're the review equivalent of a participation trophy.

The deeper problem

Here's the part that actually worries me.

Peer review has always been imperfect. But it had one thing going for it: a human being with relevant expertise made a judgment call about whether your science held up. That judgment was grounded in experience, and the possibility of being called on it kept people honest.

What happens when that human drops out of the loop? We're already seeing the answer. Authors use AI to write papers. Reviewers use AI to review them. The AI reviewing the paper can't tell that the AI-written paper has hallucinated data, because the reviewer-AI doesn't actually know anything. It's pattern-matching text.

In July 2025, Nature reported that researchers had started hiding prompt injection attacks in their papers. Invisible text instructions designed to manipulate AI reviewers into writing positive reviews. Authors are optimizing their manuscripts not for human readers, but for the LLMs they expect will review them.

The NeurIPS hallucinated citations scandal makes more sense in this context. Over 100 fake references made it into accepted papers. That doesn't happen if humans are carefully checking citations. It happens when the entire chain has been partially automated. AI writing reviewing AI writing.

The answer isn't AI that reviews better. It's humans who review carefully.

What you can do about it

You can't control who reviews your paper or whether they use AI. But you can take steps to protect your work.

1. If you suspect an AI-generated review, flag it

If you receive a review that's clearly generic, contains hallucinated references, or shows no evidence of engagement with your actual data, write to the editor. Be specific. Say "The reviewer cites Smith et al. (2024) in their comment about Figure 3, but this paper doesn't exist."

2. Choose journals with integrity measures

Some journals have taken concrete steps: Science has explicitly banned AI-generated reviews, and Springer Nature requires disclosure of AI use. Factor these policies into your submission decision. A journal that takes review integrity seriously is more likely to give your paper a fair evaluation.

3. Make your paper AI-review-proof

Papers that are highly specific, tightly argued, and deeply technical are harder for AI to review poorly. If your methods section is precise enough that only someone who's actually done the experiment would understand the decisions you made, an AI-generated review will expose itself by missing those nuances.

4. Get human feedback before you submit

If there's a chance your paper will be reviewed by an AI that can't evaluate methodology or novelty, then you need human expert feedback before that happens. Pre-submission review used to be about catching weaknesses that reviewers would flag. Now it's also about getting substantive, expert feedback that the official review process may no longer provide.

5. Respond to bad reviews assertively

If you get back comments that are clearly shallow or AI-generated, you can note where a reviewer's comments don't engage with the specifics of your work. Editors notice when a reviewer's comment is generic and the author's response reveals that the reviewer didn't actually read the paper.

The system is more resilient than its critics think, mostly because the people in it genuinely care about getting things right. But if you care about the quality of feedback your paper receives, be proactive. Get expert eyes on your manuscript before you submit. Choose journals that take integrity seriously. The peer review system is only as good as the people participating in it.

One more step before you submit

Choosing the right journal is half the battle. The other half is knowing whether your manuscript actually meets the bar for it.

The most common mistake isn't picking the wrong journal; it's submitting to the right journal before the paper is ready. Journals at the Cell/Nature/Science tier desk-reject 60-80% of submissions without peer review, often for reasons that are fixable before submission.


At Manusights, every review is done by a human scientist with domain expertise. The kind of person who's actually run the experiments, sat on editorial boards, and knows what good science looks like from the inside. If you want expert feedback on your manuscript before it enters a review system you can't control, that's what we're here for.

The Bottom Line

AI tools are useful as a pre-review pass before human review. They catch language and structure issues quickly. What they can't replace is domain-expert judgment about whether the scientific claim holds up, whether the methodology is appropriate, and whether the paper is scoped correctly for the target journal.
