Is AI Reviewing Your Paper? How Common It Is and What Researchers Can Do
21% of ICLR 2025 reviews were fully AI-generated. Not AI-assisted: fully written by an LLM. Here's what that means for your next submission and how to protect your work.
Research Scientist, Neuroscience & Cell Biology
Author context
Works across neuroscience and cell biology, with direct expertise in preparing manuscripts for PNAS, Nature Neuroscience, Neuron, eLife, and Nature Communications.
Readiness scan
Find out if this manuscript is ready to submit.
Run the Free Readiness Scan before you submit. Catch the issues editors reject on first read.
How to use this page well
These pages work best when they behave like tools, not essays. Use the quick structure first, then apply it to the exact journal and manuscript situation.
| Question | What to do |
|---|---|
| Use this page for | Getting the structure, tone, and decision logic right before you send anything out. |
| Most important move | Make the reviewer-facing or editor-facing ask obvious early rather than burying it in prose. |
| Common mistake | Turning a practical page into a long explanation instead of a working template or checklist. |
| Next step | Use the page as a tool, then adjust it to the exact manuscript and journal situation. |
Quick answer: Is AI reviewing your paper? Increasingly, yes. The strongest public estimate comes from Pangram Labs' ICLR analysis, which flagged roughly 21% of reviews as fully AI-generated, while a separate 2026 arXiv study estimated about 12% AI-generated review text in Nature Communications reviews from 2025. For authors, the practical takeaway is simple: assume some feedback may be AI-assisted, learn how to spot shallow AI review, and get real human judgment on the scientific risks before submission.
AI peer review is no longer just a conference curiosity. Conferences like ICLR now explicitly remind reviewers that they remain responsible for any LLM-generated content under their name, and Nature Portfolio materials say reviewers should ask the editor before using AI tools to help write a review. That means the problem is now visible enough that organizers and publishers are writing policy around it directly.
AI peer review by the numbers
Let's be specific about the scale.
A February 2026 study published on arXiv analyzed AI-generated content across ICLR and Nature Communications reviews. Its findings: approximately 20% of ICLR reviews and 12% of Nature Communications reviews in 2025 were classified as AI-generated. The trend line goes in one direction: up, and steeply.
Meanwhile, on the paper-writing side, Pangram's analysis of ICLR submissions found that over 10% of submitted papers were majority AI-generated. The more AI-generated text in a submission, the worse the reviews it received. But plenty of these papers still made it through.
Here's the uncomfortable arithmetic: if 10-20% of papers are AI-written and 20% of reviews are AI-generated, a non-trivial chunk of the scientific literature is now AI writing about AI writing, reviewed by AI. Nobody human meaningfully touched it.
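To make that arithmetic concrete, here is a back-of-envelope sketch. It assumes the two rates are independent, which is almost certainly optimistic, since the same incentives drive both behaviors; the point is the order of magnitude, not a precise figure.

```python
# Back-of-envelope estimate: what fraction of paper-review pairs involve
# AI on both sides? Assumes independence between AI-written papers and
# AI-generated reviews, which likely understates the real overlap.
ai_paper_rate_low, ai_paper_rate_high = 0.10, 0.20  # share of majority-AI submissions
ai_review_rate = 0.20                               # share of AI-generated reviews

for paper_rate in (ai_paper_rate_low, ai_paper_rate_high):
    both_ai = paper_rate * ai_review_rate
    print(f"AI paper rate {paper_rate:.0%} -> {both_ai:.0%} of paper-review pairs are AI-on-AI")
# Roughly 2-4% of pairs before accounting for any correlation between the two.
```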
AI-assisted review vs AI-generated review
| Review mode | What is happening | Why it matters to authors |
|---|---|---|
| Human review | A reviewer reads the paper and writes the judgment themselves | You still get accountable field judgment |
| AI-assisted review | A reviewer uses AI to help draft or polish the review | The scientific judgment may still be human, but the quality signal gets murkier |
| AI-generated review | The model effectively writes the review itself | The review can look polished while missing the actual science |
What we see in pre-submission review
In our pre-submission review work, the practical problem is not just that AI-generated reviews exist. It is that they often fail in predictable ways that matter to authors: generic praise, no figure-level engagement, no sign the reviewer understands the specific method, and no clear judgment about whether the central claim is actually exposed.
That changes what authors should do before submission. If there is a reasonable chance the official review will be shallow or AI-assisted, then the manuscript needs a stronger human screening layer before it enters that system. Otherwise the first real expert judgment may arrive too late, after the paper has already burned a review cycle.
How to tell if your paper was reviewed by AI
When you get your reviews back, read them with fresh eyes. Not for what they say about your paper, but for how they say it.
Generic praise that could apply to any paper. "The authors present an interesting study on an important topic." "The methodology is generally sound." If you could swap the review onto a completely different manuscript and it would still make sense, that's a red flag.
Cookie-cutter structure with no real engagement. AI reviews love templates. Summary paragraph, then "Strengths" (three bullet points), then "Weaknesses" (three bullet points), then "Minor Comments." The problem is when every section feels like it was generated from the abstract alone, without evidence that the reviewer read the actual paper.
Hallucinated citations in the review itself. This is the smoking gun. If a reviewer says "as shown in Smith et al. (2023)" or "consistent with the findings of Chen and colleagues (Nature, 2024)," check those references. AI-generated reviews sometimes cite papers that don't exist. A human reviewer might misremember a citation. Only an AI invents one from scratch.
Suspiciously fast turnaround. If a journal's average review time is 4-6 weeks and you get a detailed review back in 48 hours, someone might have outsourced the work.
Absence of field-specific judgment. Human reviewers bring their own experience. They'll say things like "This approach has been tried before by the Johnson lab with mixed results" or "The authors should consider whether batch effects could explain Figure 3." AI reviews rarely contain this kind of domain-grounded specificity.
Weird hedging patterns. LLMs default to diplomatic nothingness. "While the authors present some evidence, it would be beneficial to further elaborate on..." Human reviewers tend to be more direct, even blunt.
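If you want to make the hallucinated-citation check above systematic rather than relying on eyeballing, here is a minimal sketch. It only pulls citation-like strings ("Smith et al. (2023)") out of a review so you can verify each one by hand against PubMed, Crossref, or Google Scholar; it is not a detector, and the hypothetical regex will miss numbered or nonstandard citation styles.

```python
import re

# Extract "Author et al. (YYYY)" and "Author and Author (YYYY)" style strings
# from a review so each can be checked manually against a literature database.
# Illustrative pattern only; numbered citations and unusual formats slip through.
CITATION_PATTERN = re.compile(
    r"\b[A-Z][a-zA-Z'-]+(?:\s+(?:et al\.|and\s+[A-Z][a-zA-Z'-]+))?\s*\((?:19|20)\d{2}\)"
)

def extract_citations(review_text: str) -> list[str]:
    """Return the unique citation-like strings found in a review."""
    return sorted(set(CITATION_PATTERN.findall(review_text)))

review = "Consistent with Smith et al. (2023), the effect size seems plausible."
for citation in extract_citations(review):
    print(citation)  # verify each one actually exists before responding
```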
Why AI reviews are often worse
An AI can summarize your paper. It can check whether your references are formatted correctly. It can probably flag obvious statistical errors. But that's not what peer review is supposed to do.
Peer review exists to answer questions that require actual expertise:
Is this actually new, and does the methodology work in practice? An LLM can search its training data for similar-sounding abstracts, but it can't tell you whether the specific combination of approach, model system, and finding genuinely advances the field. A reviewer who has personally run Western blots on this protein knows that it's notoriously difficult to detect at low expression levels. A reviewer who has done single-cell RNA-seq knows that the clustering algorithm the authors used tends to oversplit populations. This practical knowledge doesn't live in papers. It lives in researchers' hands and heads.
Is the interpretation reasonable, and does the framing match the evidence? The data might be real and the statistics correct, but the conclusion might still be wrong. A human reviewer can say "I've seen this artifact before" or "there's a simpler explanation you haven't considered." Authors oversell, and a good reviewer pushes back: "You're claiming this drives metastasis, but you've only shown it in vitro." An AI might parrot a generic "consider toning down the claims" but can't tell you which claims are overreaching and why.
The result is that AI reviews tend to be generically correct but specifically useless. They'll tell you to "strengthen the statistical analysis" without telling you what's wrong with it. They're the review equivalent of a participation trophy.
The deeper problem
Here's the part that actually worries me.
Peer review has always been imperfect. But it had one thing going for it: a human being with relevant expertise made a judgment call about whether your science held up. That judgment was grounded in experience, and the possibility of being called on it kept people honest.
What happens when that human drops out of the loop? We're already seeing the answer. Authors use AI to write papers. Reviewers use AI to review them. The AI reviewing the paper can't tell that the AI-written paper has hallucinated data, because the reviewer-AI doesn't actually know anything. It's pattern-matching text.
In July 2025, Nature reported that researchers had started hiding prompt injection attacks in their papers. Invisible text instructions designed to manipulate AI reviewers into writing positive reviews. Authors are optimizing their manuscripts not for human readers, but for the LLMs they expect will review them.
The NeurIPS hallucinated citations scandal makes more sense in this context. Over 100 fake references made it into accepted papers. That doesn't happen if humans are carefully checking citations. It happens when the entire chain has been partially automated: AI reviewing AI writing.
The answer isn't AI that reviews better. It's humans who review carefully.
What you can do about it
You can't control who reviews your paper or whether they use AI. But you can take steps to protect your work.
1. If you suspect an AI-generated review, flag it
If you receive a review that's clearly generic, contains hallucinated references, or shows no evidence of engagement with your actual data, write to the editor. Be specific. Say "The reviewer cites Smith et al. (2024) in their comment about Figure 3, but this paper doesn't exist."
2. Choose journals with integrity measures
Some journals have taken concrete steps: Science has explicitly banned AI-generated reviews; Springer Nature requires disclosure of AI use. Factor that into your decision about where to submit. A journal that takes review integrity seriously is more likely to give your paper a fair evaluation.
3. Make your paper AI-review-proof
Papers that are highly specific, tightly argued, and deeply technical are harder for AI to review poorly. If your methods section is precise enough that only someone who's actually done the experiment would understand the decisions you made, an AI-generated review will expose itself by missing those nuances.
4. Get human feedback before you submit
If there's a chance your paper will be reviewed by an AI that can't evaluate methodology or novelty, then you need human expert feedback before that happens. Pre-submission review used to be about catching weaknesses that reviewers would flag. Now it's also about getting substantive, expert feedback that the official review process may no longer provide.
Readiness check
Run the scan to see how your manuscript scores on these criteria.
See score, top issues, and what to fix before you submit.
5. Respond to bad reviews assertively
If you get back comments that are clearly shallow or AI-generated, you can note where a reviewer's comments don't engage with the specifics of your work. Editors notice when a reviewer's comment is generic and the author's response reveals that the reviewer didn't actually read the paper.
The system is more resilient than its critics think, mostly because the people in it genuinely care about getting things right. But if you care about the quality of feedback your paper receives, be proactive. Get expert eyes on your manuscript before you submit. Choose journals that take integrity seriously. The peer review system is only as good as the people participating in it.
What to do if you think AI reviewed your paper
| Situation | Best response |
|---|---|
| Generic review with no manuscript-specific detail | Answer the comments professionally, then flag the lack of specificity to the editor |
| Reviewer cites papers that do not exist | Point to the exact hallucinated citation and ask the editor to assess review quality |
| Feedback ignores core figures or methods entirely | Show where the comments fail to engage with the actual manuscript |
| The review is thin but not obviously AI-generated | Treat it as weak review quality rather than making an AI accusation |
One more step before you submit
Choosing the right journal is half the battle. The other half is knowing whether your manuscript actually meets the bar for it.
The most common mistake is not picking the wrong journal: it's submitting to the right journal before the paper is ready. Journals at the Cell/Nature/Science tier desk-reject 60-80% of submissions without peer review, often for reasons that are fixable before submission. If you need the broader context, AI Peer Review in 2026: The ICLR Problem Isn't Going Away, How to Write a Rebuttal Letter to Journal Reviewers, Desk Rejected? Here's Why (And What to Do Now), How peer review works, and What pre-submission peer review includes cover the adjacent decisions.
At Manusights, every review is done by a human scientist with domain expertise. The kind of person who's actually run the experiments, sat on editorial boards, and knows what good science looks like from the inside. If you want expert feedback on your manuscript before it enters a review system you can't control, that's what we're here for.
The Bottom Line
AI tools are useful as a pre-review pass before human review. They catch language and structure issues quickly. What they can't replace is domain-expert judgment about whether the scientific claim holds up, whether the methodology is appropriate, and whether the paper is scoped correctly for the target journal.
Before submitting, a manuscript readiness and journal-fit check can catch the fit, framing, and methodology gaps that editors screen for on first read.
When AI Review Is Enough vs. When You Need a Human
Not every manuscript needs the same level of feedback. Here's a practical framework:
AI review is probably sufficient when:
- You're checking formatting, structure, and language before submission
- You want a fast first pass to catch obvious gaps in methods or references
- The manuscript has already been through internal lab review and you're polishing
- You need to verify journal-specific requirements (word counts, figure specs, reference style)
You need human expert review when:
- The methodology is novel or unconventional and you're unsure whether reviewers will accept it
- Your claims are strong and you need someone who knows the field to pressure-test them
- You're targeting a high-rejection journal (Cell, Nature, Science tier) where desk rejection is 60-80%
- The paper crosses disciplines and you're not sure the framing works for your target audience
- You've been rejected once and need to understand why before resubmitting
Use both when:
- AI catches the structural and formatting issues first (fast, cheap), then a domain expert evaluates the science, framing, and journal fit. This is the most efficient workflow for high-stakes submissions.
Submit If / Think Twice If
Submit if:
- you want to know whether AI may already be influencing the feedback your paper gets
- you need a practical framework for spotting shallow AI-generated review comments
- you are deciding how much human review to get before a high-stakes submission
Think twice if:
- you are treating AI-generated review detection as a precise science rather than a quality signal
- you plan to accuse a reviewer directly instead of documenting concrete weaknesses for the editor
- the real problem is still the manuscript itself, not the quality of the future reviewer
Last Verified
AI tool features, pricing, and journal policies on AI-generated reviews verified against each provider's website, April 2026. The ICLR and NeurIPS studies cited are from published analyses by Pangram Labs and GPTZero. Journal policies on AI use were checked against Nature, Springer Nature, and Science editorial policy pages.
Frequently asked questions
How common are AI-generated peer reviews?
A February 2026 study found roughly 20% of ICLR reviews and 12% of Nature Communications reviews in 2025 were classified as AI-generated. The trend is increasing.
How can I tell if my paper was reviewed by AI?
Look for generic praise that could apply to any paper, cookie-cutter structure with no real engagement, citations to papers that don't exist, suspiciously fast turnaround, absence of field-specific judgment, and excessive hedging.
Can AI tools replace human peer review?
No. AI tools are effective at language clarity, sentence structure, and detecting common errors. They are not reliable for domain-specific scientific judgment: whether the methodology is appropriate, whether the evidence supports the conclusion, or whether the paper is scoped right for the target journal.
Should I use AI tools or human experts for pre-submission review?
Use AI tools for a first pass on language, clarity, and common structural issues, then get human expert feedback on the scientific substance. The two complement each other: AI review is fast and cheap; expert review covers what AI misses.
Do journals use AI to screen or review submissions?
Some journals are using AI screening tools, primarily for plagiarism, basic methodology checks, and language quality. Most high-IF journals still rely on human peer reviewers for the substantive assessment. That is unlikely to change in the near term.
What should I do if I think my review was AI-generated?
Flag the review to the editor with specific evidence, for example hallucinated citations or generic comments that don't engage with your actual data. Most editors take this seriously, especially at journals that have explicit policies against AI-generated reviews.
Reference library
Use the core publishing datasets alongside this guide
This article answers one part of the publishing decision. The reference library covers the recurring questions that usually come next: whether the package is ready, what drives desk rejection, how journals compare, and what the submission requirements look like across journals.
Checklist system / operational asset
Elite Submission Checklist
A flagship pre-submission checklist that turns journal-fit, desk-reject, and package-quality lessons into one operational final-pass audit.
Flagship report / decision support
Desk Rejection Report
A canonical desk-rejection report that organizes the most common editorial failure modes, what they look like, and how to prevent them.
Dataset / reference hub
Journal Intelligence Dataset
A canonical journal dataset that combines selectivity posture, review timing, submission requirements, and Manusights fit signals in one citeable reference asset.
Dataset / reference guide
Peer Review Timelines by Journal
Reference-grade journal timeline data that authors, labs, and writing centers can cite when discussing realistic review timing.
Best next step
Use this page to interpret the feedback you receive and choose the next sensible move.
If the manuscript is already in the system, the better next step is guidance on timing, follow-up, and what to do while you wait. Save the Free Readiness Scan for the next paper you have not submitted yet.
Guidance first. Use the scan for the next manuscript.