Methodology

How our free tools work

We document everything our two free tools do — and don’t do — so you can decide whether to act on a result. No black boxes; no hand-wavy “AI says.”

Data handling

  • Your text is never used to train any model. Anthropic’s API runs in zero-retention mode under our contract. OpenAI is not used for either tool today.
  • Time-limited caching. Cached results are keyed by a content hash and expire after 7 days for journal-fit and 24 hours for citation checks. The original abstract or claim is never stored alongside the cache key (a minimal sketch follows this list).
  • No account, no email gate. Both tools work without sign-up. Rate limits are per-IP, not per-account.
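
In caching terms, the policy above reduces to hashing the input and storing only the result JSON under that hash with a per-tool expiry. The sketch below is illustrative, assuming a Redis-style store; it is not our production code.

  import hashlib
  import json

  # Illustrative TTLs matching the policy above, in seconds.
  TTL_SECONDS = {
      "journal-fit": 7 * 24 * 3600,   # 7 days
      "citation-check": 24 * 3600,    # 24 hours
  }

  def cache_key(tool: str, text: str) -> str:
      """Key results by a content hash so the raw abstract or claim is never stored."""
      digest = hashlib.sha256(text.strip().encode("utf-8")).hexdigest()
      return f"{tool}:{digest}"

  def cache_result(store, tool: str, text: str, result: dict) -> None:
      """Store only the result JSON under the hash key, with the tool's expiry."""
      store.set(cache_key(tool, text), json.dumps(result), ex=TTL_SECONDS[tool])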

Journal Fit Predictor

Corpus

1,030 venues: 1,000 journals from OpenAlex, filtered to those with an ISSN, more than 500 lifetime publications, and more than 50,000 lifetime citations. Plus 30 hand-curated top-tier CS conferences (NeurIPS, ICML, ICLR, AAAI, CVPR, ICCV, ECCV, ACL, EMNLP, NAACL, KDD, WWW, SIGIR, SIGGRAPH, SIGCOMM, NSDI, OSDI, SOSP, ISCA, MICRO, PLDI, POPL, USENIX Security, CCS, IEEE S&P, NDSS, FOCS, STOC, SODA, CRYPTO).

For each venue we use a one-sentence scope description, four field keywords, an impact-factor value, and an acceptance rate where known. Conferences carry the most recently published acceptance rate from their Call for Papers; for journals, the acceptance rate is left blank when the publisher does not disclose it.
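
For reference, a corpus like this can be pulled from the OpenAlex sources endpoint. The sketch below shows one way to apply the filters described above; the filter strings and field names are our best reading of the OpenAlex API, and the script is illustrative rather than the production corpus builder.

  import requests

  OPENALEX_SOURCES = "https://api.openalex.org/sources"

  def fetch_journal_corpus(min_works: int = 500, min_citations: int = 50_000) -> list[dict]:
      """Page through OpenAlex journal sources and keep those matching the corpus criteria."""
      params = {
          "filter": f"type:journal,works_count:>{min_works},cited_by_count:>{min_citations}",
          "per-page": 200,
          "cursor": "*",
      }
      venues = []
      while params["cursor"]:
          page = requests.get(OPENALEX_SOURCES, params=params, timeout=30).json()
          for src in page["results"]:
              if not src.get("issn_l"):  # require an ISSN
                  continue
              venues.append({
                  "name": src["display_name"],
                  "issn": src["issn_l"],
                  "citation_impact": src.get("summary_stats", {}).get("2yr_mean_citedness"),
              })
          params["cursor"] = page["meta"].get("next_cursor")
      return venues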

Impact factor source

Result cards show one of two impact-factor values, labeled accordingly:

  • JCR IF (2024). For ~120 high-traffic journals where we maintain a hand-curated profile (e.g. Nature, Cell, NEJM, JACS, Advanced Materials), the value is the verified Clarivate Journal Citation Reports 2024 impact factor.
  • Citation impact. For the rest of the 1,030-venue corpus, the value is OpenAlex’s 2-year mean citedness, a proxy that correlates closely with JCR impact factors but is computed with a different methodology. We display it as “Citation impact” rather than “IF” so the distinction is visible at a glance.

Differences between the two are typically small for high-citation journals and larger for niche journals where citation patterns diverge from JCR’s included-citations methodology. Both values are directionally useful for venue selection; neither should be treated as a precise rank.
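
In display terms, the choice between the two labels is just a lookup against the hand-curated profile set. A minimal sketch, with hypothetical names and the curated values elided:

  # ~120 hand-curated journals carry a verified Clarivate JCR 2024 impact factor.
  CURATED_JCR_2024: dict[str, float] = {
      # "Journal name": verified JCR 2024 IF (values maintained by hand, elided here)
  }

  def impact_display(venue_name: str, openalex_2yr_mean_citedness: float) -> tuple[str, float]:
      """Return the label and value shown on a result card."""
      if venue_name in CURATED_JCR_2024:
          return "JCR IF (2024)", CURATED_JCR_2024[venue_name]
      return "Citation impact", openalex_2yr_mean_citedness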

Retrieval and ranking

We do not embed-and-search. We send the entire 1,030-venue corpus, your title, and your abstract to Claude Haiku 4.5 in a single call and ask it to rank the top 5. The model returns a fit score (0–100), a tier (stretch / realistic / safe), a one-sentence reason, and a one-sentence “what to strengthen” per match. This costs roughly $0.04 per query.

We chose this over embedding-search because the explanation matters more than the rank. Embeddings can pick the right journal but can’t tell you why it’s right or what to fix.
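
A minimal sketch of the single-call ranking using the Anthropic Python SDK. The model identifier, prompt wording, and response schema here are illustrative assumptions, not the production prompt.

  import json
  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  def rank_venues(corpus: list[dict], title: str, abstract: str) -> list[dict]:
      """Send the full venue corpus plus the paper's title and abstract in one call."""
      prompt = (
          "Rank journal fit for the paper below against this corpus of venues (JSON):\n"
          f"{json.dumps(corpus)}\n\n"
          f"Title: {title}\nAbstract: {abstract}\n\n"
          "Return a JSON list of the 5 best-fitting venues, each with fields "
          "'venue', 'fit_score' (0-100), 'tier' (stretch|realistic|safe), "
          "'reason' (one sentence), and 'strengthen' (one sentence)."
      )
      response = client.messages.create(
          model="claude-haiku-4-5",  # assumed model identifier for Claude Haiku 4.5
          max_tokens=1500,
          messages=[{"role": "user", "content": prompt}],
      )
      return json.loads(response.content[0].text)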

Refusal cases

  • Not an abstract. Marketing copy, lorem ipsum, gibberish, or a prompt-injection attempt returns 400 with a clear error.
  • No corpus match. If the model can’t find a single venue with non-zero fit (rare), the API returns 422 with a scope disclosure.
  • Low confidence. If the top fit score is below 60 we surface an amber banner. Scores below 60 typically mean the corpus doesn’t cover the paper’s field well — most often pure mathematics, social sciences, or earth sciences (covered thinly), or a niche subfield within a covered area.
  • Under 200 characters. The minimum-length gate counts characters, not words, because pasted abstracts sometimes lose their line breaks, fusing words together and making word counts unreliable. These gates are sketched after this list.
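
A minimal sketch of how these gates compose, with hypothetical helper and error names; the real not-an-abstract gate is more involved than the placeholder shown here.

  MIN_CHARS = 200
  LOW_CONFIDENCE_THRESHOLD = 60

  def looks_like_abstract(text: str) -> bool:
      """Placeholder for the real gate that rejects marketing copy, gibberish, and prompt injection."""
      return len(text.split()) > 20

  def validate_input(abstract: str) -> None:
      """Pre-call checks; failures map to the HTTP 400 cases above."""
      if len(abstract.strip()) < MIN_CHARS:
          raise ValueError("400: abstract must be at least 200 characters")
      if not looks_like_abstract(abstract):
          raise ValueError("400: input does not look like a paper abstract")

  def postprocess(matches: list[dict]) -> dict:
      """Post-call checks; covers the 422 no-match case and the amber low-confidence banner."""
      if not any(m["fit_score"] > 0 for m in matches):
          raise LookupError("422: no venue in the corpus has non-zero fit")
      return {
          "matches": matches,
          "low_confidence": max(m["fit_score"] for m in matches) < LOW_CONFIDENCE_THRESHOLD,
      }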

What this tool does NOT do

  • It does not assess novelty or scientific merit. The score is scope alignment plus tier realism, nothing more.
  • It does not read your full manuscript — only what you paste.
  • It does not know your editor relationships, special-issue calls, or current desk-rejection patterns at the venue. Treat results as one input.
  • It does not predict acceptance probability. The “tier” field is editorial judgment from the model, not a calibrated probability.

Citation Claim Checker

Paper search

We extract the citation from your sentence (author-year, two-author, three-author, et al., bracket-numbered, or DOI). We then search Semantic Scholar (200M+ papers), CrossRef, and PubMed, and rank the merged candidates by author-surname match, year proximity, and title-word overlap with your sentence.
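
A minimal sketch of how merged candidates might be scored against the extracted citation. The weights and field names are illustrative, not the production ranking.

  def score_candidate(candidate: dict, surname: str, year: int, sentence: str) -> float:
      """Rank a Semantic Scholar / CrossRef / PubMed hit against the citation in your sentence."""
      surname_match = 1.0 if surname.lower() in [
          a.lower() for a in candidate.get("author_surnames", [])
      ] else 0.0
      year_proximity = max(0.0, 1.0 - abs(candidate.get("year", 0) - year) / 5)
      title_words = set(candidate.get("title", "").lower().split())
      sentence_words = set(sentence.lower().split())
      title_overlap = len(title_words & sentence_words) / max(len(title_words), 1)
      # Illustrative weights: surname match dominates, then year proximity, then title overlap.
      return 3.0 * surname_match + 1.0 * year_proximity + 1.0 * title_overlap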

Verdict

When we find a paper with an abstract, we send the abstract plus your claim to Claude Haiku 4.5 with explicit verdict definitions. The model returns one of:

  • Supported. The claim accurately reflects what the paper found.
  • Partially Supported. The paper addresses the topic but the claim overstates, oversimplifies, or misses a qualifier.
  • Not Supported. The paper addresses the same topic but contradicts or does not support the specific claim.
  • Wrong Paper. The search returned a different paper, usually one by an author with the same surname. The claim itself may be fine.
  • Unable to Verify. No paper found, no abstract available, or the abstract doesn’t contain enough information to judge.
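
The verdict set is closed; in code it amounts to an enum plus definitions passed to the model alongside the abstract and the claim. A minimal sketch, with the definitions paraphrased from the list above:

  from enum import Enum

  class Verdict(str, Enum):
      SUPPORTED = "Supported"
      PARTIALLY_SUPPORTED = "Partially Supported"
      NOT_SUPPORTED = "Not Supported"
      WRONG_PAPER = "Wrong Paper"
      UNABLE_TO_VERIFY = "Unable to Verify"

  # Definitions the model is asked to choose between (paraphrased).
  VERDICT_DEFINITIONS: dict[Verdict, str] = {
      Verdict.SUPPORTED: "The claim accurately reflects what the paper found.",
      Verdict.PARTIALLY_SUPPORTED: "The paper addresses the topic, but the claim overstates, oversimplifies, or drops a qualifier.",
      Verdict.NOT_SUPPORTED: "The paper addresses the same topic but contradicts or does not support the specific claim.",
      Verdict.WRONG_PAPER: "The retrieved paper is not the one the citation refers to.",
      Verdict.UNABLE_TO_VERIFY: "No paper found, no abstract available, or the abstract is insufficient to judge.",
  }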

What this tool does NOT do

  • It reads only the cited paper’s abstract, not the full text. Methods, supplementary findings, and figures are invisible to the verdict.
  • It does not check retraction status (yet). For a manuscript-level retraction sweep, run the full readiness scan.
  • It assumes the first author’s surname is enough to disambiguate. Common surnames (Lee, Wang, Smith) sometimes return the wrong paper — this is exactly what the “Wrong Paper” verdict is for.
  • It is rate-limited to 20 checks per hour per IP, plus a soft localStorage limit of 3 per day that surfaces the full-manuscript option (the server-side limit is sketched below).
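
The server-side limit can be pictured as a rolling per-IP window; the in-memory sketch below is illustrative only (the localStorage soft limit lives in the browser and is separate).

  import time
  from collections import defaultdict

  WINDOW_SECONDS = 3600
  MAX_CHECKS_PER_WINDOW = 20

  _recent_checks: dict[str, list[float]] = defaultdict(list)

  def allow_check(ip: str) -> bool:
      """Allow at most 20 citation checks per rolling hour per IP (in-memory sketch)."""
      now = time.time()
      _recent_checks[ip] = [t for t in _recent_checks[ip] if now - t < WINDOW_SECONDS]
      if len(_recent_checks[ip]) >= MAX_CHECKS_PER_WINDOW:
          return False
      _recent_checks[ip].append(now)
      return True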

How to cite these tools

If you reference results from either tool in a manuscript, methods section, or supplementary materials, please cite as:

Manusights. (2026). Journal Fit Predictor [Free academic tool].
  https://manusights.com/tools/journal-fit
  (Accessed: YYYY-MM-DD)

Manusights. (2026). Citation Claim Checker [Free academic tool].
  https://manusights.com/tools/citation-claim-checker
  (Accessed: YYYY-MM-DD)

Want a manuscript-level signal, not a paste-level one? The full readiness scan reads your entire manuscript: it verifies every citation against its source, scores desk-reject risk for your target journal, names reviewer-flag patterns, and produces a prioritized fix list. The preview is free; the full report is $29, only if you want it.

Run the full readiness scan