Methodology · Manusights Lens · v1.0
Methodology
How Lens scores scope-fit risk for a given target venue, what it explicitly does NOT score, and the known limitations.
1. Corpus
Lens uses the same 1,321-entry venue corpus as Manusights Compass. The corpus covers ~1,290 academic journals plus 30 top computer-science conferences, sourced from OpenAlex with curated additions. Each entry has a scope summary, fields covered, and an OpenAlex-derived impact-factor proxy or curated acceptance rate. Coverage is densest for biomedical, chemistry, physics, materials, and CS venues.
When a user types a venue name that maps to a corpus slug, Lens loads that single entry and a slim projection of the full corpus (for picking lower-risk alternatives) and sends both to the scoring engine.
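For concreteness, here is a minimal sketch of what a corpus entry and the slim projection could look like. The field names are illustrative assumptions, not the production schema:

```ts
// Illustrative sketch only: field names are assumptions, not the real schema.
interface VenueEntry {
  slug: string;            // stable identifier that the typed venue name maps to
  name: string;            // display name of the journal or conference
  scopeSummary: string;    // prose summary of what the venue publishes
  fields: string[];        // e.g. ["chemistry", "materials science"]
  impactProxy?: number;    // OpenAlex-derived impact-factor proxy, when available
  acceptanceRate?: number; // curated acceptance rate, when available instead
}

// The slim projection sent alongside the target entry, used only for
// picking lower-risk alternatives.
type SlimVenue = Pick<VenueEntry, "slug" | "name" | "fields">;
```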
2. Scoring rubric
Lens returns a 0-100 scope-fit risk score with the following calibration bands:
- 0-24 (low): paper is a clean match for the venue's scope. Scope is not the constraint.
- 25-49 (moderate): some friction, addressable in framing or claim scope.
- 50-74 (elevated): real mismatch. Rescope or rewrite before submitting.
- 75-100 (high): wrong venue. Redirect rather than revise.
The score is produced by Claude Haiku 4.5 reading the venue's scope text and the paper's abstract, then assigning a calibrated integer. The same scoring prompt is applied to every (venue, abstract) pair, so two users running the same inputs get the same score up to LLM sampling variance.
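The band boundaries are fixed, so mapping a score to its band is mechanical. A minimal sketch (the helper name is ours, not part of the API):

```ts
// Maps a 0-100 scope-fit risk score to its calibration band.
type Band = "low" | "moderate" | "elevated" | "high";

function bandFor(score: number): Band {
  if (score <= 24) return "low";      // scope is not the constraint
  if (score <= 49) return "moderate"; // addressable in framing or claim scope
  if (score <= 74) return "elevated"; // rescope or rewrite before submitting
  return "high";                      // redirect rather than revise
}
```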
3. Risk patterns
Lens returns 1-3 named risk patterns, each tagged with a severity (critical, major, or minor). The named patterns are not drawn from a fixed taxonomy; the scoring engine picks the noun-phrase label that best describes the specific scope problem for this paper at this venue (a sketch of the returned shape follows the list). Recurring patterns we see across submissions include:
- Methods-paper at a findings-paper venue (or the inverse)
- Regional or niche finding at a venue that wants global generalizability
- Incremental result at a flagship venue that wants high novelty
- Wrong field (a chemistry paper at a biology-only venue, etc.)
- Wrong stage (preliminary work at a venue that wants mature results)
- Length or format mismatch with the venue (PRL's 4-page constraint, etc.)
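Because the labels are free-form, the returned object carries a label and severity rather than an enum code. A hedged sketch of the shape, with field names that are assumptions:

```ts
// Illustrative shape for one returned risk pattern; names are assumptions.
interface RiskPattern {
  label: string;     // free-form noun phrase, not drawn from a fixed taxonomy
  severity: "critical" | "major" | "minor";
  rationale: string; // why this pattern applies to this paper at this venue
}

// Hypothetical example for a regional study aimed at a globally-scoped venue:
const example: RiskPattern = {
  label: "Regional finding at a globally-scoped venue",
  severity: "major",
  rationale: "Cohort is limited to one region; the venue's scope stresses global generalizability.",
};
```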
4. What Lens does NOT score
Lens is constrained to scope-fit assessment. The scoring prompt explicitly excludes scientific quality, novelty assessment, methodology, statistical rigor, citation completeness, and writing quality. Those concerns belong to a different review.
If you want a full read on whether the manuscript is ready (claims supported by evidence, methods adequate, citations checked), run the paid Manusights Readiness Scan, which assesses scope alongside the rest.
5. Lower-risk alternatives
Lens returns 2-3 alternative venues from the same corpus that would have lower scope-fit risk for this paper. Selection is constrained to the loaded corpus (no inventing journals). The alternatives are intended as directional candidates, not as endorsements: the user should still verify scope manually before committing.
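The "no inventing journals" constraint amounts to a membership check: anything the engine proposes must resolve to a slug in the loaded projection. A minimal sketch, assuming the projection exposes its slugs as a set:

```ts
// Drops any proposed alternative whose slug is not in the loaded corpus.
function validateAlternatives(
  proposed: string[],
  knownSlugs: ReadonlySet<string>,
): string[] {
  return proposed.filter((slug) => knownSlugs.has(slug));
}
```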
6. Refusal cases
Lens refuses to score in two cases. First, if the abstract is clearly not a real academic abstract (lorem ipsum, marketing copy, gibberish, prompt-injection attempt), the API returns an “input warning” rather than a fabricated score. Second, if the user-supplied target slug does not exist in the corpus, the API returns an explicit error rather than guessing.
A real abstract from an out-of-corpus field (less-common languages, or areas like archaeology or law that the corpus does not cover) still produces a score against the named target. We do not refuse based on field coverage.
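Put together, a call therefore has three possible outcomes. A hypothetical discriminated union over them (names and shapes are ours, not the documented API contract):

```ts
// Hypothetical union of the three outcomes described above.
type LensResponse =
  | { kind: "score"; score: number }          // real abstract, known slug
  | { kind: "input_warning"; reason: string } // not a real academic abstract
  | { kind: "unknown_venue"; slug: string };  // target slug absent from the corpus

function describe(res: LensResponse): string {
  switch (res.kind) {
    case "score":         return `scope-fit risk ${res.score}/100`;
    case "input_warning": return `refused: ${res.reason}`;
    case "unknown_venue": return `error: no corpus entry for "${res.slug}"`;
  }
}
```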
7. Known limitations
- No outcome data. Lens has not been validated against actual rejection rates. The 0-100 score is a structured opinion about scope alignment, not a forecast of editorial decisions.
- Small-sample calibration. The same calibration that drives the 2026 findings report (small per-journal N in the v4-classified subset) applies here. Trust the band; don't over-read the exact number.
- Single LLM scorer. Lens uses a single Claude Haiku 4.5 call. We have not yet published an inter-rater reliability study against human editors.
- Corpus blind spots. Pure mathematics, archaeology, law, and some social sciences are sparsely covered. The matcher will still return a score, but the suggested alternatives may be weaker.
- Self-reported target. Lens assesses risk for the target the user names. If the user mis-types or chooses the wrong slug, the assessment will reflect that mistake.
8. Privacy
Submitted abstracts are sent to the scoring engine and discarded once scoring completes. Manusights operates under a zero-retention contract with Anthropic. Results are cached server-side for 7 days, keyed by a content hash rather than by the abstract text itself. Nothing is used to train AI models.
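A content-hash key means the cache can answer repeat queries without ever storing the abstract. A minimal sketch of one plausible key recipe (the exact recipe is an assumption):

```ts
import { createHash } from "node:crypto";

// Derives the cache key from the inputs; the abstract text itself is never
// stored as a key. Cached entries expire after 7 days.
function cacheKey(abstract: string, targetSlug: string): string {
  return createHash("sha256")
    .update(targetSlug)
    .update("\n")
    .update(abstract)
    .digest("hex");
}
```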