Run your manuscript through 8 specialist AI agents before you submit. Get a Tier 1–5 acceptance verdict plus recommended target journals from a 1,214-venue index in 15 seconds, calibrated against 23,000 real peer reviews. No signup, unlimited use.
A pre-check is the 60-second equivalent of asking a senior colleague: 'is this paper good enough to send out, or do I need another round?' Most authors never get an honest answer to that question because the people qualified to give it are too busy.
Our pre-check answers it from a title, abstract, and keywords — no full draft, no upload. The output is a calibrated Tier 1–5 verdict (Tier 1 = top venue ready; Tier 5 = needs major work) plus the detected scientific field and three identified research gaps surrounding your topic.
How the verdict is calibrated
Pre-check verdicts are not pulled from a generic LLM. The scorer is calibrated against acceptance rates from 15+ real peer review platforms — OpenReview, eLife, SciPost, PLOS ONE, BMJ Open, Nature Communications, PeerJ, F1000Research and others. A Tier 1 paper is one that, in our held-out validation set, was accepted at a top venue. A Tier 5 paper is one that was desk-rejected.
The model learns the field-specific signals editors look for: methodology coverage, novelty hedging, citation density, abstract structure, scope-versus-claim alignment.
Where to publish — matched target journals
Every pre-check run also returns a ranked shortlist of journals your manuscript fits, drawn from a 1,214-venue index built from our 33,000-paper local library. Each recommendation carries a match score, open-access flag, impact-factor proxy, publisher, tier, and 2–3 similar papers we've already indexed in that venue. A cross-reference against the Beall's archive flags low-curation venues, so authors see the risks up front and keep the final choice in their own hands.
No external API call is made per request: retrieval runs locally against a pre-built SQLite FTS5 index with reciprocal-rank-fusion (RRF) re-ranking, at sub-100 ms p99 and $0 per query. The same panel appears at the bottom of the full AI Review.
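RRF itself is a simple formula: each candidate venue's fused score is the sum of 1/(k + rank) over the individual rankers (e.g. an FTS5 bm25() text ranking and a similar-paper ranking), with k conventionally set to 60. A minimal sketch of the fusion step, with hypothetical venue IDs and two toy rankings standing in for the real index:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists of doc IDs.

    Each document scores 1/(k + rank) in every list it appears in;
    documents ranked well by multiple rankers rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings: one from full-text match, one from similar papers.
bm25_order = ["venue_A", "venue_B", "venue_C"]
similar_order = ["venue_B", "venue_D", "venue_A"]
print(rrf_fuse([bm25_order, similar_order]))
# → ['venue_B', 'venue_A', 'venue_D', 'venue_C']
```

venue_B wins because it places highly in both lists, even though neither ranker put it first and last together; that robustness to any single ranker's noise is the usual reason for choosing RRF over score averaging.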
When to use the pre-check vs the full review
The pre-check is the right call when:
You have a working title and abstract but the full draft isn't done yet — get a verdict before investing 80 more hours.
You're choosing between 2–3 target venues and want a calibrated read on which is realistic.
You just got an R&R and want to know whether the revision lands you above or below the bar.
You're a PhD student and your advisor is on sabbatical for the next month.
What the full 8-agent review adds
When the pre-check says 'almost there', upgrade to the full review for $10 or use a Researcher subscription credit. The full review uploads your PDF and runs 8 specialist agents — Methodology, Formulas, Originality, Literature, Reproducibility, Clarity, Figures, Prior Publication — plus a 12-second concurrent prior-publication check across CrossRef, arXiv, medRxiv, bioRxiv, Unpaywall, and our 900K-paper institutional library.
Full turnaround: under 15 minutes. Output: a per-agent structured report with line-level revision suggestions and a JSON export.
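Structurally, the 12-second concurrent prior-publication check is a fan-out with a deadline: query every source in parallel and keep whatever returns in time. A minimal sketch of that pattern, using stub checkers in place of the real CrossRef/arXiv/medRxiv/bioRxiv/Unpaywall clients (the stubs and the deadline value here are illustrative, not the production code):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed, TimeoutError

SOURCES = ["CrossRef", "arXiv", "medRxiv", "bioRxiv", "Unpaywall", "local-library"]

def check_source(source, title):
    # Stub: a real checker would search this source for near-duplicate
    # titles/abstracts and return candidate prior publications.
    return source, []  # [] = nothing found

def prior_publication_check(title, deadline=12.0):
    """Query every source concurrently; keep whatever finishes in time."""
    results = {}
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        futures = {pool.submit(check_source, s, title): s for s in SOURCES}
        try:
            for fut in as_completed(futures, timeout=deadline):
                source, matches = fut.result()
                results[source] = matches
        except TimeoutError:
            pass  # sources that miss the deadline are dropped from the report
    return results

hits = prior_publication_check("Calibrated pre-submission manuscript scoring")
```

With real network clients you would also want `pool.shutdown(wait=False, cancel_futures=True)` on timeout so a slow source cannot hold the report open past the deadline.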
Frequently asked questions
Is the pre-check really free?
Yes. Unlimited, no signup, no card. The free pre-check exists because the bottleneck in academic publishing is honest pre-submission feedback — we'd rather give that away and earn revenue from authors who go on to use the full review or publish in the open-access journal.
How accurate is the tier verdict?
On a held-out set of 1,000 papers with known editorial outcomes, our scorer's tier predictions matched the actual outcome (within ±1 tier) in 78% of cases. That beats humans' own 'will this be accepted?' self-predictions (typically 55–60% accurate), because the scorer is calibrated against real outcomes rather than gut feel.
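"Within ±1 tier" is a straightforward metric. As a hedged illustration (the numbers below are made up, not our validation data), it reduces to:

```python
def within_one_tier_accuracy(predicted, actual):
    """Fraction of papers whose predicted tier is within ±1 of the real outcome."""
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= 1)
    return hits / len(predicted)

# Toy example (1 = top-venue ready, 5 = needs major work):
pred = [1, 2, 3, 5, 4]
actual = [2, 2, 1, 4, 4]
print(within_one_tier_accuracy(pred, actual))
# → 0.8
```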
Do you store my abstract or use it for training?
No. Pre-check inputs are processed in-memory and not stored beyond the session. The training corpus is exclusively the 23,000 publicly available peer reviews scraped from open-review platforms. Your title and abstract are never added to it.
Does the pre-check identify research gaps?
Yes. The output includes three research gaps the scorer detected around your topic, derived from our 17,000-gap library and OpenAlex's 250M-work graph. Useful for spotting follow-up paper opportunities or for revising your novelty framing.
Does the pre-check recommend target journals?
Yes. Every run returns a ranked shortlist of 5–10 target journals from our 1,214-venue index, with a match score (A–F letter grade), open-access flag, IF proxy, publisher, tier, and 2–3 similar papers we've indexed in that venue. A cross-reference against the Beall's archive flags low-curation venues; flagged venues are never hidden, just annotated with the rationale. Sub-100 ms, $0 per query: the index is baked weekly from our 33,000-paper local library, so requests never touch a paid API.
Can I run it more than once on the same manuscript?
Yes — please do. Iterating between the pre-check and revision is exactly the workflow we built it for. Each run is independent; we don't penalise re-submission of the same title.
Does it work for non-English papers?
The pre-check accepts abstracts in English, and we deliberately calibrated against peer-review platforms with significant non-native-English authorship to avoid penalising clear-but-imperfect English. If you have a paper drafted in another language, paste a translated abstract for the pre-check, then run the full review on the original PDF — our Clarity & Language agent handles translation considerations directly.
Why not just ask ChatGPT?
Three things: (1) our scorer is calibrated against real editorial outcomes, not generic 'is this good?' instinct; (2) the field-detection step routes your abstract to the rubric for that specific discipline; (3) the gap-detection step queries our 17,000-gap library and the 250M-work OpenAlex graph — ChatGPT can't do that. The pre-check is built for one job; ChatGPT is built for thousands.
What should I do after I get my verdict?
If Tier 1–2: submit to your target venue with confidence. If Tier 3: usually one focused revision pass closes the gap — the verdict screen shows the specific issues the scorer flagged. If Tier 4–5: consider running the full 8-agent review for line-level guidance, or look at the identified research gaps for repositioning angles.