AI peer review

AI peer review that's actually calibrated

8 specialised AI agents review your manuscript against standards derived from 23,000 real peer reviews across 15+ academic platforms. Full structured report in under 15 minutes.

8
Specialist agents
< 15 min
Turnaround
23K
Training reviews
6
Prior-pub sources

What 'AI peer review' actually means here

Most 'AI peer review' tools wrap a single LLM prompt. Ours decomposes the review into eight specialist agents — Methodology, Formulas & Equations, Originality, Literature Coverage, Reproducibility, Clarity & Language, Figures & Tables, Prior Publication — each with its own prompt calibrated against published rubrics (CONSORT, STROBE, PRISMA, field-specific style guides).

Each agent's prompt is seeded at inference time with 8–40 real peer reviews from our training corpus via FTS5 retrieval. That's the calibration step most tools skip: the model doesn't just know 'what a review looks like' in the abstract; it's shown concrete, field-matched examples of good reviews every time it runs.
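As an illustrative sketch of that retrieval step (the table name `reviews`, its columns, and the query shape are assumptions for illustration, not the product's actual schema), an FTS5 lookup that pulls field-matched example reviews to seed an agent prompt might look like:

```python
import sqlite3

# Hypothetical sketch of the calibration step described above: retrieve
# field-matched example reviews from an SQLite FTS5 index to seed an
# agent's prompt. Table and column names ("reviews", "field", "body")
# are assumptions, not the real schema.

def fetch_example_reviews(db_path: str, field: str, k: int = 8) -> list[str]:
    """Return up to k review bodies matching the manuscript's field."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            # FTS5: MATCH does the full-text search; ORDER BY rank
            # returns the best-matching reviews first.
            "SELECT body FROM reviews WHERE reviews MATCH ? "
            "ORDER BY rank LIMIT ?",
            (field, k),
        ).fetchall()
    finally:
        con.close()
    return [body for (body,) in rows]
```

The `LIMIT` parameter is where the 8–40 example window described above would be applied.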

How the review runs

Submit a PDF or paste a DOI / arXiv ID. The pipeline then runs:

  • A 12-second prior-publication check across CrossRef, Unpaywall, arXiv, medRxiv, bioRxiv, and a 900,000-paper institutional library, in parallel with the agent pass.
  • The 8 specialist agents, run in sequence, each against its calibrated rubric.
  • A synthesis step that integrates the specialist reports into a single structured review with an overall score.
  • Output delivered as both a readable report and a machine-readable JSON object (for integration into CI or writing tools).
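As a sketch of how the machine-readable output could plug into CI (the field names `overall_score`, `agents`, `findings`, and `severity` are illustrative assumptions, not the documented schema), a gate script might collect blocking findings like this:

```python
import json

# Hedged sketch of consuming the JSON report in a CI gate. The schema
# shown here (overall_score, agents, findings, severity) is assumed for
# illustration only.

def blocking_findings(report_json: str, min_score: float = 3.0) -> list[str]:
    """Collect findings that should block a submission pipeline."""
    report = json.loads(report_json)
    blockers = []
    # Gate on the synthesis step's overall score first.
    if report["overall_score"] < min_score:
        blockers.append(f"overall score {report['overall_score']} < {min_score}")
    # Then surface any major finding from the specialist agents.
    for agent in report["agents"]:
        for finding in agent.get("findings", []):
            if finding.get("severity") == "major":
                blockers.append(f"{agent['name']}: {finding['summary']}")
    return blockers
```

A CI job would fail the build whenever this list is non-empty, which is the "should I keep polishing or submit now?" call made mechanical.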

When to use it

The tool is most useful in three situations:

  • Pre-submission triage — 15 minutes before you send a draft to a closed journal or conference.
  • Revision planning — after you get an R&R from a journal, re-run to see what the 'second reviewer' would flag.
  • Self-training — PhD students and early-career researchers use it to internalise what referees look for in their field.

Pricing

We charge per submission, not per seat. The free tier gives you pre-submission scoring (Tier 1-5) + abbreviated reports — enough for the 'should I keep polishing or submit now?' call. Paid submissions unlock the full 8-agent report with per-section feedback, prior-publication evidence, and the JSON export.

The working scholar tier is explicitly cheaper than a conference registration; the free tier is permanent and doesn't require a card.

Frequently asked questions

Does this replace journal peer review?

No. It's a fast first pass: the equivalent of asking a thorough colleague to read your draft before you send it anywhere official. Most drafts headed for formal review benefit from an AI pre-pass, and most authors discover at least three genuine fixes.
Try the free pre-check
Read the engineering blog
