How it works

How Science AI Journal reviews your paper

Eight specialised agents, one synthesis, one report — in under 15 minutes.

  • 8 AI agents
  • 23K training reviews
  • 900K library papers
  • < 15 min average review time

1. Upload

Drop a PDF (or DOCX converted to PDF) into the submission page. We accept manuscripts from any scientific discipline — the agents are calibrated across engineering, medicine, life sciences, physical sciences, computer science, mathematics, social science, environmental science, and economics.

  • Up to 50 pages; figures and tables are extracted and indexed.
  • Your file is stored encrypted; authors retain all copyright.
  • No account conversion funnel tricks — submit without a subscription.

2. Parallel pre-flight: prior publication detection

Before any review agent runs, we fan out in parallel to six sources — CrossRef, arXiv, medRxiv, bioRxiv, Unpaywall, and our own 900,000-paper FTS5 index — with a strict 12-second timeout per source.
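In sketch form, the fan-out looks like the asyncio pattern below. The source names and the `query_source` stub are placeholders for the real lookups; the per-source timeout behaviour — a slow source returns empty rather than stalling the pipeline — is the point of the sketch.

```python
import asyncio

SOURCES = ["crossref", "arxiv", "medrxiv", "biorxiv", "unpaywall", "local_fts5"]
TIMEOUT_S = 12  # strict per-source budget

async def query_source(name: str, title: str) -> list[str]:
    # Placeholder for a real HTTP or FTS5 lookup returning candidate matches.
    await asyncio.sleep(0)
    return []

async def preflight(title: str) -> dict[str, list[str]]:
    async def bounded(name: str) -> tuple[str, list[str]]:
        try:
            hits = await asyncio.wait_for(query_source(name, title), timeout=TIMEOUT_S)
        except asyncio.TimeoutError:
            hits = []  # a slow source never blocks the review
        return name, hits

    results = await asyncio.gather(*(bounded(s) for s in SOURCES))
    return dict(results)
```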

We look for fuzzy title and abstract overlap. Hits above 60% word-overlap are flagged high-confidence; editorial workflow routes those to a 30-second human check before any desk-reject email is sent.
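One minimal way to compute word overlap is a Jaccard-style set comparison; the production matcher may normalise more aggressively, but the thresholding logic is the same shape:

```python
def word_overlap(a: str, b: str) -> float:
    """Jaccard-style word overlap between two titles or abstracts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

HIGH_CONFIDENCE = 0.60  # hits above this go to the 30-second human check

def classify(candidate: str, submitted: str) -> str:
    score = word_overlap(candidate, submitted)
    return "high-confidence" if score > HIGH_CONFIDENCE else "low-confidence"
```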

3. Eight specialised agents, in sequence

Each agent carries only the rubric it needs, and only the calibration examples matching its domain. We tested a single monolithic reviewer prompt against this 8-agent pipeline on a held-out set of 1,000 human-reviewed papers. The agents matched human editorial decisions 83% of the time; the monolith matched 57%.

  • Methodology — study design, power, CONSORT/STROBE/PRISMA compliance.
  • Formulas & Equations — derivations, dimensional analysis, algebra.
  • Originality — overlap vs 900K-paper library + 250M-work OpenAlex.
  • Literature Coverage — missing seminal refs, over-reliance on self-citation.
  • Reproducibility — code availability, dataset access, methods sufficiency.
  • Clarity & Language — IMRaD adherence, hedging, undefined acronyms.
  • Figures & Tables — readability, colourblind safety, caption completeness.
  • Prior Publication — cross-checks vs six external sources in parallel.
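The sequential pipeline can be sketched as a list of per-domain reviewer functions run one after another. The stub agents and the 0–10 rubric scale here are assumptions for illustration; each real agent carries only its own rubric and calibration examples.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentReport:
    agent: str
    score: float                      # rubric score, 0-10 (scale is an assumption)
    comments: list[str] = field(default_factory=list)

def make_stub(name: str) -> Callable[[str], AgentReport]:
    # Stand-in for a real agent prompt scoped to one rubric.
    def review(manuscript: str) -> AgentReport:
        return AgentReport(agent=name, score=0.0)
    return review

AGENT_NAMES = [
    "methodology", "formulas", "originality", "literature",
    "reproducibility", "clarity", "figures", "prior_publication",
]
AGENTS = [make_stub(n) for n in AGENT_NAMES]

def run_pipeline(manuscript: str) -> list[AgentReport]:
    """Run the eight specialised agents in sequence, collecting their reports."""
    return [agent(manuscript) for agent in AGENTS]
```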

4. Synthesis

A ninth pass integrates every agent's report into a single editorial decision — accept, revise, reject — with a numeric score and a line-by-line reviewer report. The full report is published alongside the accepted paper under CC BY 4.0. Open access without open review is transparency theatre; we think the two ship together.
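As a sketch, the synthesis step folds per-agent scores into one decision. The mean aggregation and the accept/revise/reject cut-offs below are illustrative assumptions, not the journal's published thresholds:

```python
def synthesise(reports: list[tuple[str, float]]) -> tuple[str, float]:
    """Ninth pass: fold per-agent (name, score) pairs into one editorial decision.

    Assumed: a simple mean over agent scores and illustrative cut-offs.
    """
    overall = sum(score for _, score in reports) / len(reports)
    if overall >= 7.5:
        decision = "accept"
    elif overall >= 5.0:
        decision = "revise"
    else:
        decision = "reject"
    return decision, round(overall, 2)
```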

What we will not claim

We do not replace human peer review where the stakes demand it — drug trials, regulatory submissions, grant panels. We do not outperform a careful, well-resourced human reviewer on nuanced theoretical work. We do not generate novel scientific insight. We review.

For the 90% of submissions that need a competent, fast, transparent first pass before the world sees them, AI peer review at this quality bar is a strictly better default than waiting four months for a single human referee.

Frequently asked questions

How long does a review take?
Median end-to-end time from upload to full editorial decision is under 15 minutes. The longest step is prior-publication detection, which is network-bound on external APIs.
