Frequently Asked Questions

The most-asked questions about AI peer review, pricing, methodology, authorship, and publication — answered in one place.

Submission & Review Process

How long does AI peer review actually take?
Median end-to-end time from upload to full editorial decision is under 15 minutes. The longest step is prior-publication detection, which fans out in parallel to six sources (five external: CrossRef, arXiv, medRxiv, bioRxiv, Unpaywall; plus our 900,000-paper local index) with a strict 12-second timeout per source. The eight specialised review agents themselves run in roughly 8–12 minutes total.
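The fan-out-with-timeout pattern can be sketched as follows. This is a minimal illustration, not the journal's actual client code: the fetcher functions are stand-ins, and the demo shortens the timeout so it runs instantly.

```python
import asyncio

async def check_source(name, fetch, timeout):
    # Query one source; a source that exceeds the per-source timeout is
    # treated as "no match found" rather than blocking the whole pipeline.
    try:
        return name, await asyncio.wait_for(fetch(), timeout=timeout)
    except asyncio.TimeoutError:
        return name, None

async def detect_prior_publication(fetchers, timeout=12.0):
    # Fan out to every source in parallel; total wall time is bounded by
    # the slowest source, capped at `timeout` seconds.
    tasks = [check_source(name, fetch, timeout) for name, fetch in fetchers.items()]
    return dict(await asyncio.gather(*tasks))

# Demo with stand-in fetchers and a shortened timeout:
async def _fast():
    return [{"title": "Possible prior match"}]

async def _slow():
    await asyncio.sleep(60)  # simulates an unresponsive source
    return [{"title": "Too late"}]

results = asyncio.run(detect_prior_publication(
    {"crossref": _fast, "arxiv": _slow}, timeout=0.2))
```

With a real 12-second timeout and six sources, the detection step finishes in at most ~12 seconds regardless of how many sources stall.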
What file formats does the journal accept?
PDF, DOC, DOCX, and LaTeX submissions are accepted. There is no strict template — manuscripts should include the standard sections (title, abstract, introduction, methodology, results, discussion, references). Manuscripts up to 50 pages are processed; figures and tables are extracted and indexed automatically.
Can I submit work-in-progress for a pre-check before formal submission?
Yes. The free Pre-Submission Scorer (https://scienceaijournal.com/scorer) takes only a title, abstract, and 3–8 keywords and returns a Tier 1–5 acceptance probability calibrated against real academic acceptance rates, plus the detected scientific field and identified research gaps. No signup required, unlimited use.
What article types are accepted?
Eight types: Original Research, Review Articles, Technical Reports, Benchmarks & Datasets, Case Studies, Negative Results, Theses & Dissertations, and Perspective/Commentary. Negative results and benchmark-only papers are explicitly welcomed — the goal is reducing publication bias, not filtering for novelty alone.

Pricing & Access

Is there a free tier?
Yes. Pre-submission scoring (the Tier 1–5 verdict on title + abstract) is free and unlimited. Full peer review of complete manuscripts is paid, but the first review is free for verified academic emails (.edu, .ac.uk, .ac.jp, etc.). Fee waivers are available for authors from low-income countries.
How much does a full AI peer review cost?
A Researcher-plan subscription at $15/month unlocks unlimited full reviews. Alternatively, a $10 Credit Pack covers one-off use without commitment. There is no per-page or per-figure surcharge.
Are accepted papers truly open access?
Yes. All accepted papers are published under CC BY 4.0, which permits unrestricted redistribution and reuse with attribution. There are no paywalls, no embargo periods, and no premium-tier articles. Full agent reports are published alongside each accepted paper.
Do you offer institutional subscriptions?
Yes — institutional plans cover unlimited submissions for everyone with an email at the affiliated domain, with a custom URL prefix and aggregated billing. Contact [email protected] for terms.

AI Methodology & Quality

How were the AI reviewers trained?
The reviewing agents are calibrated (not retrained) on 23,000 real peer reviews scraped from 15+ open-review platforms, including OpenReview, eLife, SciPost, PLOS ONE, BMJ Open, Nature Communications, PeerJ, and F1000Research. For each submission, every agent retrieves the 8–40 most relevant examples from this corpus via FTS5 full-text retrieval (retrieval-augmented generation), ensuring context-specific review behaviour.
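A minimal sketch of FTS5-based retrieval, using SQLite's built-in FTS5 module. The table schema and the three sample reviews are invented for illustration; the real corpus holds 23,000 reviews.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE reviews USING fts5(venue, text)")
con.executemany("INSERT INTO reviews VALUES (?, ?)", [
    ("OpenReview", "The ablation study lacks a proper baseline comparison"),
    ("eLife", "Sample size justification is missing from the methods section"),
    ("PLOS ONE", "Figure colour scales are unreadable when printed in grayscale"),
])

def retrieve_examples(query, k=8):
    # Rank stored reviews by BM25 relevance to the submission's text and
    # return the top-k as few-shot context for an agent's prompt.
    rows = con.execute(
        "SELECT venue, text FROM reviews WHERE reviews MATCH ? "
        "ORDER BY rank LIMIT ?", (query, k))
    return rows.fetchall()

hits = retrieve_examples("baseline ablation")
```

The retrieved examples are injected into the agent's prompt, so a methodology agent reviewing an ablation-heavy ML paper sees real referee comments about ablations, not generic boilerplate.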
Which large language models do the reviewing?
Anthropic Claude (Sonnet for methodology and literature analysis; Haiku for language, figures, and prior-publication checks) is the primary provider, with a local Ollama deployment as fallback. Provider and model are chosen per-agent to balance review quality against compute cost.
How accurate is the AI compared to human reviewers?
On a held-out set of 1,000 papers with known human editorial outcomes, our 8-agent pipeline matched human accept/revise/reject decisions 83% of the time. A single monolithic reviewer prompt on the same set matched 57%. We do not claim to outperform a careful, well-resourced human referee on nuanced theoretical work — but for the 90% of submissions that need a competent first-pass review, AI peer review at this quality bar is strictly better than waiting four months for a single human reviewer.
What if I disagree with the AI's review?
Every agent's individual report is visible and downloadable in full. If you spot a factual error or a misreading, you can submit a rebuttal during the revision phase; the synthesis agent then re-evaluates with your rebuttal as additional context. Authors retain the right to withdraw at any time before publication.

Authorship & Ethics

Can AI be listed as a co-author?
Manuscripts authored by humans, AI systems, or human-AI collaborations are welcome — but authorship requires disclosure. The journal requires an AI Contribution Statement specifying which parts of the research involved AI assistance and which tools were used. AI systems can be cited and credited, but cannot be the sole responsible party for the research conclusions.
Will my manuscript be used to train AI models?
No. Submitted manuscripts are never added to any training or calibration corpus. The calibration corpus is exclusively the 23,000 publicly available peer reviews scraped from open-review platforms. Author manuscripts are stored encrypted, used only for the requested review, and excluded from the calibration pipeline.
How is plagiarism and prior publication detected?
Two layers run in parallel before any review agent starts. (1) Title and abstract similarity is checked against CrossRef, arXiv, medRxiv, bioRxiv, Unpaywall, and our 900,000-paper local FTS5 index; hits above 60% word overlap are flagged as high-confidence. (2) The Originality agent compares the full manuscript content against the same 900K-paper library plus the 250M-work OpenAlex graph. A flag from either layer triggers a 30-second human editorial check before any desk-reject email is sent.
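One plausible reading of the 60% word-overlap rule is a set-overlap ratio over the distinct words of two titles or abstracts. This sketch is an assumption about the metric, not the production implementation:

```python
def word_overlap(a: str, b: str) -> float:
    # Fraction of the smaller text's distinct words that also appear in
    # the other text (an assumed definition of "word overlap").
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / min(len(wa), len(wb))

def high_confidence_flag(text_a: str, text_b: str, threshold: float = 0.60) -> bool:
    # Flag a candidate match for human editorial review when overlap
    # exceeds the threshold.
    return word_overlap(text_a, text_b) > threshold

flagged = high_confidence_flag(
    "Deep learning for protein folding prediction",
    "Deep learning for protein structure prediction")
```

A real system would also normalise punctuation and stopwords; the point is that the threshold only raises a flag — a human makes the desk-reject call.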
What ethical standards apply to research?
Research with human participants requires IRB approval; animal studies must comply with IACUC guidelines. Fabrication, falsification, or data manipulation are grounds for rejection or post-publication retraction. The journal participates in COPE-aligned investigation procedures for any reported research integrity concerns.

Publication & Citation

Will my published paper get a DOI?
DOI assignment via CrossRef is on the roadmap for accepted papers; until then, every paper has a permanent canonical URL at /papers/[id] with full ScholarlyArticle JSON-LD metadata that citation managers (Zotero, Mendeley, EndNote) auto-import correctly. Once DOI registration is live, all previously published papers will be retroactively assigned identifiers.
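The embedded metadata is a schema.org ScholarlyArticle JSON-LD block. A minimal example of the kind of block a paper page carries, with every field value a placeholder rather than a real paper:

```python
import json

# Placeholder values; a real paper page fills these from the accepted paper.
metadata = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "headline": "Example Paper Title",
    "author": [{"@type": "Person", "name": "A. Researcher"}],
    "datePublished": "2025-01-01",
    "url": "https://scienceaijournal.com/papers/example-id",
    "license": "https://creativecommons.org/licenses/by/4.0/",
}
snippet = '<script type="application/ld+json">%s</script>' % json.dumps(metadata)
```

Citation managers such as Zotero read this block straight from the page source, which is why imports work without a DOI.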
How do I cite a paper published in Science AI Journal?
Each paper page includes a 'Cite this paper' block with BibTeX, RIS, and APA-formatted citations. Until DOIs are issued, the canonical URL serves as the persistent identifier. Citation managers retrieve all metadata automatically from the page's structured data.
Can I withdraw my paper after submission?
Yes, before acceptance. After acceptance and publication, papers can be retracted following standard COPE retraction procedures, but they cannot be withdrawn from the public record — that's a core consequence of the open-access policy. A retraction notice replaces the paper, but the original canonical URL (and DOI, once issued) remains resolvable.
Are reviewer reports also published with the paper?
Yes. Every accepted paper publishes with all eight agent reports attached, under the same CC BY 4.0 licence. This is a core policy, not an opt-in. Open access without open review is transparency theatre — we believe the two ship together.

Didn't find your answer? See also: Author Guidelines, Pricing, Methodology, or contact us.
