AI Article Writer — research gap to submission-ready manuscript
Most AI writing tools generate fluent prose and invent the citations. The Article Writer does the opposite: a 9-step wizard that starts from a real research gap, runs your real data, grounds every citation against a 900,000-paper library, and re-writes each section until it passes an 8-agent reviewer gauntlet. Output is a compilable LaTeX / DOCX / PDF bundle.
What it does
The Article Writer is a guided wizard, not a chat box. It walks a researcher from "I have a research question" to "I have a manuscript a reviewer would take seriously" — and it refuses to skip the steps that make a paper credible.
Every draft is built on three foundations a generic LLM writer skips: a real research gap as the question, your real data as the evidence, and the target journal's actual house style as the voice. The drafting itself is done by section-aware writers that target evidence-based length norms for your discipline, and every claim that needs a citation is matched against our 900,000-paper library rather than hallucinated.
- Starts from a research gap — yours, or one of 17,000+ evidence-anchored gaps in our library.
- Runs a real Python analysis on uploaded data: tables, figures, and the statistical narrative.
- Grounds citations against a 900,000-paper institutional library — no invented DOIs.
- Matches the target journal's voice via section-aware few-shot examples from real papers in that venue.
- Re-writes each section until it scores 10/10 from all 8 HAKEM review agents.
- Exports a compilable LaTeX bundle, a Pandoc DOCX, and a compiled PDF — plus a cover letter.
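The citation-grounding step above can be sketched in a few lines. This is a minimal illustration, not the product's real API: the function names, record shapes, and matching rule (exact DOI, else normalized-title lookup) are assumptions. The point it demonstrates is the policy — a claimed reference either matches a library record or is rejected, never invented.

```python
# Illustrative sketch of citation grounding against a local library index.
# All names and data shapes here are hypothetical.

def normalize(title: str) -> str:
    """Lowercase and strip punctuation so near-identical titles compare equal."""
    return "".join(ch for ch in title.lower() if ch.isalnum() or ch.isspace()).strip()

def ground_citations(claimed, library):
    """Partition claimed references into (grounded, rejected).

    claimed: list of dicts with 'title' and optional 'doi'.
    library: dict mapping DOI -> record with a 'title' field.
    """
    title_index = {normalize(rec["title"]): rec for rec in library.values()}
    grounded, rejected = [], []
    for ref in claimed:
        rec = library.get(ref.get("doi")) or title_index.get(normalize(ref["title"]))
        (grounded if rec else rejected).append(ref)
    return grounded, rejected

library = {"10.1000/example.1": {"title": "A Real Paper in the Library"}}
claimed = [
    {"title": "A Real Paper in the Library", "doi": "10.1000/example.1"},
    {"title": "A Hallucinated Study", "doi": "10.9999/fake"},
]
grounded, rejected = ground_citations(claimed, library)
print(len(grounded), len(rejected))  # → 1 1
```

A real matcher would of course fuzzy-match and disambiguate; the sketch only shows the reject-by-default contract.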
How it works (9 steps)
The wizard is sequential by design — each step constrains the next, which is what keeps the draft honest. Here is the full flow:
- Step 1. Pick a department and subdiscipline — Start by telling the wizard your field. It loads a discipline manifest — the field-specific questions, reporting standards, and section conventions a reviewer in that subdiscipline expects.
- Step 2. Choose a research gap — Browse our library of 17,000+ evidence-anchored research gaps, or bring your own. The gap becomes the manuscript's research question and frames every downstream section.
- Step 3. Answer discipline-specific details — The wizard asks the questions a methodology reviewer in your field would ask up front — study design, population, instrument, comparison — so the draft is grounded before a word is written.
- Step 4. Run the analysis — Upload your real data (CSV, spreadsheet, or in-wizard entry) and a Python analysis runner produces the tables, figures, and statistical narrative. No real data yet? It can scaffold a synthetic analysis you replace later.
- Step 5. Set the target journal and title — Autocomplete across 1,214 indexed venues. The writer pulls a per-journal voice fingerprint — section-aware few-shot examples from real papers in that journal — so the draft reads like it belongs there.
- Step 6. Add co-authors and draft every section — Add and order co-authors, then the section writers draft each part against your discipline's evidence-based length targets, with citations grounded against the 900,000-paper library — not invented.
- Step 7. Pass the reviewer gauntlet — Each section is re-written until it scores 10/10 from every one of the 8 HAKEM review agents — methodology, originality, literature, reproducibility, clarity, figures, formulas, prior publication. The auto-loop runs this for you.
- Step 8. Export the bundle — Download a compilable LaTeX bundle (the canonical deliverable), a Pandoc DOCX, or a compiled PDF — plus a cover letter and a review-ready submission package.
- Step 9. Review the submission preview — The wizard detects the target journal's submission portal and lays out the exact step sequence — flagging every irreversible action — before anything is sent.
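The shape of the Step 4 analysis runner — real data in, tables and a statistical narrative out — can be sketched with the standard library. Everything here is an assumption for illustration (column names, the narrative template, the summary statistics chosen); the real runner produces full tables and figures, not one sentence.

```python
# Hedged sketch of a Step 4-style analysis runner: read a CSV, compute
# summary statistics, and emit a statistical-narrative sentence.
# Column names and the narrative template are illustrative assumptions.
import csv
import io
import statistics

def summarize(csv_text: str, outcome: str) -> dict:
    """Summarize one numeric outcome column from CSV text."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    values = [float(r[outcome]) for r in rows]
    return {"n": len(values),
            "mean": statistics.mean(values),
            "sd": statistics.stdev(values)}

def narrative(stats: dict, outcome: str) -> str:
    """Render the summary as a Methods-style sentence."""
    return (f"Across {stats['n']} observations, {outcome} averaged "
            f"{stats['mean']:.2f} (SD = {stats['sd']:.2f}).")

data = "participant,score\n1,3.0\n2,4.0\n3,5.0\n"
print(narrative(summarize(data, "score"), "score"))
# → Across 3 observations, score averaged 4.00 (SD = 1.00).
```

The synthetic-scaffold fallback mentioned in Step 4 would feed generated rows through the same path, clearly labelled for later replacement.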
What makes it different — the reviewer gauntlet
The core idea: don't write a draft and then review it. Write each section so it already passes review.
After a section is drafted, it is scored by the same 8 specialist agents that power our AI Review tool — Methodology, Originality, Literature Coverage, Reproducibility, Clarity & Language, Figures & Tables, Formulas & Equations, and Prior Publication. If any agent scores below 10/10, the section is re-written with that agent's feedback as the constraint, and re-scored. The auto-loop runs this cycle for you until every agent is satisfied. The result is a draft that has already survived the kind of scrutiny it will face at a journal — before you ever submit it.
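The auto-loop described above reduces to a simple control structure. This is a sketch of the idea only — the agent names mirror the list above, but the `score` and `rewrite` callables, the weakest-agent-first strategy, and the round budget are all illustrative assumptions, not the product's implementation.

```python
# Sketch of the Step 7 auto-loop: re-draft a section until every review
# agent scores it 10/10, using the weakest agent's feedback as the next
# rewrite constraint. score/rewrite are caller-supplied stand-ins.
AGENTS = ["methodology", "originality", "literature", "reproducibility",
          "clarity", "figures", "formulas", "prior_publication"]

def review_loop(section, score, rewrite, max_rounds=10):
    """Iterate until all agents return 10, or the round budget is spent.

    score(agent, section)  -> int in 1..10
    rewrite(section, agent, current_score) -> improved section
    """
    for _ in range(max_rounds):
        scores = {agent: score(agent, section) for agent in AGENTS}
        if all(s == 10 for s in scores.values()):
            return section, scores
        worst = min(scores, key=scores.get)  # constrain on the weakest agent
        section = rewrite(section, worst, scores[worst])
    return section, scores
```

With a toy model where each rewrite raises every score by one, a section starting at 7/10 converges after three rewrites; the real loop stops on unanimous 10s the same way.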
What it exports
The deliverable is a real submission package, not a block of text to copy-paste:
- Compilable LaTeX bundle — the canonical deliverable, ready for Overleaf or a local TeX build.
- Pandoc DOCX — for journals and co-authors who work in Word with tracked changes.
- Compiled PDF — the camera-ready artifact for a quick read-through.
- Cover letter — drafted to the target journal, summarising the contribution.
- Review-ready package — the manuscript plus the artifacts a portal will ask for.
- Submission preview — the detected portal's exact step sequence, with irreversible actions flagged.
When to use it
The Article Writer is most useful at four moments:
- You have results and a gap, but a blank-page problem — the wizard gives the draft a credible spine in an afternoon.
- You're an early-career researcher learning your field's section conventions — the discipline manifest makes the implicit norms explicit.
- You're targeting a specific journal and want the draft to read like it belongs there — the per-journal voice fingerprint handles house style.
- You want a draft that has already passed an 8-agent review before a human co-author or editor sees it.
What it does NOT do
Explicit limitations, because a writing tool that overpromises is worse than one that's honest:
- It does not invent data. If you don't upload real data, the analysis is a clearly labelled synthetic scaffold you must replace before submission.
- It does not replace you as the author. You remain responsible for every claim, and you must disclose AI assistance per your target journal's policy — most now require it.
- It is not a one-click 'paper generator'. The wizard is sequential and asks for real inputs at every step; a credible manuscript still needs your judgement.
- It does not bypass peer review. The reviewer gauntlet is calibrated against real reviews, but it is a rehearsal, not a substitute for a journal's editorial process.
- It does not auto-submit without you. The final step is a preview that flags every irreversible action — submission is your explicit decision.