I Check Every Reference Before Submission — Here's My Exact Workflow

Dr. Emily Carter · 5 hours ago

I've been publishing in peer-reviewed journals for over a decade. Biomedical informatics, machine learning applications in clinical research, cross-disciplinary work that pulls from dozens of subfields. My reference lists typically run 50–80 items. And I've learned, through painful experience, that a single broken reference can delay publication by weeks.

Three years ago, a Q1 journal desk-rejected a manuscript I'd spent six months on. The reason wasn't the methodology or the findings — it was two references in the bibliography that didn't match their DOIs. One had the wrong publication year. The other pointed to a completely different paper. The editor's note was blunt: "Please verify all references and resubmit."

That rejection cost me a revision cycle and, because another group published similar findings in the interim, probably cost me the priority claim. Since then, I check every single reference before submission. No exceptions. Here's the workflow I've developed.

Why Reference Errors Are More Common Than You Think

Most researchers assume their reference list is correct because they used a citation manager. Zotero, Mendeley, EndNote — these tools are excellent for organizing references, but they're not infallible.

Common ways errors creep in:

Citation manager sync issues. You import a paper from Google Scholar, and the metadata is slightly wrong — a missing middle initial, an abbreviated journal name that doesn't match the publisher's official title, a preprint year instead of the publication year. The citation manager stores what it receives without verification.

Co-author contributions. In multi-author papers, different people add references. A postdoc pastes in a citation from memory. A collaborator uses a different citation manager with different metadata sources. A student adds references suggested by ChatGPT — and some of those references don't exist at all.

Copy-paste between manuscripts. You reuse a literature review section from a grant proposal or a previous paper. The references were correct in the original context but now have stale DOIs, retracted papers, or formatting that doesn't match the new target journal's style.

AI writing assistance. Even if you don't use AI to generate references, you might use it to help draft paragraphs that include inline citations. The AI sometimes silently modifies the citation — changing the year, swapping an author name — in ways that look correct at a glance.

I've encountered every one of these in my own manuscripts or in papers I've reviewed. They're not signs of carelessness; they're symptoms of a modern multi-tool, multi-author writing workflow.

My Pre-Submission Reference Check Workflow

I do this for every manuscript, typically two days before submission. Not the night before — I learned that lesson too.

Step 1: Export the reference list as plain text

I copy the full bibliography from my manuscript into a clean text file. No formatting, no embedded Zotero fields — just the raw text of each reference as it will appear in the submitted PDF.

Why plain text? Because I want to check what the reviewer sees, not what my citation manager thinks is there. If there's a sync error between Zotero and the exported bibliography, this catches it.
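If you want to script this prep step, here's a minimal sketch (my own illustration, not part of any particular tool; the references.txt file is hypothetical) that splits the exported text into entries and pulls out whatever DOIs it can find:

```python
# Minimal sketch: split an exported plain-text bibliography into entries and
# extract DOIs for later verification. Assumes blank-line-separated entries;
# adjust the split for your journal's layout.
import re

DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+")

def parse_bibliography(path: str) -> list[dict]:
    raw = open(path, encoding="utf-8").read()
    chunks = [c.strip() for c in re.split(r"\n\s*\n", raw) if c.strip()]
    entries = []
    for chunk in chunks:
        m = DOI_RE.search(chunk)
        # Strip trailing punctuation that often clings to DOIs in bibliographies.
        entries.append({"text": chunk, "doi": m.group(0).rstrip(".,;") if m else None})
    return entries

for entry in parse_bibliography("references.txt"):
    if entry["doi"] is None:
        print("No DOI found:", entry["text"][:60], "...")
```

Entries with no DOI at all are worth flagging early: books and older papers legitimately lack them, but a recent journal article without one deserves a second look.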

Step 2: Automated DOI and metadata verification

I paste the entire reference list into Citely's Citation Checker. The tool parses each reference, looks up the DOI in CrossRef, and compares the metadata field by field — author names, title, journal, year, volume, pages.

[Image: Running a reference check before submission]

This step typically takes less than a minute for a 60-reference list. It catches:

  • DOIs that don't resolve (fabricated or mistyped)
  • DOIs that resolve to a different paper than described
  • Wrong publication years (the single most common error I see)
  • Author name mismatches
  • Journal title discrepancies

On average, I find 2–4 issues in every manuscript. Not because I'm sloppy — because the modern academic writing pipeline has too many points of failure for manual checking to catch everything.
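If you're curious what that automated comparison looks like under the hood, here's a minimal sketch against CrossRef's public REST API. This is not Citely's implementation, just the general idea; the DOI and expected values below are hypothetical.

```python
# Minimal sketch of a DOI check against CrossRef's public REST API.
import requests

def crossref_record(doi: str) -> dict | None:
    """Fetch CrossRef metadata for a DOI; None if it doesn't resolve."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.json()["message"] if resp.status_code == 200 else None

def check_reference(doi: str, expected_title: str, expected_year: int) -> list[str]:
    """Compare a reference's claimed title and year against the registered metadata."""
    record = crossref_record(doi)
    if record is None:
        return [f"{doi}: DOI does not resolve (mistyped or fabricated?)"]
    problems = []
    registered_title = (record.get("title") or [""])[0]
    if expected_title.lower() not in registered_title.lower():
        problems.append(f"{doi}: resolves to a different title: {registered_title!r}")
    registered_year = record["issued"]["date-parts"][0][0]  # [year, month, day]
    if registered_year != expected_year:
        problems.append(f"{doi}: CrossRef says {registered_year}, not {expected_year}")
    return problems

print(check_reference("10.1234/hypothetical", "Deep learning for sepsis prediction", 2023))
```

The works endpoint returns the registered metadata as JSON, so a field-by-field comparison is mostly a matter of normalizing strings before comparing.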

Step 3: Manual review of flagged references

The automated check flags references that need attention. For each flagged item, I:

  1. Open the DOI link in a browser to see what paper it actually points to
  2. Compare the paper I intended to cite with the paper the DOI resolves to
  3. Fix the discrepancy — usually by correcting the year, updating the DOI, or replacing the reference entirely

Most fixes are trivial — a wrong year, a missing page number. But once or twice per year, I find a reference that I can't verify at all. The paper doesn't seem to exist. This has always traced back to a co-author adding a reference from an AI tool without checking it. We have a lab policy about this now.

Step 4: Check for retracted papers

This step is separate from DOI verification. A paper can have a valid DOI and correct metadata but still have been retracted.

I spot-check my highest-stakes references — the ones my argument depends on — against Retraction Watch. For the full list, I check CrossRef's metadata, which includes retraction notices for many publishers.

Citing a retracted paper isn't just embarrassing; in some fields, it's grounds for editorial concern about the integrity of your own work.
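For the scriptable part, CrossRef's REST API exposes editorial updates (retractions, corrections, errata) via an updates filter. A sketch, with the usual caveat that publisher coverage is incomplete, which is exactly why Retraction Watch is still worth checking for the high-stakes references:

```python
# Sketch: ask CrossRef for editorial updates (retraction, correction, ...)
# registered against a DOI. Coverage varies by publisher, so treat an empty
# result as "no update registered", not "definitely not retracted".
import requests

def editorial_updates(doi: str) -> list[str]:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}", "rows": 20},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Each updating record (e.g. a retraction notice) lists what it updates.
    return [
        update.get("type", "unknown")
        for item in items
        for update in item.get("update-to", [])
        if update.get("DOI", "").lower() == doi.lower()
    ]

# editorial_updates("10.1234/hypothetical")  ->  e.g. ["retraction"]
```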

Step 5: Verify formatting against the target journal

This is the boring step, but editors care about it. Each journal has its own reference style: author name format, journal abbreviation rules, DOI display format, and the threshold at which author lists collapse to "et al."

I compare 3–4 references against the journal's published papers to make sure my formatting matches. If the journal uses abbreviated journal names, I check that every abbreviation follows the ISO 4 standard (or whatever the journal specifies).

I don't use an automated tool for this step — I find it faster to eyeball-match against a recent issue. But I know some colleagues use Scribbr or similar formatting checkers here.

Step 6: Final read-through of in-text citations

The last step: I read through the manuscript and check that every in-text citation (Smith et al., 2024) matches an entry in the bibliography, and vice versa. Orphan citations — references listed in the bibliography but never cited in the text — are a red flag for reviewers. So are in-text citations that don't appear in the reference list.

Most word processors flag these mismatches if you use their citation tools correctly. But if you've done any manual editing of citations (which I always end up doing for formatting reasons), this step catches what the software misses.
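For author-year styles, the cross-check itself is easy to script. A rough sketch, assuming "(Smith et al., 2024)" style citations and the hypothetical manuscript.txt / references.txt files from earlier; numeric citation styles would need a different pattern entirely:

```python
# Rough sketch of an author-year citation cross-check.
import re

CITE_RE = re.compile(r"\(([A-Z][\w-]+)(?: et al\.)?,\s*(\d{4})[a-z]?\)")
BIB_RE = re.compile(r"([A-Z][\w-]+),.*?\((\d{4})\)")

def cited_keys(text: str) -> set[tuple[str, str]]:
    """(first-author surname, year) pairs cited in the manuscript body."""
    return set(CITE_RE.findall(text))

def bibliography_keys(lines: list[str]) -> set[tuple[str, str]]:
    """Crude (surname, year) extraction from each bibliography entry."""
    return {m.groups() for m in map(BIB_RE.search, lines) if m}

text_keys = cited_keys(open("manuscript.txt", encoding="utf-8").read())
ref_keys = bibliography_keys(open("references.txt", encoding="utf-8").read().splitlines())

print("Cited in text but missing from bibliography:", text_keys - ref_keys)
print("In bibliography but never cited (orphans):", ref_keys - text_keys)
```

It's deliberately crude; surname-plus-year keys will collide for prolific authors, so treat the output as a list of things to eyeball, not a verdict.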

How Long This Takes

For a typical manuscript with 50–70 references:

Step                            | Time
--------------------------------|---------------
Export and prep                 | 5 minutes
Automated verification (Citely) | 1–2 minutes
Manual review of flagged items  | 10–20 minutes
Retraction check                | 5–10 minutes
Formatting verification         | 15–20 minutes
In-text citation cross-check    | 10–15 minutes
Total                           | 45–75 minutes

About an hour of work. Compare that with the weeks or months lost to a desk rejection, and the calculus is obvious.

Before I started using automated tools for Step 2, the manual DOI verification alone took 2–3 hours. Checking each DOI by hand on doi.org, comparing metadata against CrossRef records, documenting discrepancies — it was thorough but unsustainable. The automated step reduced the total workflow from half a day to about an hour.

What I Tell My Graduate Students

I supervise a research lab, and every student who submits a paper goes through reference verification training. The rules are simple:

  1. Never submit a paper with references you haven't verified. Not even to a workshop. Not even a preprint. Your name is on it.

  2. If you used AI to help write any part of the paper, verify every reference twice. AI tools generate plausible-looking citations that cite real authors and real journals but combine them incorrectly. These chimera references are almost impossible to catch by eye.

  3. Run the full reference list through an automated checker before showing me the manuscript. I shouldn't be the one catching DOI errors — that's a solved problem in 2026.

  4. Keep your citation manager clean. When you import a reference, verify the metadata against the publisher's page. It takes 30 seconds per reference when you do it at import time. It takes much longer to fix a dirty library before a deadline.

  5. Don't cite papers you haven't read. This sounds obvious, but the temptation to pad a reference list with "relevant-sounding" papers is real, especially for literature review sections. If you can't summarize the paper's main argument, don't cite it.

A Note on Multi-Author Manuscripts

The reference verification problem compounds with every additional author. In a paper with five co-authors from three institutions, each person might use a different citation manager, different metadata sources, and different habits around AI tools.

What works for our multi-author projects:

  • One person owns the reference list. Usually the corresponding author or whoever manages the manuscript file. All reference additions go through them.
  • Reference verification happens after the final version is assembled, not after each individual contribution. Checking intermediate drafts is wasted effort.
  • We share the Citely verification report with all co-authors before submission so everyone can review the flagged items and confirm their contributions are clean.

Common Errors I've Caught Over the Years

To give you a sense of what reference verification actually finds in practice:

Wrong year (most common). A 2023 paper cited as 2024, or vice versa. Usually caused by confusion between online-first and print publication dates. Journals that use "advance access" publishing are especially prone to this.
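You can see exactly where this confusion comes from in the CrossRef record itself, which often carries both dates (hypothetical DOI again):

```python
# Sketch: a CrossRef record can carry both an online-first and a print date.
# 'issued' is the earliest known date, which is generally the one to cite.
import requests

msg = requests.get("https://api.crossref.org/works/10.1234/hypothetical",
                   timeout=10).json()["message"]
for field in ("published-online", "published-print", "issued"):
    if field in msg:
        print(field, msg[field]["date-parts"][0])
# Typical off-by-one case: published-online [2023, 11], published-print [2024, 2].
```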

Author name substitution. "Kim, J." instead of "Kim, S." — a single initial difference. This happens when citation managers pull metadata from different sources (CrossRef vs. Google Scholar vs. PubMed) that record author names differently.

Journal title mismatch. "Journal of Machine Learning Research" vs. "Journal of Machine Learning" — close but not the same journal. The second one doesn't exist. This is a classic AI hallucination pattern.

DOI pointing to wrong paper. The DOI is valid but belongs to a different paper in the same journal. Usually caused by a copy-paste error where the DOI from one reference was accidentally assigned to another.

Retracted paper cited without notice. Twice I've caught references to papers that were retracted after we added them to our bibliography. In both cases, the retraction happened during the writing period and we hadn't noticed.

Key Takeaways

  • Reference errors are endemic in modern academic writing — multi-author workflows, citation manager sync issues, and AI writing tools all introduce mistakes that are hard to catch manually
  • A systematic pre-submission reference check takes about an hour for a 60-reference paper and can prevent weeks of delay from desk rejections
  • Automated verification tools like Citely reduce the manual DOI-checking step from hours to minutes by batch-comparing references against CrossRef records
  • The most common errors are wrong publication years, author name substitutions, and journal title mismatches — all detectable by automated metadata comparison
  • For multi-author papers, designate one person to own the reference list and run verification after the final version is assembled

👉 Check your reference list before your next submission