
Is Using AI for Manuscript Review Ethical?

By Ray W. Shiraishi, Ph.D.

Researchers keep asking me this question: is AI manuscript review ethical? The answer depends on what "use AI" actually means in context, what journal policies say, and where we draw the line between legitimate assistance and academic dishonesty.

What Journal AI Policies Actually Say

Most major publishers have issued guidance on AI use in research and manuscript preparation. The common thread: AI as a tool is fine. AI as an author is not.

The ICMJE (International Committee of Medical Journal Editors), whose guidelines influence thousands of biomedical journals, states that AI tools cannot be listed as authors because they cannot be held accountable for the work. But the ICMJE explicitly permits authors to use AI for tasks like language editing. And it does not prohibit using AI for pre-submission feedback, provided the use is disclosed.

Springer Nature, Elsevier, and Wiley have adopted similar positions. AI can assist with manuscript preparation, but authors bear full responsibility for the content, and AI use should be transparently disclosed. None of these policies prohibit obtaining AI-generated feedback on a draft before submission.

The key principle is accountability. You, as the author, are responsible for every claim in your paper. Using a tool that helps you identify weaknesses does not change that responsibility. It may actually help you fulfill it more thoroughly.

AI Pre-Submission Review vs. Ghostwriting

Here's the issue: people conflate two very different things. Using AI to generate manuscript content and using AI to critique content you already wrote are fundamentally different activities with different ethical implications.

Having an AI tool draft sections of your paper (the introduction, methods, or discussion) raises legitimate concerns about intellectual contribution and authorship integrity. If the ideas and their expression come from an AI, the question of who actually "wrote" the paper gets murky fast.

Getting AI-generated feedback on your draft is a different matter entirely. This is analogous to asking a colleague to read your paper and point out weaknesses, or sending it to a statistical consultant for methodology review. You wrote the paper. The tool identifies areas for improvement. You decide what changes to make. The intellectual contribution remains yours.

The Existing Landscape of Accepted Tools

To evaluate AI-assisted review fairly, consider what's already accepted in academic publishing:

  • Statistical consultants. Researchers routinely hire statisticians to review their analysis plans and results. Nobody considers this unethical. It is best practice.
  • University writing centers. Graduate students regularly get feedback on their manuscripts from professional writing tutors who suggest structural and clarity improvements.
  • Language editing services. Journals themselves recommend professional editing for non-native English speakers. Services like Editage and American Journal Experts provide substantive editing beyond grammar.
  • Grammar and style tools. Grammarly, ProWritingAid, and similar tools are used everywhere without ethical concern, even though they use AI/ML models to suggest improvements.
  • Reference managers. Tools like Zotero and Mendeley automate citation formatting. Nobody questions whether this constitutes inappropriate assistance.

AI-assisted manuscript review fits squarely within this tradition. The author retains full intellectual ownership and decision-making authority over the final manuscript.

Transparency and Disclosure

Even when AI-assisted review is clearly ethical in principle, transparency matters. Most publisher guidelines now ask authors to disclose AI tool usage in an acknowledgments or methods section. This is good practice regardless of the specific tool.

A disclosure might read: "An AI-assisted review tool was used to identify potential methodological and statistical issues in a draft of this manuscript. All revisions were made by the authors." This is straightforward, honest, and consistent with how researchers disclose other forms of assistance.

The trend in policy is toward more disclosure, not less. Being transparent about your use of AI tools positions you well regardless of how policies evolve.

Limitations of AI Peer Review

Honest engagement with this topic requires acknowledging what AI review cannot do. Current AI systems have real limitations:

  • They cannot verify empirical claims against source data. They review what you report, not what actually happened in your lab or field site.
  • They may miss highly specialized domain knowledge, particularly in emerging subfields where the training data is sparse.
  • They cannot assess the novelty of a contribution the way a human expert who has spent decades in a field can.
  • They lack the social and political awareness to evaluate whether a paper's framing is sensitive to affected communities.
  • They are susceptible to producing plausible-sounding but incorrect feedback, particularly on nuanced methodological questions.

These limitations do not make AI review unethical. They make it incomplete. The same is true of any single reviewer, human or otherwise. No single review process catches everything.

The Case for Complementary AI + Human Review

The problem is that people frame this as AI versus human review. The more productive framing is complementary review. AI tools are good at systematic checks: statistical test selection, reporting completeness, internal consistency, formatting compliance. Human reviewers excel at evaluating novelty, contextual significance, and the subtleties of interpretation.

In practice, the strongest review process combines both. An AI review catches the mechanical issues: the missing effect sizes, the uncorrected multiple comparisons, the inconsistency between the abstract and the results. That frees human reviewers to focus on the higher-order questions that require judgment and expertise.

This complementary model is not speculative. It mirrors how other professions have integrated AI. Radiologists use AI to flag anomalies but make the diagnosis themselves. Lawyers use AI for document review but craft the legal arguments. In each case, AI handles the systematic screening while humans provide the judgment.

A Practical Framework

If you are considering using AI for pre-submission manuscript review, here is a straightforward ethical framework:

  1. You wrote the paper. The ideas, data collection, analysis, and interpretation are your intellectual contribution.
  2. You decide what to change. AI feedback is advisory. You evaluate each suggestion critically and make revisions based on your own judgment.
  3. You disclose the assistance. Mention AI tool usage in your acknowledgments or methods section, consistent with your target journal's policy.
  4. You remain accountable. Every claim, every analysis, every conclusion in the final manuscript is your responsibility.

Bottom line: if all four conditions are met, using AI-assisted review is no different from using any other tool that helps you produce better research. The goal of peer review has always been to improve the quality of published science. Any tool that helps achieve that goal, used responsibly and transparently, serves that mission. Want to see what AI-generated reviewer feedback looks like? Try the free Reviewer 2 Generator with your abstract. For a deeper look at manuscript preparation, our pre-submission guides cover what reviewers actually check.

Strengthen your manuscript with structured, multi-perspective feedback before submission.

Learn more about PeerGenius

Disclosure: This article was drafted with AI assistance. The analysis, positions, and conclusions are the author's own. All content was reviewed, edited, and approved before publication.