Citing AI Use in Research and Scholarly Work
As the use of generative AI continues to expand in research and academic writing, many publishers and agencies are establishing policies to ensure transparency and responsible practice. The following overview summarizes current guidance on acknowledging AI use in manuscripts, presentations, and other scholarly work. This information incorporates input from Caren Frost, Darryl Butt, Allyson Mower, Zach Mitchell, and Manish Parashar.
When to Acknowledge AI Use
Many researchers use AI tools during early‑stage ideation, such as brainstorming, exploring concepts, or gathering background context before independently drafting their work. Across publishers, guidance primarily focuses on documenting AI assistance when it contributes directly to the content of a manuscript—for example, generating text, editing, or analyzing data.
If AI is used only as a thought‑starter and all writing and analysis are completed by human authors, most publishers do not treat this as an activity requiring formal attribution. When AI is used in preparing text, methods, or analyses, it should be documented according to publisher policy.
Because expectations vary widely, researchers should check for specific instructions provided by publishers, societies, and agencies relevant to the work being submitted.
Springer Nature states that large language models (LLMs), including ChatGPT, do not meet authorship criteria, as authorship requires accountability for the content. Any use of an LLM should be documented in the Methods section, or in a suitable alternative section if the manuscript does not include one.
Oxford University Press / Oxford Academic
Additional publisher statements can be found in the University of Utah’s information guide on authorship and AI.
Agency Requirements