How to Disclose ChatGPT Usage in Academic Papers
Practical templates, examples, and journal-aligned guidance for disclosing ChatGPT usage in academic papers.
ChatGPT disclosure is now part of normal academic writing
If ChatGPT touched your paper, say so.
That sentence settles most cases. The harder part is the next question: what should you disclose, and where should you put it?
The answer depends on the job ChatGPT did. Brainstorming a title is not the same as drafting paragraphs. Cleaning up grammar is not the same as generating code, tables, or analytical summaries. Journals now draw those lines more clearly, and editors expect authors to do the same. ICMJE says authors should disclose AI-assisted technologies, name the tool, state the purpose, and describe that use in both the cover letter and the manuscript. ACM allows generative AI use, but requires disclosure in the work. Elsevier asks for a formal declaration for writing assistance and says that AI used as part of the research method belongs in Methods. WAME recommends fuller reporting when chatbots shape analysis, code, tables, or other substantive parts of the manuscript. (icmje.org)
This page gives you practical templates for light, moderate, and heavy ChatGPT use. It also shows where to place the statement, what details to keep, and how to write a disclosure that does not sound vague or evasive.
If you are still deciding whether your case needs disclosure at all, start with Do I Need to Disclose AI Usage in My Paper? If you need venue-specific guidance, go next to AI Disclosure Policies by Major Journals.
Start by saving the details you will need later
Most bad disclosure statements fail before the writing starts.
The authors did not keep records while they worked. Then, weeks later, they try to reconstruct what happened from memory. That is how you end up with weak phrases like "some AI assistance was used."
Keep five facts from the start.
First, record the exact tool. "ChatGPT" may be enough for light use in the web app, but not if you also used the API, browsing, file uploads, custom GPTs, or several OpenAI tools.
Second, record the model or model family if you can. ChatGPT changes fast, and journal editors may read your paper long after the interface has changed again. If you used the API, save the exact model string from your logs. If you used the ChatGPT interface, note the model name shown at the time.
Third, record the dates of use. For light writing help, month and year often work. For code, analysis, or repeated use across drafts, keep a tighter log.
Fourth, record the task. Say what ChatGPT did. Brainstorming, language editing, paraphrasing, code drafting, table formatting, literature summarization, and analysis support are different uses. Your disclosure should name the one that actually happened.
Fifth, record your verification step. Editors want to know what you checked yourself. That may mean checking citations against primary sources, testing generated code, validating table values, or rewriting AI text after review.
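The five facts above are easy to capture as you go. Here is a minimal sketch of a running log kept as a CSV file alongside your project; the file name, field names, and example values are illustrative choices, not a journal requirement:

```python
import csv
import os
from datetime import date

# One row per ChatGPT session, covering the five facts described above:
# date of use, tool, model, task, and your verification step.
FIELDS = ["date", "tool", "model", "task", "verification"]

def log_ai_use(path, tool, model, task, verification):
    """Append one record to the CSV log, writing the header on first use."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "tool": tool,
            "model": model,
            "task": task,
            "verification": verification,
        })

# Example entry (hypothetical values):
log_ai_use(
    "ai_use_log.csv",
    tool="ChatGPT (web interface)",
    model="model name shown in the interface at the time",
    task="language editing of the Discussion section",
    verification="reread and revised every suggested sentence",
)
```

A plain spreadsheet works just as well; the point is that each row is written while the details are fresh, not reconstructed at submission time.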
If you already know your use was complex, do not rely on a one-line acknowledgment alone. Generate an AI Usage Card while the details are still fresh, then adapt it for your manuscript and supplement.
Three levels of ChatGPT use
A good disclosure matches the level of use.
If you understate the role of ChatGPT, you create risk. If you overstate it, you make a routine case sound suspicious. The goal is simple accuracy.
Level 1: light use for brainstorming and outlining
Use this level when ChatGPT helped you think, but did not supply text or code that stayed in the paper.
Common examples include brainstorming research questions, testing titles, sketching an outline, generating keyword ideas, or asking for a plain-language explanation of a concept while you drafted your own text.
A short acknowledgment usually works.
The authors used ChatGPT (OpenAI, [model name], accessed [month year]) during the planning stage of this manuscript to brainstorm research questions and outline sections of the paper. No AI-generated text, code, or analysis was included in the final manuscript. The authors wrote, reviewed, and approved all content.
This still counts as AI use. ICMJE says authors should disclose AI-assisted technologies and state the purpose of use. ACM also says that if you are unsure whether disclosure is needed, you should disclose in the work. (icmje.org)
Level 2: moderate use for writing help, editing, or paraphrasing
This is the case that most authors mean when they say, "I only used ChatGPT for polishing."
Use this level when ChatGPT helped rewrite sentences, improve clarity, reduce repetition, simplify phrasing, or edit a draft for style. The ideas are still yours, but some wording or structure reflects AI suggestions.
For many journals, an acknowledgment or dedicated disclosure section is enough. Elsevier asks authors to place a declaration at the end of the manuscript, immediately above the references, when AI assisted in the writing process. ACM says disclosure should appear in the work itself. (elsevier.com)
During the preparation of this manuscript, the authors used ChatGPT (OpenAI, [model name], accessed [month range year]) to improve clarity, grammar, and sentence structure in selected sections of the text. The authors reviewed and revised all suggestions, verified technical accuracy, and take full responsibility for the final content.
That template works because it does the jobs editors care about. It names the tool. It states the task. It confirms human review. It assigns responsibility to the authors.
One nuance matters here. ACM and Elsevier both carve out room for ordinary spelling and grammar tools without disclosure. The line changes once a system generates new content rather than making routine word-processing corrections. If ChatGPT rewrote meaning, generated text, or changed the structure of your prose in a substantive way, disclose it. (acm.org)
Level 3: heavy use for text generation, code, tables, or analysis support
Use this level when ChatGPT generated material that appears in the paper or directly shaped the work you report.
That includes draft text for sections of the manuscript, code used in analysis, scripts for preprocessing, generated table text, figure captions, literature summaries that informed the paper, or analytical outputs that affected the reported methods or results.
At this level, do not hide the disclosure in a vague acknowledgment. Put a short statement in the paper and add fuller detail in Methods or supplementary material. WAME recommends that authors report prompts and analytical uses in the manuscript body when relevant. Elsevier says that when AI tools are used in formal research design or methods, authors should describe that use in the Methods section. ICMJE also says that authors should describe AI use in the submitted work, not only in private correspondence. (wame.org)
The authors used ChatGPT (OpenAI, [model name], accessed [month range year]) during manuscript preparation and analysis support. ChatGPT assisted with drafting preliminary text for [sections], generating initial [Python/R] code for [task], and proposing tabular summaries of [material]. The authors reviewed, corrected, and revised all AI-generated outputs, validated code behavior against expected results, and verified all factual claims against primary sources. The authors take full responsibility for the final manuscript, code, and conclusions.
If your use falls in this category, a structured record helps. That is what AI Usage Cards are for. For examples beyond ChatGPT alone, see AI Usage Cards Examples and Templates.
Where to put the disclosure
Placement matters because journal rules differ.
For light and moderate writing help, many journals accept an acknowledgment-style statement.
For heavier use tied to methods, code, analysis, tables, or figures, put the disclosure in the manuscript body. Methods is usually the right place. If the journal also asks for a separate declaration section, add one there too.
If you submit to a medical journal that follows ICMJE recommendations, include the disclosure in both the cover letter and the manuscript. ICMJE says journals should require authors to disclose AI-assisted technologies and describe their use in both places. (icmje.org)
If you submit to an ACM venue, disclose generative AI use in the work itself. ACM says generative AI tools cannot be authors and that the use of such tools to create content is permitted only if authors fully disclose that use in the work. (acm.org)
If you submit to an Elsevier journal, expect a formal declaration section titled "Declaration of Generative AI and AI-assisted technologies in the writing process" placed immediately above the references. If AI formed part of the research method rather than the writing process, Elsevier says you should describe that use in Methods. (elsevier.com)
If you need a broader map of publisher rules, use AI Disclosure Policies by Major Journals. If your paper sits close to submission, read AI Transparency Requirements for Journal Submissions next.
What a strong disclosure actually says
A strong disclosure is concrete.
It names the tool. It gives a date or date range. It states the task. It says what the authors checked. It does not hide behind phrases like "minor assistance" if the tool drafted text or wrote code.
This simple formula covers most cases:
The authors used ChatGPT (OpenAI, [model], accessed [date range]) for [task]. The authors reviewed and revised all outputs, verified [facts/code/citations/results], and take full responsibility for the final manuscript.
When the use affected methods or results, add enough detail for scrutiny. WAME recommends that authors preserve prompts and be ready to report them, especially when chatbots generated new text or transformed content into tables or illustrations. You do not always need every prompt in the main paper, but you should keep them in your records and include them in supplementary material if the editor or journal asks. (wame.org)
A practical test helps here. If a reviewer asked, "What exactly did ChatGPT do?" your disclosure should answer that in one read.
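Because the formula has fixed slots, you can fill it mechanically from the records you kept. This sketch assembles the statement from a template; the field values below are hypothetical placeholders:

```python
# The disclosure formula as a template with one slot per recorded fact.
TEMPLATE = (
    "The authors used {tool} (OpenAI, {model}, accessed {dates}) for {task}. "
    "The authors reviewed and revised all outputs, verified {checks}, "
    "and take full responsibility for the final manuscript."
)

def disclosure(tool, model, dates, task, checks):
    """Fill the disclosure formula from the facts recorded during the work."""
    return TEMPLATE.format(tool=tool, model=model, dates=dates,
                           task=task, checks=checks)

# Example with placeholder values; replace each with your own records.
statement = disclosure(
    tool="ChatGPT",
    model="[model name]",
    dates="[month range year]",
    task="language editing of selected sections",
    checks="all factual claims and citations against primary sources",
)
print(statement)
```

If the generated sentence answers "What exactly did ChatGPT do?" in one read, the slots were filled correctly.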
LaTeX examples you can paste into your paper
Many researchers write in LaTeX. These patterns cover the cases that come up most often.
Acknowledgments example
\section*{Acknowledgments}
During the preparation of this manuscript, the authors used ChatGPT
(OpenAI, [model name], accessed [month year]) to improve sentence
clarity and grammar in selected sections. The authors reviewed and
revised all suggestions and take full responsibility for the final
content.
Elsevier-style declaration example
\section*{Declaration of Generative AI and AI-assisted technologies in the writing process}
During the preparation of this work, the authors used ChatGPT
(OpenAI, [model name], accessed [month year]) to improve readability
and language in selected passages. After using this tool, the authors
reviewed and edited the content as needed and take full responsibility
for the content of the publication.
Methods example for code or analysis support
\section*{AI-assisted methods disclosure}
The authors used ChatGPT (OpenAI, [model name], accessed [month range year])
to generate initial Python code for data cleaning and visualization.
The authors tested all generated code, corrected errors, and verified
that the final scripts reproduced the reported results. ChatGPT was
not used to make final analytical decisions or interpret study findings
without human review.
Supplementary note example
\section*{Supplementary note on AI use}
ChatGPT was used in three stages of manuscript preparation:
(1) brainstorming keywords for the literature search,
(2) revising paragraph-level wording for readability, and
(3) generating an initial draft of a plotting script.
A log of prompts, dates of use, and human verification steps is
provided in the supplementary materials.
If you want a fuller workflow, see LaTeX Tutorial for AI Usage Cards and How to Use AI Usage Cards in Overleaf.
Common mistakes that get authors into trouble
The first mistake is vagueness.
"ChatGPT was used" tells an editor almost nothing. Name the model if you know it. Name the task. Name the stage of work. If the tool drafted text that stayed in the manuscript after revision, say that.
The second mistake is understatement.
If ChatGPT drafted paragraphs, generated code, or produced tables, do not call that "proofreading." Editors read that as evasive.
The third mistake is skipping the verification step.
ICMJE warns that AI output can sound authoritative while still being wrong, incomplete, or biased. Your disclosure should say what you checked. Facts, quotations, citations, code behavior, statistical outputs, and image provenance all count. (icmje.org)
The fourth mistake is treating every use as writing help.
Some uses belong in Methods, not Acknowledgments. If ChatGPT helped generate code, analytical outputs, tables, or figure descriptions tied to your results, put that in the manuscript body.
The fifth mistake is forgetting privacy and confidentiality.
Do not paste unpublished participant data, reviewer comments, protected health information, or confidential manuscript content into public AI tools unless your institution and the tool terms allow it. WAME warns that feeding confidential manuscripts into chatbots can breach confidentiality. The same logic applies to authors handling sensitive material. (wame.org)
The sixth mistake is failing to keep a record.
You do not need a perfect lab notebook for every prompt, but you do need enough detail to reconstruct what happened. If your use was spread across drafting, coding, and revision, a short acknowledgment will not carry the whole load. Use an AI Usage Card or attach a brief supplement.
ChatGPT is not an author
Do not list ChatGPT as a co-author.
ICMJE says authors should not list AI or AI-assisted technologies as authors. ACM says generative AI tools cannot be authors on ACM works under any conditions. WAME also says that chatbots cannot meet authorship standards because they cannot approve the final manuscript or take responsibility for the work. (icmje.org)
If you need the full reasoning, read Can AI Be a Co-Author on a Research Paper?
When a short statement is not enough
A one-paragraph disclosure works for simple cases.
It stops working when you used several tools, several models, or different forms of AI assistance across the project. It also stops working when the editor wants more than a generic acknowledgment.
That is where a structured disclosure helps. An AI Usage Card lets you record the tool, model, task, dates, verification steps, and output types in one place. You can attach it as supplementary material, adapt parts of it for the manuscript body, or paste a shorter version into acknowledgments.
This matters even more if your paper combines ChatGPT with transcription tools, coding assistants, translation tools, or image systems. One clean record beats scattered statements across files and email threads.
If you work in a method-heavy field, you may also want examples from AI Disclosure for Social Science Research, AI Disclosure for Qualitative Research, or AI Disclosure in Systematic Reviews and Meta-Analyses.
Good disclosure saves time later
Editors do not expect perfection. They expect honesty and enough detail to judge the manuscript fairly.
A clear disclosure tells them what the tool did, what you checked yourself, and where responsibility sits. That lowers friction during review. It also protects you if someone asks questions after publication.
If you used ChatGPT in any meaningful way, do not wait until submission night to patch together a statement from memory. Generate your free AI Usage Card, keep it with your project files, and copy the relevant text into your acknowledgments, Methods section, cover letter, or supplement.
That takes a few minutes now. It can save you a week of back-and-forth later.
Generate Your AI Usage Report
Create a standardized AI Usage Card for your research paper in minutes. Free and open source.
Create Your AI Usage Card