AI Disclosure for Social Science Research
A practical guide for social scientists who need to disclose AI use in surveys, coding, writing, analysis, and mixed-methods research.
Social science AI disclosure gets tricky fast
In social science, AI rarely stays in one lane.
You might use a model to draft survey items on Monday, clean code on Tuesday, summarize interviews on Wednesday, and polish prose before submission. Those uses do not raise the same questions. A single line that says "we used ChatGPT" leaves too much out.
Readers need more than the tool name. They need to know what the tool touched, what you gave it, and how you checked the output. Journal policies now push in that direction too. ICMJE says authors should disclose AI-assisted technologies used in submitted work and describe how they used them. It also says AI tools cannot qualify as authors because humans must take responsibility for accuracy, integrity, and originality. (icmje.org)
That need for detail matters even more in social science because your evidence often depends on wording, context, interpretation, and judgment. A model can change all four without leaving obvious marks.
This guide shows how to disclose AI use in social science research in a way that readers, reviewers, and editors can actually assess. If you want the short version first, see How to Disclose ChatGPT Usage in Academic Papers and Why AI Transparency Matters in Research.
Social science projects use AI at many stages
Most projects do not have one clean "AI step."
You may use AI before data collection, during analysis, or only during writing. You may also use it in several places at once. That means your disclosure should follow the workflow, not a template copied from another paper.
In survey research, researchers use AI to draft item wording, rewrite questions for reading level, translate drafts, or suggest response options.
In qualitative work, researchers use AI to summarize transcripts, cluster excerpts, propose codes, or clean rough notes.
In quantitative work, researchers use AI to write R, Python, Stata, or SPSS code, explain error messages, suggest plots, or draft methods text.
In mixed-methods studies, the model may touch design, coding, analysis, and reporting. That is where weak disclosure falls apart. The more touchpoints you have, the more your readers need a structured record.
A simple test helps. Ask this: did the tool affect design, data handling, analysis, interpretation, or presentation? If yes, disclose it. If it touched only grammar or spelling, check the journal policy anyway, because some publishers draw a line between assistive editing and generative content. Sage, for example, says authors do not need to disclose assistive AI that improves language or structure, but they must disclose generative AI that produces text, references, images, translations, or other content. (sagepub.com)
Good disclosure answers practical questions
Good disclosure does not perform virtue. It gives facts.
Readers usually want answers to six questions:
- What tool did you use?
- When did you use it?
- What task did it support?
- What material did you provide?
- What checks did you run?
- Who made the final decisions?
That last question carries most of the weight. In social science, readers need to know where researcher judgment ended and where machine output began. If AI suggested codes for interview excerpts, say who reviewed those codes and how the team resolved disagreements. If AI drafted survey items, say who checked wording for bias, validity, and fit with the study population.
Specificity builds trust. Vague language does the opposite.
That is why a structured format helps. What Are AI Usage Cards? explains the idea, and AI Usage Cards Examples and Templates shows what a useful record looks like.
Social science carries risks that disclosure should surface
AI can distort social science evidence in quiet ways.
A model can smooth over hesitation in an interview quote. It can rewrite a participant's phrasing into standard academic English. It can suggest categories that fit internet stereotypes better than your theory or your field site. It can summarize disagreement as consensus because summary systems tend to compress nuance.
That problem gets sharper in work on identity, politics, health, migration, education, labor, policing, or law. In those areas, small wording changes can alter meaning. A model that "improves clarity" may strip out emotion, uncertainty, dialect, or culturally specific language. In qualitative research, that can change the evidence itself. In survey design, it can change what a question measures.
Disclosure will not solve those problems on its own. It does something simpler and still useful. It tells readers where those risks may have entered the process.
Qualitative research needs the most detail
Qualitative methods depend on interpretation. So your disclosure should show where interpretation happened.
If you used AI to summarize transcripts, say whether those summaries served only as reading aids or whether they informed coding. If you used AI to suggest codes, say whether the model proposed initial labels, merged categories, or suggested themes. If you used AI to paraphrase, translate, or redact participant text, explain how you checked fidelity to the original wording.
Do not hide that work inside a line like "AI assisted analysis." That phrase tells readers nothing.
A stronger disclosure draws a clear boundary. For example, you might say that the team used a language model on de-identified transcript excerpts to generate provisional code suggestions, then performed manual coding in NVivo, rejected many suggestions, and finalized the codebook through team discussion. That tells the reader what the tool did, what it did not do, and who stayed responsible.
If your study involves interviews, ethnography, or open-ended responses, you should also think about data governance before you think about wording. Many external AI systems process submitted content on vendor infrastructure. That can conflict with consent terms, IRB restrictions, contractual data use limits, or institutional policy. COPE warns that AI tools raise confidentiality, authorship, and accountability problems in scholarly publishing, and publisher policies often echo that concern. (elsevier.com)
For more on interpretation-heavy projects, see AI Disclosure for Qualitative Research.
Quantitative research still needs disclosure
Some researchers assume that AI use in quantitative work counts as harmless tooling.
That is a mistake.
If AI wrote or revised code, readers need to know that. Large language models often produce code that runs and still does the wrong thing. They can pick the wrong default, mishandle missing data, silently recode variables, or suggest a model that does not match the design.
Say what the tool helped with. Data cleaning. Visualization. Regression setup. Error debugging. Documentation. Then say what you checked. Did you test the script? Did you rerun the analysis independently? Did a coauthor review the code? Did you verify model assumptions yourself?
If AI helped interpret output, say that too. Do not present machine-generated commentary as if it came from your own statistical reasoning. Elsevier's current policy draws a useful distinction here. It says generative AI used in the writing process should be disclosed in a separate declaration, while AI used in research design or methods should be described in the Methods section. (elsevier.com)
That split fits social science well. Writing help belongs in a writing disclosure. Research help belongs in methods.
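If it helps to see the methods side written out, here is a minimal LaTeX sketch. The tool, tasks, and checks named below are placeholders, not required language, so swap in what your team actually did.

% Illustrative methods wording only; replace the tool, tasks, and checks with your own.
\paragraph{Analysis code.}
Portions of the R code for data cleaning and visualization were drafted with the assistance of a large language model. Two authors reviewed and tested all AI-drafted code, checked the handling of missing data against the raw files, and reran the full analysis independently before reporting results.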
Surveys and experiments need instrument-level disclosure
Question wording drives results.
So if AI suggested survey items, response scales, vignettes, manipulation checks, recruitment messages, or debrief text, disclose that role even if the final instrument looks fully human-written.
Then explain how you reviewed the material. Did you assess reading level? Did you screen for bias? Did you check cultural fit? Did you pilot test the wording? Did you revise items after expert review?
This matters because generative models tend to default to generic, polished, middle-of-the-road language. That can flatten constructs you actually need to measure. It can also sneak in assumptions about gender, class, race, family, work, disability, religion, or political identity.
A short process note strengthens the paper. It shows that you used AI as a drafting aid, not as a substitute for instrument design.
Protect confidential and sensitive data
This point belongs in every social science AI disclosure guide because people still get it wrong.
Do not paste raw confidential material into a public or consumer AI tool unless your institution and your study approvals allow it.
Interview transcripts, fieldnotes, case records, student files, clinical notes, and linked administrative data can contain direct or indirect identifiers. Even if you remove names, combinations of facts can still expose participants. If you used an AI system at all, readers may need to know what safeguards you used.
State whether you removed identifiers, used synthetic examples, limited prompts to short excerpts, or relied on an approved institutional system. If you never submitted raw participant data to the model, say that plainly.
That sentence does real work. It tells editors and readers that you thought about governance, not just convenience.
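If you want a starting point for that sentence, here is a hedged LaTeX sketch. The safeguards it names are examples, so edit them to match what your approvals actually allowed.

% Example data-protection statement; adjust the safeguards to your study's approvals.
\textbf{Data protection:} No raw transcripts, fieldnotes, or identifiable participant records were entered into any external AI system. The model received only short, de-identified excerpts permitted under the study's ethics approval.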
Put the disclosure where readers expect it
Do not bury methods-relevant AI use in the acknowledgments.
If AI affected design, coding, analysis, interpretation, or data handling, put the disclosure in the methods section. If the tool only helped with language editing or manuscript preparation, many journals allow or prefer a separate disclosure note near the end of the manuscript. Elsevier asks for a dedicated declaration above the references for generative AI used in the writing process. Nature journals say authors should document LLM use for text assistance in the methods or acknowledgements section, and Nature states that AI tools cannot be authors. (elsevier.com)
That means placement depends on function.
Methods use goes in methods. Writing support goes in the journal's preferred disclosure location. If your project includes several AI touchpoints, add an AI Usage Card and point readers to it. That gives you room to be clear without clogging the manuscript.
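As a rough illustration of that split, the skeleton below shows one way to lay it out in LaTeX. The declaration heading is adapted from Elsevier's suggested wording for writing-process disclosures; other publishers ask for different labels or a submission-form field, so check the author guidelines before copying it.

% Skeleton only; placement and headings vary by journal.
\section{Methods}
% Describe AI used in design, data handling, or analysis here, along with the checks you applied.

\section*{Declaration of generative AI in the writing process}
During the preparation of this work the authors used [tool] to improve the readability of the manuscript. The authors reviewed and edited the output and take full responsibility for the content of the article.

\bibliography{references}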
For the broader policy picture, see AI Disclosure Policies by Major Journals and AI Transparency Requirements for Journal Submissions.
A simple AI Usage Card structure for social science
A useful card reads like a lab note. It does not read like a press release.
You do not need to dump every prompt into the paper unless a journal, repository, or reviewer asks for them. You do need enough detail for someone else to understand the role of the tool and the checks you applied.
A practical card usually covers five things:
- Tool and access date
- Task supported
- Inputs provided
- Checks and limits
- Final human decisions
Here is a LaTeX example that works in a supplement or appendix.
\section*{AI usage disclosure}
\textbf{Tool:} Claude 3.5 Sonnet via web interface, accessed January 2026 \\
\textbf{Purpose:} Assisted with drafting survey item alternatives and summarizing de-identified pilot feedback \\
\textbf{Inputs:} Draft survey constructs, non-identifiable example items, and anonymized pilot comments \\
\textbf{Checks:} The authors reviewed all suggestions for construct validity, wording bias, and fit with the study population. The team revised or rejected model outputs after pilot testing. \\
\textbf{Data protection:} No direct identifiers or raw confidential participant data were entered into the system \\
\textbf{Final decisions:} The authors selected all final survey items and analytic interpretations. The AI system did not determine study conclusions.

If you want a cleaner workflow, generate the first version with the AI Usage Card generator and then adapt it to your journal style.
Example wording for common social science cases
Most researchers want wording they can use and edit.
Here is one example for a qualitative paper:
\paragraph{AI assistance disclosure.}
The authors used ChatGPT (OpenAI, GPT-4 class model) to generate provisional summaries of de-identified interview excerpts and to suggest candidate codes during early-stage analysis. These outputs served only as analytic aids. Two human coders reviewed the original transcripts, revised or discarded AI-generated suggestions, and finalized the coding scheme through iterative discussion. No identifiable participant information was submitted to the system.

Here is one example for a quantitative paper:
\paragraph{AI assistance disclosure.}
The authors used GitHub Copilot and ChatGPT to draft portions of R code for data cleaning and visualization. The authors manually reviewed, tested, and revised all AI-generated code before analysis. The research team, not the AI tools, interpreted the statistical results and determined the study conclusions.

Here is one example for a survey or experiment paper:
\paragraph{AI assistance disclosure.}
The authors used Claude to draft alternative survey item wording and vignette text during instrument development. The team reviewed all suggestions for readability, bias, and construct fit, then revised the instrument after pilot testing. No final questionnaire item was adopted without human review and approval.

These examples work because they do three things. They name the task. They state the limits. They make human responsibility visible.
Avoid two common mistakes
The first mistake is vagueness.
Phrases like "AI assisted the research process" or "AI helped with analysis" do not help anyone. Name the task, the stage, and the review process.
The second mistake is understatement.
If the tool suggested themes, categories, survey items, translations, or code that shaped your analysis, do not hide that by saying AI only helped with "editing" or "language." Readers can handle the truth. What they do not like is fuzziness.
Specific disclosure protects your credibility. It also helps editors judge your paper on the actual facts instead of guessing what happened behind the scenes.
Make your workflow legible
Social science asks readers to trust your judgment.
That trust gets thinner when AI enters the workflow and nobody explains how. The fix is not a long speech about ethics. The fix is a clear record of what you did.
If you used AI in social science research, document the task, the data, the checks, and the final human decisions. Then turn that record into something readers can follow.
Create an AI Usage Card. You can attach it as a supplement, turn it into a methods note, or adapt it for acknowledgments and submission forms. That takes scattered notes and turns them into a disclosure that editors, reviewers, and readers can actually use.
Generate Your AI Usage Report
Create a standardized AI Usage Card for your research paper in minutes. Free and open source.
Create Your AI Usage Card