From Draft to Submission: Ethical AI Integration using an LLM use statement template for JAMIA
Rushing a JAMIA submission and unsure how to disclose AI use without risking compliance? In this lesson, you’ll learn to craft a precise, journal-aligned LLM use statement that is transparent, auditable, and fully COPE/ICMJE compliant—from framing scope and tool versions to authorship, originality checks, and data privacy. You’ll see a clear template, scenario-based examples (editing, ideation, code drafting), and a submission-ready checklist with pitfalls to avoid. Surgical, defensible, and editor-ready—so you can move from draft to submission with confidence.
Step 1: Frame the requirement and compliance anchors
An “LLM use statement template for JAMIA” is a journal-aligned, standardized disclosure that authors adapt to reveal whether and how any artificial intelligence tools—including large language models (LLMs), grammar checkers, reference managers with AI features, and code assistants—were involved in the preparation of a manuscript. Its purpose is not aesthetic; it is a formal, traceable record of the roles AI tools played at any stage, from drafting and revising text to producing figures, aiding analysis, or drafting code. In the same way that Methods sections document data provenance and analytic steps, an AI/LLM use statement documents the provenance and oversight of computational assistance in writing and related tasks.
This requirement sits within a larger compliance ecosystem. COPE (Committee on Publication Ethics) emphasizes transparency and accountability when AI tools are used in scholarly work. That means the statement must be specific, verifiable, and complete. ICMJE authorship criteria make clear that AI systems cannot fulfill the responsibilities of an author: they cannot take responsibility for the integrity of the work as a whole, cannot provide consent, and cannot respond to peer review. Therefore, an AI tool may be used, but it must not be credited as an author; its contribution must be described as that of a tool operating under human oversight and responsibility.
JAMIA’s expectations align with these anchors. The journal seeks clarity on whether AI was used, which tools were used, what functions they performed, and how human authors supervised and verified outputs. The journal also expects that data privacy and security norms are followed, especially with clinical or patient data. If protected health information (PHI) or restricted datasets are involved in the research, authors must not expose such data to external tools without explicit institutional approval and compliant technical pathways. This includes both prompts and uploaded files; even seemingly harmless context can inadvertently disclose identifiers.
The key learning takeaway is that the LLM use statement is a precision instrument. It is essential for traceability of AI involvement, safeguarding originality (by preventing unacknowledged text reuse or AI hallucinations from being passed off as fact), protecting privacy (by preventing leakage of identifiable or proprietary data), and preserving research integrity (by ensuring human accountability). In short, the statement is a record of responsible AI integration from draft to submission.
Step 2: Deconstruct the JAMIA-aligned template into required elements
A useful way to produce a compliant statement is to think in five modular sections that can be adapted to the specifics of your manuscript. Each section has a distinct function and invites precise, falsifiable claims rather than vague generalities.
1) Purpose and Scope
- This section declares whether AI tools were used at all, and if used, outlines the high-level functions. It draws a boundary around what the tools did and did not do. Avoid ambiguous verbs. Use verbs that are narrow and testable, such as “polish grammar,” “suggest section headings,” or “draft pseudocode for data parsing.” These verbs create a factual, auditable record.
- Example phrasing: “AI tools were/were not used in the development of this manuscript. Where used, their functions were limited to [e.g., language polishing, outline ideation, figure caption refinement].”
2) Specific AI Tools and Versions
- This section lists the tool name, provider, version/date, and, if relevant, the mode or model family (for instance, a specific LLM like GPT-4o or a code assistant’s release channel). Naming versions matters because capabilities and risks change over time; a general label like “chatbot” is insufficient. Indicate whether default or custom settings were used, and identify any specialized models.
- Example phrasing: “We used [Tool name, provider, version/date, mode if relevant] with [default/custom] settings; no specialized or proprietary prompts/models were employed unless specified.”
3) Human Oversight and Authorship
- This section asserts human control, verification, and authorship. It should state that all intellectual contributions—including conception, design, analysis, interpretation, and final wording—were performed and validated by human authors. Make explicit that AI tools do not meet authorship criteria and are not listed as authors. State who bears responsibility for accuracy and integrity (the authors).
- Example phrasing: “All intellectual contributions, study design, interpretation, and final text were conceived and verified by human authors. AI tools did not meet authorship criteria and are not listed as authors.”
4) Originality, Fact-Checking, and Anti-plagiarism
- This section explains how you safeguarded originality and accuracy. It should note that authors critically reviewed any AI-assisted text, removed hallucinated content, verified every citation against primary sources, and conducted plagiarism checks on the final manuscript. These steps assure reviewers that you did not outsource truth claims to a model.
- Example phrasing: “Authors critically reviewed AI-assisted text for accuracy, eliminated hallucinations, verified citations, and screened the final manuscript with plagiarism detection. All citations were independently checked against primary sources.”
5) Data Security, Privacy, and Limitations
- This section declares that no identifiable patient data or restricted datasets were entered into AI tools and that prompts avoided PHI. It should also state compliance with institutional policies and tool-specific data-sharing settings. Finally, it should acknowledge known limitations of the tools used (e.g., fabricated references, non-deterministic outputs) and the mitigation steps you took.
- Example phrasing: “No identifiable patient data or restricted datasets were input into AI tools. Prompts avoided protected health information. Where institutional or tool policies restricted data sharing, compliant workflows were used. Limitations of the AI tools (e.g., risk of fabricated references) were acknowledged and mitigated.”
For alignment with JAMIA, use precise, falsifiable statements; name the models and versions if known (e.g., “GPT-4o, OpenAI, May 2025”); and, importantly, if you did not use any AI tools, state that explicitly: “No AI tools were used in the preparation of this manuscript.” Precision and clarity are your best defenses against misunderstanding and your strongest signals of integrity.
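Assembled, the five sections yield a complete statement along these lines; the bracketed items are placeholders, and every sentence should be trimmed or adapted to match your actual practice:
“AI tools were used in the preparation of this manuscript. Their functions were limited to [language polishing and outline ideation]. We used [Tool name, provider, version/date] with default settings; no specialized or proprietary prompts or models were employed. All intellectual contributions, study design, analyses, interpretation, and final wording were performed and verified by the human authors; AI tools did not meet authorship criteria and are not listed as authors, and the authors take full responsibility for the accuracy and integrity of the work. Authors critically reviewed all AI-assisted text, verified every citation against primary sources, and screened the final manuscript with plagiarism detection. No identifiable patient data or restricted datasets were entered into AI tools, prompts avoided protected health information, and known tool limitations (e.g., fabricated references) were acknowledged and mitigated.”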
Step 3: Apply the template to three common manuscript contexts
When adapting the LLM use statement template for JAMIA, the goal is to tailor the five sections to your manuscript’s actual practices without adding AI-generated language that obscures meaning. Each scenario below illustrates how to think through the template’s components to match your context.
Scenario A: Language polishing only
- In this context, AI tools served as editorial aids. You should declare that their role was limited to improving clarity, grammar, or formatting of prose. Identify each tool (e.g., a grammar assistant and an LLM used as a writing coach), and record the versions/dates. Emphasize human review: authors accepted or rejected suggestions and maintained control over scientific content, claims, and conclusions. Declare that no proprietary or identifiable information was entered, and reiterate that all scientific decisions and interpretations were made by the human authors.
- The focus here is on narrow scope and robust oversight. Clarify that AI tools did not originate analyses, generate references, or produce novel findings. Ensure that originality checks were performed, especially if any paraphrasing features were used. The goal is to show that language-level assistance did not spill into content generation.
Scenario B: Idea generation and outline refinement
- If an LLM was used to brainstorm alternative structures or clarify headings, be transparent about that ideation phase. Name the model and date, and describe the boundary between brainstorming and authorship: the authors produced all substantive content, verified every claim, and performed all analyses. State the data hygiene: no private or identifiable data were shared with the model, and any prompts avoided potentially sensitive details.
- Stress the safeguards: plagiarism screening of the final manuscript and independent verification of citations. Acknowledge the limitation that LLMs can produce plausible but inaccurate suggestions; explain that the team critically evaluated the ideas and retained only those aligned with evidence and domain standards. This demonstrates that the LLM did not determine the scientific logic of the paper.
Scenario C: Code drafting for reproducible methods (nonclinical dataset)
- When a code assistant or LLM helped draft pseudocode or boilerplate for data wrangling, specify the tool/version and the dataset context (public, de-identified, nonclinical). Emphasize human validation: authors debugged, audited, and tested the code, ensuring correctness, reproducibility, and conformance to the described methods. Document that no PHI, proprietary data, or credentials were shared with the tool.
- Acknowledge limitations of AI-generated code, such as potential security vulnerabilities, licensing uncertainties, or silent logic errors. Describe mitigation steps such as independent testing, code review by a domain expert, and unit tests (a sketch follows below). This conveys that AI assistance accelerated drafting but did not replace rigorous scientific programming practices.
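To make the unit-test mitigation concrete, here is a minimal sketch of the kind of author-written check that can validate AI-drafted data-wrangling code before it is described in Methods. The function name, column rules, and pandas-based implementation are hypothetical illustrations, not a JAMIA requirement:

# Hypothetical example: an author-written unit test validating an AI-drafted
# data-wrangling helper on a synthetic, nonclinical table.
import pandas as pd

def clean_visits(df: pd.DataFrame) -> pd.DataFrame:
    """Stand-in for an AI-drafted helper: drop rows with missing visit dates
    and coerce ages to numeric, discarding impossible values."""
    out = df.dropna(subset=["visit_date"]).copy()
    out["age"] = pd.to_numeric(out["age"], errors="coerce")
    return out[(out["age"] >= 0) & (out["age"] <= 120)]

def test_clean_visits_removes_bad_rows():
    # Exercise the edge cases a model might mishandle: missing dates,
    # non-numeric ages, and out-of-range values.
    raw = pd.DataFrame({
        "visit_date": ["2024-01-05", None, "2024-02-10", "2024-03-01"],
        "age": [34, 50, "not recorded", 999],
    })
    cleaned = clean_visits(raw)
    assert len(cleaned) == 1  # only the first row survives every rule
    assert cleaned["age"].between(0, 120).all()

if __name__ == "__main__":
    test_clean_visits_removes_bad_rows()
    print("AI-drafted helper passed the author-written checks.")

A test like this, kept under version control alongside the manuscript’s code, gives the disclosure’s claim of “independent tests” something an editor can actually verify.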
In all scenarios, the essential alignment points are constant: name the tool and version, specify the function, document human verification, describe originality safeguards, and commit to data privacy. This structure maps directly to JAMIA’s transparency principles and COPE guidance, while the scoped verbs and precise claims prevent the introduction of AI-generated artifacts that obscure responsibility.
Step 4: Quality control checklist and anti-pitfall guardrails
Before submission, run your statement through a targeted checklist. This helps you confirm that your disclosure is complete, consistent across manuscript sections, and phrased in verifiable terms.
Submission-ready checklist for JAMIA
- 1) Declared whether AI was used; if not, included: “No AI tools were used in the preparation of this manuscript.”
- 2) Named tools, providers, and versions/dates; specified the precise functions performed (e.g., editing vs. outline ideation vs. pseudocode drafting).
- 3) Explicitly stated human oversight, authorship, and responsibility for accuracy and integrity.
- 4) Described originality safeguards: fact-checking, verification of all citations against primary sources, and plagiarism screening of the final manuscript.
- 5) Stated data handling rules: no PHI or restricted data entered into external tools; institutional policy compliance; secure, approved workflows for any sensitive information.
- 6) Acknowledged limitations or risks of the AI tools used and the mitigation steps taken.
- 7) Ensured language is precise, concise, and free of AI artifacts—avoiding vague or expansive claims like “AI wrote the paper” or “AI ensured accuracy.”
- 8) Performed a consistency check across the manuscript: the statement’s details align with Methods, Acknowledgments, and Data/Code Availability sections.
Common pitfalls and how to avoid them
- Implying AI is a co-author or bears responsibility: Keep authorship human-only, and assign accountability to the authors. State that AI tools do not meet authorship criteria.
- Failing to specify tool/version and role: Always name the tool/provider and the version/date. Tie each tool to a specific function. Vague labels like “AI assistance” are insufficient.
- Ingesting identifiable patient data into third-party tools without approval: Never enter PHI into external services unless there is an institutional agreement and a secure pathway. Even partial identifiers can breach privacy commitments.
- Citing references suggested by AI without independent verification: Treat AI-suggested references as leads only. Verify each citation against the primary source, and ensure its relevance and accuracy.
- Overclaiming or underdisclosing: Do not claim that AI guaranteed accuracy or that its role was “minor” if it generated substantive text or code. Match the statement to actual practice with defensible specificity.
Final note on phrasing
- Prefer verifiable, scoped verbs such as “used to polish grammar,” “assisted with outline ideation,” or “drafted initial pseudocode.” Avoid vague phrasing like “enhanced the manuscript,” which conceals scope and hinders accountability. The more concrete your verbs, the easier it is for editors and reviewers to understand and assess your compliance.
By following these steps, you transform the LLM use statement from a perfunctory note into a robust disclosure that upholds JAMIA’s standards and COPE-aligned transparency. The result is a clear, auditable description of ethical AI integration: it shows where AI helped, how humans remained in control, how originality and accuracy were protected, and how privacy and security were preserved. This approach not only meets submission requirements but also models best practices for responsible, credible scholarly communication in an era of rapidly evolving AI capabilities.
Key Takeaways
- Disclose clearly whether AI was used, specify the exact tools, providers, and versions/dates, and describe precise, narrow functions (e.g., “polished grammar,” “drafted pseudocode”).
- Keep authorship and accountability human: AI tools are not authors; human authors conceived, verified, and are responsible for all content and conclusions.
- Safeguard originality and accuracy: critically review AI-assisted text, verify every citation against primary sources, and run plagiarism screening on the final manuscript.
- Protect privacy and security: do not input PHI or restricted data into external tools, comply with institutional policies, and acknowledge tool limitations with stated mitigations.
Example Sentences
- AI tools were used solely to polish grammar and refine figure captions; all scientific interpretations were verified by human authors.
- We used GPT-4o (OpenAI, May 2025) with default settings for outline ideation, and no protected health information was included in prompts.
- All intellectual contributions, analysis decisions, and final wording were performed by human authors; AI systems did not meet authorship criteria and are not listed as authors.
- Authors independently verified every citation suggested during drafting and screened the final manuscript with plagiarism detection to ensure originality.
- No identifiable patient data or restricted datasets were entered into external AI services, and known tool limitations (e.g., potential fabricated references) were mitigated through human fact-checking.
Example Dialogue
Alex: I’m finalizing our JAMIA submission and need a precise LLM use statement—did we use any AI beyond grammar polishing?
Ben: Only GPT-4o (OpenAI, May 2025) to polish grammar and suggest section headings; we kept default settings and avoided any PHI in prompts.
Alex: Good—then we’ll state the tools, versions, and narrow functions, plus that all analyses and conclusions were authored and verified by us.
Ben: Right, and add that we checked every citation against primary sources and ran a plagiarism screen on the final draft.
Alex: I’ll also note that AI isn’t an author and that we followed institutional policies on data privacy.
Ben: Perfect—concise, specific, and compliant with JAMIA and COPE guidance.
Exercises
Multiple Choice
1. Which sentence best fits the “Purpose and Scope” section of a JAMIA-aligned LLM use statement?
- We used AI to enhance the manuscript.
- AI tools were used solely to polish grammar and refine figure captions; no analyses or conclusions were generated by AI.
- AI guaranteed the accuracy of our findings.
- A chatbot helped a bit throughout the paper.
Show Answer & Explanation
Correct Answer: AI tools were used solely to polish grammar and refine figure captions; no analyses or conclusions were generated by AI.
Explanation: Purpose and Scope should use precise, falsifiable verbs and draw clear boundaries. “Polish grammar” and “refine figure captions” are narrow and auditable, unlike vague terms like “enhance” or overclaims like “AI guaranteed accuracy.”
2. Which option correctly addresses the “Specific AI Tools and Versions” requirement?
- We used a chatbot for drafting.
- We used GPT-4o with custom magic prompts.
- We used GPT-4o (OpenAI, May 2025) with default settings; no specialized models were employed.
- We used AI from the internet; version unknown.
Show Answer & Explanation
Correct Answer: We used GPT-4o (OpenAI, May 2025) with default settings; no specialized models were employed.
Explanation: The template requires tool name, provider, version/date, and relevant settings. This choice is precise and verifiable; the others are vague or missing required details.
Fill in the Blanks
All intellectual contributions and final wording were verified by human authors; AI tools ___ meet authorship criteria and are not listed as authors.
Show Answer & Explanation
Correct Answer: do not
Explanation: ICMJE criteria state AI cannot take responsibility or consent; therefore, AI tools do not meet authorship criteria.
No identifiable patient data or restricted datasets were entered into AI tools; prompts explicitly avoided ___.
Show Answer & Explanation
Correct Answer: protected health information (PHI)
Explanation: JAMIA/COPE-aligned statements must assert privacy safeguards, including avoiding PHI in prompts or uploads.
Error Correction
Incorrect: AI contributed as a co-author and ensured the accuracy of the manuscript.
Show Correction & Explanation
Correct Sentence: AI tools were used as aids and are not listed as authors; the human authors are responsible for the accuracy and integrity of the manuscript.
Explanation: AI cannot meet authorship criteria (ICMJE) and cannot guarantee accuracy; accountability must remain with human authors.
Incorrect: We used an AI assistant but cannot recall the version, and we entered some de-identified patient snippets to speed drafting.
Show Correction & Explanation
Correct Sentence: We used [Tool name, provider, version/date] and did not enter any identifiable patient data or restricted datasets into external tools; prompts avoided PHI and complied with institutional policies.
Explanation: JAMIA requires naming the tool/version and forbids exposing PHI/restricted data without approved workflows; the corrected sentence restores specificity and privacy compliance.