By Legally Curious
On this page
- What “authorship” means (and why AI isn’t it)
- From “plagiarism machine” to “RA in silicon”
- A practical authorship test
- Disclosure standards: what good looks like
- Audit trails: from vibes to provenance
- Exam carve-outs (clear red lines)
- Equity, not exceptionalism
- Policy blueprint for departments, journals, labs
- Templates you can drop into your next project
- Anticipating pushback (and answers)
- The forward-looking view
- Selected sources
2) From “plagiarism machine” to “RA in silicon”
Calling all AI-assisted drafting “plagiarism” collapses three distinct behaviors:
- Ghostwriting: undisclosed AI that substitutes for the student’s or scholar’s own intellectual labor.
- Assisted drafting: disclosed help with phrasing, structure, or translation.
- Computational research: using models as methods—summarization baselines, coding assistants, simulated annotators—with validation.
Only the first is inherently deceptive. The second and third are permissible with conditions—just as with human research assistants (RAs): the scholar stays in charge, validates outputs, and documents the workflow.
3) A practical authorship test you can apply tomorrow
1) Intellectual Contribution Test
Did you originate the core claims, design, methods, and interpretation? If AI produced any of these, either (a) independently verify/re-derive them and take ownership, or (b) treat the output as exploratory notes not included in the final work.
2) Accountability Test
Can you defend every table, quotation, and citation without the model? If not, it’s ghostwriting.
3) Transparency Test
Would a reasonable reader understand what AI did, where, how, and under whose supervision? If not, disclose more.
4) Disclosure standards: what good looks like
Default rule: Disclose specific, concrete facts about AI use in a “Methods/Author Contributions/Notes” section and, for assessed student work, in a short preface.
Minimum disclosure block (copy, paste & fill)
AI Use Statement. We used OpenAI GPT-4.1 (May 2025 model) to (i) brainstorm title variants;
(ii) rewrite two sentences for clarity in §2.3; and (iii) generate docstrings in the analysis scripts.
Prompts and raw outputs appear in Appendix A (/ai-logs). All references were verified manually;
all quantitative results were reproduced from raw data by the authors without AI assistance.
No personal or confidential data were entered into third-party systems.
When figures or synthetic media are involved: label them clearly; many journals restrict AI-generated images or require explicit permission.
Regulatory backdrop: the EU AI Act’s transparency ethos (e.g., labelling synthetic content) is a useful principle for internal academic policies, even when it doesn’t apply directly to coursework.
5) Audit trails: from vibes to verifiable provenance
Detection tools are brittle and invite false positives. Shift from “gotcha” detection to verifiable process evidence via a provenance pack.
The 4-A Provenance Pack
- Access – who used which tools (accounts), where (local/hosted), and when.
- Activity – prompts, parameters (temperature/top-p), model versions/dates.
- Artifacts – raw AI outputs kept separate from final text; redlines showing human edits.
- Accountability – a signed statement of human verification (citations checked, data re-run).
Practical tools: version-controlled docs or notebooks; “model-of-record” fields in your lab template; and a zipped “/ai-logs” folder submitted alongside the paper or assignment.
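To make the “model-of-record” idea concrete, here is a minimal Python sketch of a single 4-A log entry appended to the /ai-logs folder. The field names, file layout, and manifest format are illustrative assumptions, not a published standard; adapt them to your own lab or journal template.

```python
# provenance_log.py - illustrative sketch of a 4-A provenance record.
# Field names, the /ai-logs layout, and the JSONL manifest format are
# assumptions, not a standard; rename them to fit your own template.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_DIR = Path("ai-logs")  # zipped and submitted alongside the paper or assignment

def record_interaction(user, tool, model_version, prompt, raw_output,
                       params=None, verified_by=None, verification_note=""):
    """Append one record covering Access, Activity, Artifacts, Accountability."""
    LOG_DIR.mkdir(exist_ok=True)
    entry = {
        "access": {                      # who used which tool, where, and when
            "user": user,
            "tool": tool,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
        "activity": {                    # prompts, parameters, model version/date
            "model_version": model_version,
            "parameters": params or {},
            "prompt": prompt,
        },
        "artifacts": {                   # raw output, kept separate from final text
            "raw_output": raw_output,
        },
        "accountability": {              # signed-off human verification
            "verified_by": verified_by,
            "note": verification_note,
        },
    }
    with (LOG_DIR / "provenance.jsonl").open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry

# Example: logging a single drafting query.
if __name__ == "__main__":
    record_interaction(
        user="j.doe",
        tool="hosted chat model",
        model_version="GPT-4.1 (May 2025)",
        prompt="Suggest three title variants for a paper on AI disclosure policy.",
        raw_output="(paste the unedited model response here)",
        params={"temperature": 0.7},
        verified_by="j.doe",
        verification_note="Titles reviewed; final title written by the authors.",
    )
```

One entry per interaction keeps raw outputs separate from the final text, so the redlines and the verification statement can be produced without reconstructing anything from memory.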
6) Exam carve-outs (clear red lines)
Why carve-outs? Because assessment goals differ. When the purpose of an assessment is to measure what you can do unaided, AI assistance undermines its validity. Use a simple traffic-light policy:
| Context | Policy | Notes |
|---|---|---|
| Closed-book invigilated exams; oral defenses (unless tool use is explicitly allowed) | Red — No AI | Zero tolerance; clarify consequences in the rubric. |
| Take-home exams; problem sets | Amber — Limited AI | Only specified tasks (e.g., code linting, grammar) with disclosure + provenance pack. |
| Research reports; literature maps; prototyping | Green — AI welcomed | Disclose, attribute, audit. Human authors remain fully responsible. |
7) Equity, not exceptionalism
Bans disproportionately harm non-native writers, disabled students, and scholars in resource-poor settings, for whom assistance with clarity and structure can be transformative. Integrity and inclusion are compatible: disclose and document rather than prohibit by default.
8) Policy blueprint for departments, journals, and labs
- No AI as author. Credit tools in Acknowledgments/Methods; all responsibility remains with human authors.
- Mandatory disclosure whenever AI meaningfully shaped text, code, figures, or analysis; label synthetic media.
- Provenance required for assessed student work and computational-methods papers: prompts, model versions, raw outputs, human edits.
- Data protection. Never paste personal/confidential data into third-party tools without a lawful basis and approvals (e.g., GDPR/ethics board).
- Exam carve-outs. Adopt the red/amber/green rule with rubrics stating allowed AI uses.
- Validation duty. Any AI-produced citation, quote, or statistic must be manually verified; hallucinations are your liability.
- Synthetic-content labelling. If your paper includes AI-generated media or text passages, label them plainly.
9) Templates you can drop into your next project
A) “AI Use” disclosure (journal or thesis)
AI Use Statement. We used OpenAI GPT-4.1 (May 2025 model) to (i) brainstorm title variants;
(ii) rewrite two sentences for clarity in §2.3; and (iii) generate docstrings in the analysis scripts.
Prompts and raw outputs appear in Appendix A (/ai-logs). All references were verified manually;
all quantitative results were reproduced from raw data by the authors without AI assistance.
B) Student assignment preface
Assistance Declaration. Except where noted, the analysis, structure, and final wording are my own.
I used Anthropic Claude 3 Opus to propose a bullet-point outline; I rewrote all text and verified all sources.
Prompts and outputs are included in the folder /ai-logs.
C) Provenance checklist (attach as PDF or appendix)
- Model/provider/version/date; parameters used
- Prompt list (copy/paste)
- Unedited AI outputs (separate file)
- Redlined edits showing human revisions
- Manual verification log for citations/quotes/data
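If you keep the pack as files in /ai-logs, a small script can confirm completeness before submission. This is a hedged sketch: the expected file names below are assumptions chosen to mirror the checklist above, not a required naming scheme.

```python
# check_provenance_pack.py - illustrative pre-submission check that /ai-logs
# contains the items on the provenance checklist. File names are assumptions;
# rename them to match whatever your department or journal requires.
from pathlib import Path

REQUIRED_ITEMS = {
    "model_of_record.txt":  "Model/provider/version/date; parameters used",
    "prompts.txt":          "Prompt list (copy/paste)",
    "raw_outputs":          "Unedited AI outputs (separate files)",
    "redlines":             "Redlined edits showing human revisions",
    "verification_log.csv": "Manual verification log for citations/quotes/data",
}

def check_pack(root="ai-logs"):
    """Return a list of missing checklist items (empty if the pack is complete)."""
    root = Path(root)
    return [
        f"{name}: {description}"
        for name, description in REQUIRED_ITEMS.items()
        if not (root / name).exists()
    ]

if __name__ == "__main__":
    missing = check_pack()
    if missing:
        print("Provenance pack incomplete:")
        for item in missing:
            print("  -", item)
    else:
        print("Provenance pack complete; ready to zip and submit.")
```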
10) Anticipating pushback (and answers)
“Disclosure will bias reviewers.” Blind the provenance pack to reviewers and keep a short neutral disclosure in the manuscript.
“Detectors say my text is AI.” That’s why we privilege process evidence over detectors. Provide drafts, logs, and edit history.
“This slows students down.” Good. Scholarship is accountable work. Guardrails trade a little speed for much higher integrity.
“What about legal compliance?” You already label conflicts, methods, and ethics approvals. AI use is another compliance lane—and it mirrors emerging expectations to label synthetic content.
11) The forward-looking view
In a year, this debate won’t be “AI or no AI.” It will be which parts of the workflow are delegated, under what controls, and with what auditability. The institutions that get this right will publish faster, with clearer language and better reproducibility—and with fewer integrity scandals.
Bottom line: Treat LLMs like RAs. Disclose, attribute, audit. Keep exams human. Everything else is design.
