AI USAGE POLICY
POLICY REVISION AND OVERSIGHT
Revision Date: 3/3/2026
Oversight: Chief Academic Officer
POLICY STATEMENT
AI tools can support the research and writing process, and Learners are encouraged to use tools that maximize learning and skill development. Using these tools properly, and at the right stage of the learning process, is essential. While AI can be helpful early on for generating ideas or understanding the scope of a topic, premature use may hinder learning. This concern is especially acute in humanities and education contexts that emphasize critical thinking.
I. USING AI AT THE PROPER STAGE OF A PROJECT
If AI is employed too early in a Learner’s process, it can take over the thinking the assignment is meant to develop. Rather than aiding learning, early-stage AI use can hinder it. Large language models (LLMs) can hallucinate, presenting entirely false information as factual. Even when an LLM offers a factually correct path of inquiry, it may not be the best or most appropriate one for the assignment.
It is best to follow assignment steps meticulously and to avoid AI at the early stages of a project. After completing each step, AI can be useful for refinement: comparing its process against one’s own, discovering points that may have been missed, and adjusting accordingly. This approach preserves the Learner’s own thinking and strengthens information-processing skills.
II. CITING AI RESOURCES
Any use of AI must be cited properly. Use inline citations and footnotes as appropriate, and cite all AI tools (including LLMs) in the bibliography. For transparency, explain how AI was used and at what stage of the project it was employed.
III. AI AND INTELLECTUAL INTEGRITY
All submitted assignments must be the creative work of the Learner. Any use of AI that hinders that creative process or results in submitted work not being substantively the work of the Learner is misuse of AI. Misusing AI negatively impacts learning and can constitute intellectual dishonesty.
IV. FERPA-COMPLIANT DATA PROTECTION REQUIREMENTS
The following requirements ensure compliance with FERPA and protect Learner privacy:
– No personally identifiable information (PII) from education records may be entered into any AI system not approved by Agathon University.
– PII includes names, email addresses, ID numbers, grades, advising comments, discussion posts, assignment excerpts, or any non-public academic information.
– Only institution-approved, FERPA-compliant AI tools may be used for any student-related content.
– AI tools that store or reuse prompt data must not be used with internal academic information.
– All data used with AI must be de-identified unless the tool is institutionally licensed for FERPA compliance.
V. PERSONNEL USAGE LIMITATIONS
Agathon personnel may use AI tools for drafting non-confidential documents, summarizing general information, brainstorming, or analyzing pre-approved anonymized datasets. However:
– AI must not be used for evaluations or decisions about admissions, hiring, grading, retention, or promotion.
– AI may not autonomously grade work unless built into an institution-approved LMS tool.
– All outputs require human verification.
VI. ACCEPTABLE AI USE
Examples of proper AI use include:
– Brainstorming or exploring topics after completing independent analysis.
– Refinement of clarity, style, or organization.
– Creating study aids from de-identified content.
VII. UNACCEPTABLE AI USE
– Submitting AI-generated work as original work.
– Entering Learner PII or protected academic data into any external AI system.
– Using AI to produce full assignments without meaningful learner thought.
– Attempting to evade plagiarism or integrity checks.
VIII. TRAINING AND AWARENESS
Agathon University will provide periodic training to ensure personnel and Learners understand ethical AI use, FERPA obligations, the risks of hallucinations, and accuracy validation.
IX. ENFORCEMENT AND CONSEQUENCES
Violations may result in academic sanctions, loss of AI privileges, disciplinary action, or additional measures consistent with the Intellectual Honesty Policy.
X. POLICY REVIEW
This policy will be reviewed annually or as technology and regulatory requirements evolve.