ACCEPTABLE USE OF AI
About This Policy
Responsible Office
Vice President for Human Resources & Operations
Policy Owner
Executive Director of Innovation & Technology
Policy Contact
Executive Director of Innovation & Technology
Issued
2026-01-12
Policy Statement
North Central University supports the ethical and responsible use of artificial intelligence tools to enhance productivity, support decision-making, and advance the university’s mission. Employees are encouraged to integrate AI tools into their work to increase efficiency and deliver better outcomes for the community the university serves. To ensure ethical and responsible use, employees must protect university data, respect intellectual property rights, maintain accuracy and accountability, and comply with applicable laws and policies.
Employees must follow these requirements when using artificial intelligence tools for university business. Student use of AI tools for academic coursework falls under separate academic integrity policies and individual faculty course policies and syllabi.
Using AI Tools Effectively and Responsibly
AI tools offer valuable capabilities to assist with research, communication, analysis, and creative work. To use these tools effectively, employees must understand two important characteristics that shape responsible AI use:
Understanding AI Tools as Public Systems
Most commercially available AI tools operate as public systems outside university control. Once employees enter information, the university cannot guarantee its privacy, recall it, control how providers use it, or ensure its deletion. Inputs may be retained indefinitely by AI providers, used to train models, shared with other users, or disclosed through legal processes. The university has no contractual relationship with these third-party services and cannot control what happens to information employees enter.
Recognizing the Need for Human Verification
AI systems frequently generate false or misleading information while presenting it confidently. This problem, known as hallucination, occurs regularly across all AI tools. Generated content looks just as polished and authoritative as accurate content, making errors impossible to spot from the output alone. This characteristic makes AI an effective assistant that can draft, suggest, and accelerate work, but one that requires human expertise to verify, refine, and take responsibility for final outputs.
These characteristics require clear guidelines about what information employees share with AI systems, how employees verify AI-generated content, and who owns intellectual property created with AI assistance.
Classifying Data for Safe AI Use
To help employees use AI tools safely and effectively, the university provides a three-category data classification system. These categories guide employees in making informed decisions about what information to share with AI tools while protecting the privacy and confidentiality entrusted to the institution. Employees must follow these data classification requirements:
- Prohibited Data: Information employees must never input into AI tools under any circumstances
- Restricted Data: Internal operational information employees may input only after proper de-identification
- Permitted Data: Information employees may freely input without additional restrictions
Data Classification Quick Reference
| Category | Examples | Input Status | Reason |
| --- | --- | --- | --- |
| Prohibited | Grades, transcripts, SSNs, medical info, strategic plans, legal advice | NEVER | FERPA/HIPAA violations; loss of legal privilege |
| Restricted | Internal SOPs, workflow docs, meeting notes, project timelines | AFTER DE-ID | Protects internal operations while allowing process improvement |
| Permitted | Public website content, press releases, industry standards, general concepts | FREELY | Information is already public or generic in nature |
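For readers who build internal tooling around this table, the three categories can be expressed directly in code. The following Python sketch is purely illustrative: the `DataClass` enum, the `classify` helper, and the keyword lists are hypothetical examples rather than part of this policy, and a keyword screen can never substitute for human judgment against the full classification rules.

```python
from enum import Enum

class DataClass(Enum):
    PROHIBITED = "never input into AI tools"
    RESTRICTED = "input only after de-identification"
    PERMITTED = "input freely"

# Hypothetical trigger terms for a rough first-pass triage only;
# a real screen requires human review against the full policy.
PROHIBITED_TERMS = {"ssn", "grade", "transcript", "diagnosis", "donor"}
RESTRICTED_TERMS = {"sop", "workflow", "meeting notes", "timeline"}

def classify(text: str) -> DataClass:
    """Return the strictest category suggested by keyword matches."""
    lowered = text.lower()
    if any(term in lowered for term in PROHIBITED_TERMS):
        return DataClass.PROHIBITED
    if any(term in lowered for term in RESTRICTED_TERMS):
        return DataClass.RESTRICTED
    return DataClass.PERMITTED

print(classify("Draft SOP for the advising workflow"))  # DataClass.RESTRICTED
```

When any match is ambiguous, the policy's own rule applies: assume the most restrictive classification.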
Protecting the University: Prohibited Data
The university community entrusts the institution with sensitive information that demands absolute protection. To honor this trust and comply with legal obligations, employees must never input the following types of information into AI tools:
- Student education records protected by FERPA, including names, identification numbers, grades, academic status, financial information, disciplinary records, or any other personally identifiable student information
- Employee human resources information, including personnel files, performance evaluations, compensation data, disciplinary records, health information, or background check results
- Confidential university information as defined in the Information Security policy, including strategic plans, financial data not publicly shared, donor information, legal matters, contracts under negotiation, or security-sensitive information
- Research data subject to institutional review board protocols, grant agreements, data use agreements, collaborative research agreements, or non-disclosure agreements that prohibit third-party disclosure
- Protected health information subject to HIPAA regulations
- Payment card information subject to PCI-DSS requirements
- Social Security numbers, driver’s license numbers, passport numbers, financial account numbers, or other government-issued identification numbers
- Information subject to attorney-client privilege or attorney work product protections
- Any other information marked as confidential or restricted under university policy
Prohibited uses of AI tools include analyzing student grades, writing comments for student evaluations, summarizing course evaluation feedback that identifies students, drafting performance reviews containing employee names, analyzing departmental compensation data, analyzing enrollment projections or strategic forecasts, reviewing draft contracts with vendors, analyzing data collected under IRB protocols, and interpreting survey responses from research participants.
De-Identifying Restricted Data
Employees must de-identify all restricted data before entering it into AI tools. Employees may input internal operational information into AI tools only after removing all personally identifiable information, confidential details, and sensitive context. Restricted data includes departmental procedures, workflow documentation, internal communications, draft documents, meeting notes, and operational data. Before inputting restricted data, employees must:
- Remove all names, identification numbers, and personally identifiable information
- Strip out confidential financial figures, strategic details, and proprietary information
- Generalize specific situations to remove identifying context
- Verify that the remaining information contains nothing that could identify individuals, reveal confidential matters, or violate agreements (a minimal illustrative sketch follows this list)
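As a loose illustration of the first two checklist steps, the Python sketch below masks a few obvious identifier patterns. The patterns and placeholder labels are hypothetical; pattern matching cannot catch names or identifying context, so the generalization and final verification steps still require human judgment.

```python
import re

# Hypothetical patterns; real de-identification varies by document type
# and always ends with a human review pass.
PATTERNS = {
    "ID": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # SSN-style numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),   # email addresses
    "PHONE": re.compile(r"\b\d{3}[.\-]\d{3}[.\-]\d{4}\b"),  # phone numbers
    "AMOUNT": re.compile(r"\$\s?\d[\d,]*(?:\.\d{2})?"),     # dollar figures
}

def scrub(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Email jdoe@northcentral.edu about the $42,500 line item."))
# -> Email [EMAIL] about the [AMOUNT] line item.
```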
Using Permitted Data
Employees may freely input the following information into AI tools:
- Information already available to the public through university websites, publications, or official communications
- General knowledge questions about concepts, best practices, or publicly available information
- Requests for assistance with writing, editing, or formatting that do not include confidential content
- Technical questions about software, tools, or processes that do not reveal sensitive implementation details
- De-identified information from which all restricted and prohibited elements have been properly removed
Permitted uses include drafting professional emails to faculty about upcoming training, enhancing word choice for public-facing announcements, explaining key themes in published research articles, identifying best practices for course design, creating agenda structures for department meetings, developing project management approaches for general initiatives, debugging non-sensitive code, explaining spreadsheet formula functionality, generating themes for department newsletters, and developing metaphors to explain concepts.
Determining Intellectual Property Ownership
The university owns intellectual property created by employees in the course of their employment, even if AI tools assisted in creation. Employees must not claim personal ownership of work products created within the scope of their employment duties, even when AI tools contributed to their creation.
The university owns all work product created by employees within the scope of their employment duties, including content created with AI assistance. This includes administrative documents, operational procedures, training materials, marketing content, website content, communications, business correspondence, reports, analyses, presentations, and any other materials created to fulfill job responsibilities. The use of AI tools does not change the work-for-hire nature of employee-created content.
Academic and Scholarly Work
Intellectual property rights for academic and scholarly work follow the rules established in university policy and the Faculty Handbook. Faculty members must consult the Faculty Handbook for specific guidance on ownership of scholarly works, research outputs, course materials, and creative works. The use of AI tools does not alter the intellectual property arrangements established in those documents.
However, all data protection requirements, privacy duties, and acceptable use restrictions apply equally to faculty members regardless of IP ownership arrangements. Faculty members must comply with all data classification requirements, prohibited data restrictions, and verification duties when using AI tools for scholarly work. The university’s ownership or non-ownership of scholarly outputs does not alter faculty duties to protect university data, student information, or other confidential materials.
The university owns all work products employees create using AI tools during their employment. When employees develop AI-assisted processes, methodologies, or workflows as part of their university responsibilities, the university retains rights to continue using those approaches for institutional purposes. Employees must not commercialize, license, or transfer university-developed AI methodologies to third parties without explicit written authorization from university leadership.
Protecting Third-Party Copyrights
Employees must respect third-party intellectual property rights and must verify that AI-generated content does not infringe copyrights, trademarks, patents, or other protected rights. If AI-generated content appears to reproduce substantial portions of copyrighted works, employees must not use that content; they must either obtain proper permissions, completely rewrite the content in their own words, or choose alternative approaches. Employees remain responsible for ensuring all university work products respect others’ intellectual property rights, even if AI tools assisted in creation. The university prohibits submitting any work product that violates third-party IP rights, and employees who violate copyright or other IP protections face disciplinary action under the university’s Misconduct Policy.
Ensuring Accuracy, Accountability, and Verification
When employees use AI tools as collaborators in their work, they maintain full professional responsibility for all outputs. Just as employees take responsibility for work created with any tool or resource, AI-assisted work products reflect the employee’s professional judgment, expertise, and accountability. Using AI tools does not excuse errors, misstatements, or policy violations, and employees cannot deflect responsibility by attributing problems to AI-generated content.
AI systems frequently generate false or misleading information (hallucinations) and present it with high confidence. Because of this, employees must verify all AI-generated content before use. Verification requires employees to:
- Verify factual accuracy by checking AI-generated information against authoritative sources, understanding that AI systems frequently fabricate facts, statistics, dates, and citations
- Confirm that all citations and references actually exist and support the claims made, as AI systems commonly create non-existent sources
- Confirm that the content aligns with university policies, values, and standards
- Review for appropriate tone, context, and audience
- Check for potential copyright infringement, plagiarism, or inappropriate content
- Ensure compliance with all applicable laws and regulations
- Edit and customize AI-generated content to meet specific needs and context
All AI outputs require human verification before use in any university work product, communication, or decision-making process. The university prohibits submitting AI-generated content without thorough review and verification. Employees who submit unverified AI content or who submit inaccurate or policy-violating content face disciplinary action under university policies.
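One verification step that lends itself to partial automation is the citation check: confirming that a cited source exists at all before reading it. The sketch below, a hypothetical helper using only Python’s standard library, checks whether a DOI resolves at doi.org. A resolving DOI proves only existence, not that the source supports the claim, so the employee must still read it.

```python
import urllib.request
import urllib.error

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if the DOI resolves at doi.org (existence only)."""
    req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except urllib.error.HTTPError as err:
        return err.code < 400  # some publishers answer HEAD with odd codes
    except (urllib.error.URLError, TimeoutError):
        return False  # network failure: treat as unverified, not as fake

# Existence is the floor of verification, not the ceiling: a real DOI
# attached to a fabricated claim is still a hallucination.
print(doi_resolves("10.1000/182"))  # the DOI Handbook's own DOI
```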
Requiring Disclosure and Transparency
Faculty members must comply with all disclosure requirements for research and scholarly work. Faculty members conducting research or producing scholarly work must follow disclosure requirements established by funding agencies, publishers, professional associations, and institutional review boards. Many research contexts require explicit disclosure of AI tool usage. Faculty members must consult with the Office of Academic Affairs and relevant oversight bodies regarding appropriate disclosure practices for their disciplines and must not submit work to funders or publishers without first verifying disclosure requirements.
For official external communications representing the university to prospective students, current students, alumni, donors, regulatory agencies, or the general public, employees must exercise judgment about whether AI assistance warrants disclosure. Factors to consider include the nature of the communication, audience expectations, regulatory requirements, and professional standards in the relevant field. When disclosure requirements exist through regulation or professional standards, employees must comply fully.
The university does not require disclosure of AI usage for routine internal operational communications, draft documents, planning materials, or administrative work products. However, employees must disclose AI usage when relevant to understanding methodology, limitations, or decision-making processes.
Prohibiting Specific Uses
Employees must comply with all use restrictions and must not use AI tools in any prohibited manner. Beyond the data input restrictions described above, the university expressly prohibits employees from using AI tools to:
- Use AI-powered meeting assistants or transcription services other than Microsoft Copilot for any university-related meetings, classes, interviews, focus groups, or other gatherings involving university work. Employees must use Microsoft Copilot (available through university Microsoft 365 accounts) for AI-assisted meeting transcription, note-taking, or recording when conducting university business.
- Make final decisions about hiring, promotion, termination, student admissions, student discipline, or other consequential determinations affecting individuals. AI tools may inform decision-making processes, but humans must make all final determinations.
- Generate deceptive content, including fake sources, false credentials, deepfakes, or other misleading materials
- Create content that violates university policies on discrimination, harassment, or respectful conduct
- Circumvent security measures, access controls, or other protections
- Generate malicious code, exploits, or other harmful technical content
- Violate licensing agreements, terms of service, or acceptable use policies of AI tool providers
- Engage in academic dishonesty or facilitate student academic misconduct
- Process or analyze data in ways that violate research ethics, participant consent, or institutional review board protocols
Managing Security and Risk
Employees must understand that information entered into AI tools may be:
- Retained by AI tool providers and used to train or improve their systems
- Available to other users in some AI tool configurations
- Subject to third-party access through legal processes, data breaches, or other means
- Difficult or impossible to completely delete after submission
These risks reinforce the importance of adhering strictly to data classification requirements. Once information enters an AI system, the university cannot guarantee its privacy or control its use.
Employees must treat all AI tools as public systems unless the Office of Innovation & Technology explicitly states otherwise in writing. Even when using university-provided AI tools with enterprise agreements, employees must assume maximum security restrictions apply and that all inputs may become public unless the Office of Innovation & Technology has communicated specific, documented exceptions for particular tools. This default-to-maximum-security approach protects university data even if employees misunderstand tool capabilities or if vendor agreements change.
University-Provided AI Tools
The university may provide specific AI tools through enterprise agreements that offer enhanced privacy protections, data retention controls, and security features. However, enhanced privacy protections do not automatically permit input of prohibited data. The Office of Innovation & Technology will clearly communicate any differences in data input restrictions for university-provided tools, including specific categories of data that particular tools may process based on contract protections. Unless employees receive explicit written guidance from the Office of Innovation & Technology stating that a specific university-provided tool permits processing of certain data types, employees must follow all standard data classification restrictions. When in doubt, assume the most restrictive interpretation applies.
Managing Data Breaches and Security Incidents
Immediate Response to Accidental Data Input
When an employee realizes they have accidentally input prohibited or improperly de-identified restricted data into an AI tool, the employee must take the following immediate actions:
- Stop All Interaction: Immediately cease using the AI tool. Employees must not attempt additional interactions with the AI system, including efforts to instruct the AI to delete or forget the information. Continued interaction after recognizing a breach can further train the model on sensitive data and compound the violation.
- Preserve the Evidence: Employees must not delete the chat history, conversation thread, account, or any other artifacts related to the incident. The Office of Innovation & Technology requires access to the complete conversation, including the exact prompts entered and outputs generated, to assess the scope of the breach and determine reporting obligations under federal and state law.
- Report Immediately: Contact the Office of Innovation & Technology immediately using the IT Security Incident Report form available on the university website. The university treats the first two hours after discovery as a critical window for breach assessment and response. Reporting within this window demonstrates good faith and allows the university to take immediate protective action.
Self-Reporting as Mitigating Factor
The university distinguishes between accidental breaches that employees promptly report and concealed breaches that the university discovers later:
- Employees who self-report accidental prohibited data input within two hours of discovery, preserve all artifacts, and cooperate fully with the investigation receive consideration for this good-faith disclosure when the university determines appropriate sanctions.
- Employees who conceal breaches, delete evidence, continue using compromised tools, or allow the university to discover breaches through monitoring or audit will have the incident treated as willful misconduct subject to immediate termination.
The university cannot guarantee that prompt self-reporting will eliminate all consequences, particularly when breaches trigger mandatory legal reporting requirements or result in actual harm. However, the university recognizes honest mistakes and treats them differently from concealed violations.
Understanding Legal Implications
For legal and regulatory purposes, the university treats data input into public AI tools without contractual protections as public disclosure. The moment an employee enters prohibited data into a public AI tool, the university’s legal duty to notify affected individuals and regulatory agencies may begin, regardless of whether the employee believes “only the AI saw it.”
Employees cannot argue that prohibited data input was not a disclosure because the information remained within an AI system. Federal regulations such as FERPA and HIPAA, state data breach notification laws, and contractual obligations require the university to treat data input into uncontrolled systems as potential breaches requiring investigation, notification, and remediation.
Contractual vs. Non-Contractual Tool Breaches
The severity of a data breach and the university’s response options depend significantly on whether the breach occurred using a university-contracted AI tool or a public tool without contractual protections:
University-Contracted Tools (e.g., Microsoft Copilot with enterprise agreement):
- The university has contractual rights to demand data deletion, audit logs, and breach investigation support
- The university can verify vendor compliance with data handling requirements
- The university maintains some ability to limit data exposure through contractual enforcement
- Violations receive a proportionate response based on actual harm and reporting timeline
Public Tools Without University Contracts (e.g., free ChatGPT, Claude.ai, other public services):
- The university has zero legal authority to demand data deletion or audit access
- The university cannot verify what the vendor does with the data or who can access it
- The university must assume the data remains permanently accessible to the vendor and potentially to other users
- Violations receive a more severe response because the university has no remediation options and the exposure risk is permanent
Employees must understand that inputting prohibited data into public AI tools creates irreversible institutional risk that the university cannot mitigate, repair, or contain.
Derivative Work Contamination
When prohibited data entered into an AI tool influences subsequent AI-generated content, all work products derived from that contaminated interaction require destruction or comprehensive review:
- If an employee discovers that AI-generated content includes or reflects prohibited data from previous prompts in the conversation history, the employee must not use, distribute, or retain that content.
- If AI-generated reports, analyses, communications, or other work products incorporate insights, patterns, or information derived from prohibited data inputs, the university may require destruction of those work products regardless of how much additional work went into their development.
Employees who realize that previous prohibited data inputs may have influenced current AI outputs must report this immediately to the Office of Innovation & Technology for assessment.
The university cannot accept work products that may be “poisoned” by prohibited data, even if the prohibited data does not appear explicitly in the final output. This includes situations where an AI learned patterns or made inferences from prohibited data that then influenced subsequent recommendations or content generation.
Reporting Security Incidents
Employees must report suspected or actual data breaches, unauthorized access, or security incidents involving AI tools to the Office of Innovation & Technology immediately using the IT Security Incident Report form available on the university website. Reports must include the following (a structured sketch follows this list):
- The specific AI tool or service used
- The approximate time of the incident
- The type of data potentially exposed (with sufficient detail for breach assessment without repeating the prohibited data)
- Whether the employee has preserved all conversation artifacts
- Whether the employee has ceased all interaction with the tool
- Any known or suspected harm resulting from the breach
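For illustration only, the reporting fields above map naturally onto a structured record. The `AIIncidentReport` dataclass below is hypothetical; the IT Security Incident Report form on the university website remains the authoritative format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime
import json

@dataclass
class AIIncidentReport:
    """Hypothetical mirror of the reporting fields listed above."""
    tool: str                  # specific AI tool or service used
    occurred_at: datetime      # approximate time of the incident
    data_type: str             # category exposed -- never the data itself
    artifacts_preserved: bool  # chat history, prompts, and outputs kept?
    interaction_ceased: bool   # all use of the tool stopped?
    suspected_harm: str        # any known or suspected harm

report = AIIncidentReport(
    tool="public chatbot, free tier",
    occurred_at=datetime(2026, 2, 3, 14, 15),
    data_type="internal meeting notes, incompletely de-identified",
    artifacts_preserved=True,
    interaction_ceased=True,
    suspected_harm="none known",
)
print(json.dumps(asdict(report), indent=2, default=str))
```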
Registration and Account Requirements
Employees must not use university email addresses or credentials to register for or access AI tools that lack university contracts or explicit approval from the Office of Innovation & Technology. When experimenting with AI tools for personal learning or exploration, employees must use personal email accounts to maintain clear separation between university systems and third-party services. Use of university credentials for non-approved services creates institutional risk through credential exposure and creates ambiguity about data ownership and liability.
Enforcing Violations and Sanctions
These guidelines protect the university community and enable responsible innovation with AI tools. Violations of data protection requirements, verification standards, or intellectual property obligations result in disciplinary action up to and including termination of employment. The university addresses serious violations – particularly those involving prohibited data input, unverified institutional content, intellectual property infringement, or deceptive practices – as immediate disciplinary matters. Technical violations without harm receive a proportionate response based on severity and impact.
Employees who become aware of policy violations must report suspected violations to their immediate supervisor or the Office of Human Resources. Reports made in good faith receive protection under the university’s Non-Retaliation Policy. Supervisors must report violations to the Office of Human Resources immediately upon becoming aware of them.
The university reserves the right to revoke access to university-provided AI tools for employees who violate requirements or show inability to use such tools responsibly.
Reason For Policy
Artificial intelligence tools have become widely available and offer practical capabilities that employees can leverage to enhance their work. These tools can assist with drafting communications, analyzing information, generating ideas, and streamlining routine tasks. North Central University encourages employees to explore AI tools thoughtfully and integrate them into their work where they add value. When used responsibly, AI tools help employees work more efficiently, solve problems creatively, and deliver better outcomes for the university community.
Most commercial AI tools function as public systems outside university infrastructure. Information entered into these platforms passes through a “one-way door” – data cannot be recalled, deleted, or guaranteed confidential once sent. Without clear guidelines, well-intentioned use of AI tools could inadvertently expose student records protected by federal law, compromise research integrity, violate patient privacy, or breach confidential institutional information.
This policy provides employees with clear guidance for using AI tools effectively while protecting the privacy of students and employees, preserving the integrity of research and scholarship, maintaining the trust of those who share sensitive information with the university, and ensuring the institution honors its legal and ethical obligations. The requirements enable practical innovation with AI tools while preventing data exposure risks inherent to unmanaged AI environments.
Policy Scope
Requirements apply to all individuals conducting university business, including:
- Faculty and staff
- Student employees
- Contractors
- Consultants
- Other individuals granted access to university information or performing work on behalf of the university.
Requirements apply when using AI tools for university purposes.
Student use of AI tools for academic work falls under separate academic integrity policies and faculty course policies.
Procedures
There are no procedures associated with this policy.
Forms
- Information Security Incidents Form – https://northcentral.teamdynamix.com/TDClient/1980/Portal/Requests/ServiceDet?ID=27126
FREQUENTLY ASKED QUESTIONS
Can I use free versions of ChatGPT or other AI tools for university work?
You may use AI tools for university work only if they meet university data protection requirements. Free public versions of AI tools typically retain your inputs and use them for training, creating data exposure risks. Before using any AI tool, consult with the Office of Innovation & Technology to verify approval status. Microsoft Copilot is available at no cost through your university Microsoft 365 account when you log in with your NCU credentials.
What if I accidentally input prohibited data into an AI tool?
Report the incident immediately to the Office of Innovation & Technology using the IT Security Incident Report form. Do not attempt to delete or recall the information. Prompt reporting demonstrates good faith and may affect how the incident is handled.
Can I use my university email to sign up for AI tools?
No. You must not use your university email address or credentials to register for AI tools that lack university contracts or explicit approval from the Office of Innovation & Technology. Use personal email accounts for personal experimentation with AI tools to maintain clear separation between university systems and third-party services.
Do I need to fact-check everything AI generates?
Yes. All AI outputs require human verification before use in any university work product, communication, or decision-making process. AI systems frequently generate false or misleading information while presenting it confidently. You remain fully responsible for the accuracy of any content you submit, even if AI helped create it.
Can I use AI to help write student recommendation letters?
You may use AI to help draft recommendation letters, but you must not input any student names, identification numbers, grades, or other FERPA-protected information into the AI tool. You must also thoroughly review and personalize any AI-generated content to ensure accuracy and appropriateness before submitting the letter.
Do I need to tell people when I use AI for my work?
It depends on the context. Faculty conducting research or producing scholarly work must follow disclosure requirements established by funding agencies, publishers, and professional associations. For official external communications, exercise judgment based on audience expectations and professional standards. The university does not require disclosure for routine internal operational communications or draft documents.
Who owns the work I create with AI assistance?
The university owns all work products employees create using AI tools during their employment, just as it owns work products employees create without AI assistance. This follows standard work-for-hire principles. Faculty intellectual property rights for academic and scholarly work follow the rules established in the Faculty Handbook.
Can I use AI to help with grading or student evaluations?
AI may inform decision-making processes, but employees must make all final determinations about student grades, evaluations, admissions, or discipline. Employees must not input student education records into AI tools. AI tools may help develop rubrics or grading criteria, but the actual grading and evaluation of identified students must occur without inputting their data into AI systems.
What happens if I violate this policy?
Violations result in disciplinary action up to and including termination. The university treats violations involving prohibited data input, unverified content presented as institutional fact, or intellectual property infringement as serious misconduct that may result in immediate termination. Lesser violations may warrant progressive discipline depending on severity and impact.
Can I use AI-generated images in university materials?
Yes, but you must verify that AI-generated images do not reproduce copyrighted works and align with university standards. AI-generated images must not depict real university events, actual students, or identifiable employees in ways that could mislead audiences. Always disclose when images are AI-generated if there is any possibility of confusion about their origin.
How do I know if data is prohibited, restricted, or permitted?
Employees should consult the Data Classification Quick Reference table in the Classifying Data for Safe AI Use section of this policy. If uncertainty remains after reviewing the table, employees should contact their supervisor or the Office of Innovation & Technology before inputting any information. When in doubt, assume the most restrictive classification applies.
Can I use AI tools that my department purchased without IT approval?
No. All AI tools used for university purposes require approval from the Office of Innovation & Technology, regardless of how they were acquired. Department purchases do not automatically constitute approval for use with university data. Contact the Office of Innovation & Technology to initiate the approval process.
Can I use Otter.ai, Fireflies, or other AI meeting assistants for university meetings?
No. Employees must use Microsoft Copilot (available through university Microsoft 365 accounts) for AI-assisted meeting transcription, note-taking, or recording when conducting university business. This applies to all university-related meetings, classes, interviews, focus groups, and other gatherings. Other AI meeting assistants and transcription services are prohibited for university work because the university lacks contractual protections and cannot control how those services handle university data.
Additional Contacts
| Subject Matter | Contact | Phone | Email |
| --- | --- | --- | --- |
| Policy Clarification & Interpretation | Office of Innovation & Technology | 612.343.4170 | oit@northcentral.edu |
| Data Incidents | Information Security | 612.343.4190 | cybersecurity@northcentral.edu |
| Academic Matters | Office of Academic Affairs | 612.343.4400 | academicaffairs@northcentral.edu |
| Personnel & Misconduct Issues | Office of Human Resources | 612.343.4412 | hr@northcentral.edu |
Definitions
Artificial Intelligence (AI) Tools
Software applications, systems, or services that use machine learning, natural language processing, or other artificial intelligence technologies to analyze data, generate content, make predictions, or provide recommendations. AI tools include large language models, generative AI systems, AI-powered writing assistants, and AI capabilities embedded within other software applications.
AI Meeting Assistants
AI-powered tools and services that record, transcribe, summarize, or analyze meetings, classes, interviews, focus groups, or other gatherings. Meeting assistants use artificial intelligence to convert spoken conversation into text, generate meeting summaries, identify action items, or provide searchable transcripts. Examples include Microsoft Copilot (the only meeting assistant approved for university use), Otter.ai, Fireflies.ai, Fathom, Krisp, Zoom AI Companion, Google Meet transcription services, and similar AI-powered transcription or meeting analysis tools.
Large Language Model (LLM)
A type of artificial intelligence system trained on vast amounts of text data that can understand and generate human-like text. LLMs operate by predicting likely word sequences based on patterns learned from training data, which means they can produce plausible-sounding but potentially inaccurate information.
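As a toy intuition only (nothing like a production LLM, which uses neural networks over subword tokens), next-word prediction can be illustrated with simple successor counts:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then propose the
# statistically likeliest continuation.
corpus = "the university protects data the university protects privacy".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

# The "model" emits the most frequent successor whether or not the
# resulting sentence is true -- the root of plausible hallucinations.
print(successors["university"].most_common(1))  # [('protects', 2)]
```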
De-identification
The process of removing or obscuring personally identifiable information, confidential details, and identifying context from data to prevent determination of the individuals, entities, or specific situations to which the data refers.
Anonymize
See De-identification. The terms anonymize and de-identify refer to the same process of removing personally identifiable information from data.
Personally Identifiable Information (PII)
Information that directly identifies an individual or that can be used in combination with other information to identify, contact, or locate an individual, including names, identification numbers, contact information, and any other information that could distinguish or trace an individual’s identity.
Prohibited Data
Information that employees must never input into AI tools under any circumstances, including student education records protected by FERPA, employee human resources information, confidential university information, research data subject to restrictive agreements, protected health information, payment card information, government-issued identification numbers, and information subject to legal privilege.
Restricted Data
Internal operational information that employees may input into AI tools only after proper de-identification removes all personally identifiable information, confidential details, and identifying context. Restricted data includes departmental procedures, workflow documentation, internal communications, draft documents, meeting notes, and operational data that, in its original form, contains information that must not be shared externally but that can be appropriately generalized for AI input.
Permitted Data
Information that employees may freely input into AI tools without additional restrictions or safeguards. Permitted data includes information already publicly available through university websites or publications, general knowledge questions, requests for writing or editing assistance that do not include confidential content, technical questions that do not reveal sensitive implementation details, and properly de-identified information from which all prohibited and restricted elements have been removed.
Verification
The process of reviewing, checking, and confirming the accuracy, appropriateness, and policy compliance of AI-generated content before using it in university work products. Verification places responsibility for the final work product on the employee, not the AI tool.
Work-for-Hire
A legal doctrine under which works created by employees within the scope of their employment belong to the employer, not the individual employee who created them. For university employees, work-for-hire typically covers administrative documents, operational materials, communications, business correspondence, and other content created to fulfill job responsibilities. The use of AI tools does not change the work-for-hire status of employee-created content. Academic and scholarly work may follow different ownership rules as established in university policy and the Faculty Manual.
Responsibilities
All Employees
- Comply with data classification requirements and input restrictions
- Verify the accuracy of AI-generated content before use
- Maintain accountability for all work products
- Report suspected policy violations
Supervisors and Department Heads
- Ensure employees understand and comply with requirements
- Monitor compliance within areas of responsibility
- Report violations to Human Resources
- Model appropriate AI tool usage
Office of Innovation & Technology
- Provide technical guidance on AI tools and data classification
- Evaluate, approve, and negotiate contracts for AI tools
- Investigate security incidents and monitor emerging technologies
Office of Human Resources
- Investigate violations and coordinate disciplinary actions
- Include requirements in employee orientation and provide enforcement guidance
Office of Academic Affairs
- Provide guidance on AI use in research and scholarly work
- Clarify disclosure requirements and ensure alignment with academic integrity standards
RELATED INFORMATION
Related University Policies & Procedures
- Information Security Policy (https://www.northcentral.edu/policy/infosec/)
- Acceptable Use of Information Technology Resources (https://www.northcentral.edu/policy/acceptable-use-it/)
- Managing Student Records (https://www.northcentral.edu/policy/student-records/)
- Privacy Policy (https://www.northcentral.edu/policy/privacy/)
- Data Breach Notification Policy (https://www.northcentral.edu/policy/data-breach-notification/)
- Intellectual Property Policy (https://www.northcentral.edu/policy/ai/)
- Non-Retaliation Policy (https://www.northcentral.edu/policy/non-retaliation/)
- Misconduct Policy (https://www.northcentral.edu/policy/misconduct/)
- North Central University Faculty Manual
Relevant Legislation
- Family Educational Rights and Privacy Act (FERPA), 20 U.S.C. § 1232g (https://www.govinfo.gov/app/details/USCODE-2022-title20/USCODE-2022-title20-chap31-subchapIII-part4-sec1232g)
- Health Insurance Portability and Accountability Act (HIPAA), Pub. L. 104-191 (https://www.govinfo.gov/app/details/PLAW-104publ191)
- Payment Card Industry Data Security Standard (PCI DSS) (https://www.pcisecuritystandards.org/document_library/)
- Gramm-Leach-Bliley Act (GLBA), 15 U.S.C. §§ 6801-6809 (https://www.govinfo.gov/app/details/USCODE-2022-title15/USCODE-2022-title15-chap94-subchapI)
- Copyright Law of the United States, 17 U.S.C. (https://www.copyright.gov/title17/)
History
Issued
2026-01-12