All users of AI tools must adhere to university policies on data security and privacy.
When using generative AI tools, only data classified as Public may be used without restriction. Private data may be appropriate in some cases if its use aligns with university policies and does not involve sensitive or identifiable information. However, Confidential and Highly Confidential data should not be entered into generative AI tools, as doing so may violate privacy, legal, or security requirements. Refer to the university's data classification guidelines for definitions of each category.
Do not input sensitive, proprietary, or personal university data into AI tools unless formally approved by the university through designated channels. Restricted data includes information classified as Confidential or Highly Confidential under the university's data classification guidelines.
Ensure that any AI tool used complies with university-approved security protocols, and avoid tools that lack transparent data storage and usage policies.
The same precautions for protecting university data also apply to personal data when using AI tools. Individuals should avoid uploading sensitive personal information (such as passwords, personally identifiable information, or financial details) to AI systems, as these tools may store or process data in ways that are beyond the user’s control. Users are responsible for safeguarding their personal data and ensuring it is not inadvertently shared with AI platforms.
As with all university data, FERPA, HIPAA, and applicable ethical guidelines should be followed to protect data privacy.
Enforcement: Violations of this policy will be addressed under existing university policies and procedures. For students, this includes the Code of Student Rights and Responsibilities. For faculty and staff, applicable procedures are outlined in the Faculty Handbook and the Employee Handbook, including the Technology Policy and Data Stewardship provisions.