AI Acceptable Use Policy

Acceptable Use Policy for Artificial Intelligence at Marian University.

Definitions

  • Artificial Intelligence (AI): Systems or tools capable of performing tasks that typically require human intelligence, such as natural language processing, image recognition, or decision-making.
  • Generative AI: AI tools that create content, including text, images, or code, such as ChatGPT, DALL-E, NotebookLM, or similar platforms.
  • University Data: Any data or information generated, managed, or owned by the university, including research, administrative, and personal data.
  • Prompt: A command, question, or instruction posed to an AI tool. A prompt may take the form of a single instance, a series of iterations, or an ongoing conversation.
  • Hallucinations: The tendency of an AI tool to generate false, misleading, or inaccurate outputs, whether as a result of a user’s prompt or an incomplete training dataset.
  • Large Language Model (LLM): A program originally designed to predict subsequent words in a sequence based on its training data. An LLM generates text by repeatedly selecting probable next words, learned from a large training dataset, to produce a continued sequence of words and context; the quality of the output depends on the specificity of the prompt (a brief illustrative sketch follows this list).
  • Chatbot: An online interface where a user can maintain a “conversation,” or ongoing exchange of information, with a generative AI program, with the aim of generating new content or analyzing and summarizing input data or datasets.
  • Natural Language Processing (NLP): A field of artificial intelligence that enables computers to process and analyze human language, allowing AI tools to interpret, generate, and respond to text-based input.
  • Sensitive Data: This includes legally protected or confidential information, such as student records (FERPA), personally identifiable information (PII), health data (HIPAA), or financial records.
  • Proprietary Data: This refers to information that the university owns and considers confidential for competitive, research, or operational reasons. Examples include unpublished research findings, internal reports, and strategic planning documents.
  • Personal University Data: This refers to data that relates to individuals within the university but is not necessarily confidential or legally protected. This could include internal email addresses, casual communications, meeting notes, or faculty/staff directories that are not public.
  • Personal Data: This refers to an individual’s own sensitive or identifying information that should be protected when interacting with AI tools. It includes data such as passwords, personal email accounts, government identification numbers, financial details, health information, and other personally identifiable information (PII). Personal data in this context is not related to university systems or roles, but rather to private information that, if exposed, could compromise an individual’s security or privacy.
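
For illustration, the next-word prediction described in the Large Language Model definition above can be sketched in a few lines of code. This is a toy example, not how any production model is implemented; the vocabulary, probabilities, and names below are hypothetical, and real models learn probabilities over many thousands of tokens rather than from a hand-written table.

    import random

    # Toy next-word probability table (hypothetical values, for illustration only).
    # A real LLM learns these probabilities from a large training dataset.
    NEXT_WORD_PROBS = {
        "the": {"student": 0.5, "university": 0.3, "policy": 0.2},
        "student": {"submitted": 0.6, "asked": 0.4},
        "university": {"approved": 0.7, "published": 0.3},
    }

    def generate(prompt_word, length=3):
        """Extend a one-word prompt by repeatedly sampling a probable next word."""
        words = [prompt_word]
        for _ in range(length):
            choices = NEXT_WORD_PROBS.get(words[-1])
            if not choices:  # no known continuation; stop generating
                break
            words.append(random.choices(list(choices), weights=list(choices.values()))[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the student submitted"

Note that the sketch selects words by probability, not by checking facts; this is also why hallucinations, as defined above, can occur.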