All university members must uphold ethical standards when using AI tools, including the following:
Acknowledge that AI systems can carry biases introduced by the data they are trained on or the algorithms they use. Recognize and mitigate biases in AI outputs by objectively evaluating AI tools, using Section 508 of the Rehabilitation Act as a guide.
Take a holistic approach when evaluating AI, using only technologies that adhere to the characteristics of trustworthy AI as outlined by the National Institute of Standards and Technology (NIST). This includes ensuring AI systems are valid, reliable, safe, secure, accountable, explainable, and privacy-enhanced. Trustworthy AI respects human dignity, minimizes risks, and provides transparent, interpretable outcomes that align with ethical principles and responsible stewardship. Users should prioritize tools and platforms that demonstrate robust safety measures, ongoing performance monitoring, and clear documentation of design and potential impacts.
Clearly communicate when AI is used in communications, decisions, or outputs.
The ethical use of AI at our university aligns with the institution's core values, fostering a respectful, just, and responsible academic community. Faculty, staff, and students are encouraged to practice the Franciscan Values when using AI: