Artificial Intelligence in Education

Introduction

Artificial intelligence tools each have unique privacy policies and data retention practices, giving users varying levels of control over their personal information. While some tools provide robust options to enhance privacy and limit data usage, others may have more restricted settings. Below, you’ll find concise guides and short videos that demonstrate how to access and adjust privacy settings for each tool, empowering you to take control of your data.

ChatGPT

To manage your privacy when using ChatGPT, OpenAI provides a straightforward way to control how your data is used. To adjust the privacy settings in ChatGPT, follow these steps:

  1. Log into your ChatGPT account on the web or mobile app, then click your profile icon (top-right corner on the web, bottom-left menu in the app).
  2. Select "Settings," then go to "Data Controls."
  3. Locate the "Improve the model for everyone" option and toggle it off. This prevents OpenAI from using your conversations for model training.

When this setting is disabled, new conversations are excluded from model training but still appear in your chat history. Conversations you delete are removed from OpenAI's systems within 30 days, unless retention is required for security or abuse monitoring. Note that once a chat is deleted, you won't have access to its log for future reference.

Is personal information used to teach ChatGPT?

A large amount of data on the internet relates to people, so our training information does incidentally include personal information. We don’t actively seek out personal information to train our models.

We use training information only to teach our models intelligence, such as the ability to predict, reason, and solve problems. We do not and will not use any personal information in training information to build profiles about people, to contact them, to advertise to them, to try to sell them anything, or to sell the information itself.

Our models may learn from personal information to understand how things like names and addresses fit within language and sentences, or to learn about famous people and public figures. This makes our models better at providing relevant responses.

We also take steps to reduce the processing of personal information when training our models. For example, we remove websites that aggregate large volumes of personal information and we train our models to reject requests for private or sensitive information about people.

https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-foundation-models-are-developed#h_6b40d4df3c

Copilot

To manage your privacy when using Microsoft Copilot, you can adjust data-sharing preferences and other privacy-related settings depending on the specific Copilot platform (e.g., Microsoft 365 Copilot, Security Copilot, or Windows Copilot). When accessing Copilot from a web browser, here are the steps to adjust your privacy settings:

  1. Go to copilot.microsoft.com and sign in to your account.
  2. Click your profile picture or the settings icon at the top-right corner of the page.
  3. Look for a section labeled "Privacy" or "Privacy Settings."

Here, you can manage various privacy-related options, such as personalization, data sharing, and chat history. Adjust these settings according to your preferences.

Data stored about user interactions with Microsoft 365 Copilot

When a user interacts with Microsoft 365 Copilot (using apps such as Word, PowerPoint, Excel, OneNote, Loop, or Whiteboard), we store data about these interactions. The stored data includes the user's prompt and Copilot's response, including citations to any information used to ground Copilot's response. We refer to the user's prompt and Copilot's response to that prompt as the "content of interactions," and the record of those interactions is the user's Copilot activity history. For example, this stored data provides users with Copilot activity history in Microsoft 365 Copilot Chat (previously named Business Chat) and meetings in Microsoft Teams. This data is processed and stored in alignment with contractual commitments, alongside your organization's other content in Microsoft 365. The data is encrypted while it's stored and isn't used to train foundation LLMs, including those used by Microsoft 365 Copilot.

https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-privacy#data-stored-about-user-interactions-with-microsoft-365-copilot

Gemini

Google’s Gemini provides several privacy controls to help you manage how your data is saved and used. These settings are accessible through the Gemini mobile app, web experience, or your Google Account. Here’s how to find and adjust them:

  1. Open the Gemini app or visit the Gemini web interface. You can also manage settings directly via your Google Account by navigating to myactivity.google.com/product/gemini.
  2. In the app or web interface, go to the Settings menu and locate the "Gemini Apps Activity" section. Here, you can:
    • Turn off "Gemini Apps Activity" to stop conversations from being saved or used for model improvement.
    • Adjust auto-delete settings (default is 18 months) to delete activity sooner (after 3 months) or later (up to 36 months).
    • Review or delete specific conversations from your activity log.

Even if "Gemini Apps Activity" is turned off, conversations may still be stored temporarily (up to 72 hours) to provide services and process feedback. Additionally, conversations reviewed for product improvement are retained separately, disconnected from your Google Account, for up to three years.

What data is collected and how it’s used

Google collects your Gemini Apps conversations, related product usage information, info about your location, and your feedback. Google uses this data, consistent with our Privacy Policy, to provide, improve, and develop Google products and services and machine-learning technologies, including Google’s enterprise products such as Google Cloud.

Gemini Apps Activity is on by default if you are 18 or older. Users under 18 can choose to turn it on.

If your Gemini Apps Activity setting is on, Google stores your Gemini Apps activity with your Google Account for up to 18 months. You can change this to 3 or 36 months in your Gemini Apps Activity setting. Info about your location, including the general area from your device, IP address, or Home or Work addresses in your Google Account, is also stored with your Gemini Apps activity. Learn more at g.co/privacypolicy/location.

https://support.google.com/gemini/answer/13594961?hl=en#your_data

Perplexity

Perplexity AI provides users with privacy controls to manage how their data is used, ensuring a customizable and secure experience. Here's how to locate and adjust these settings:

  1. Log into your Perplexity account on the web and click on your profile name in the bottom-left corner.
  2. Click Settings and locate the "AI Data Retention" option. Toggle this setting off to prevent your search data from being used to improve Perplexity's AI models.
  3. For additional privacy, you can enable Incognito Mode by selecting your profile name in the bottom-left corner and choosing "Incognito" from the dropdown. If you're not signed in, your threads are anonymous by default.

These settings allow you to control how your data is handled while using Perplexity AI. Note that disabling data usage or using Incognito Mode may limit certain features, such as saving threads for future reference.

Claude

Claude does not currently offer granular, user-configurable privacy settings. However, Anthropic's privacy policy emphasizes that user data is not automatically used to train its models. Data is used for training only if users explicitly provide consent through feedback mechanisms (e.g., thumbs up/down) or if a conversation is flagged for Trust & Safety review to improve policy enforcement. Additionally, Anthropic commits to automatically deleting user inputs and outputs from its backend systems within 30 days, unless longer retention is required for legal or policy compliance.

For more detailed information, please refer to Claude's privacy policy on Anthropic's official website: https://privacy.anthropic.com/en/

Data usage for Claude.ai Consumer Offerings (e.g. Free Claude.ai, Claude Pro plan)

We will not use your Inputs or Outputs to train our models, unless: (1) your conversations are flagged for Trust & Safety review (in which case we may use or analyze them to improve our ability to detect and enforce our Usage Policy, including training models for use by our Trust and Safety team, consistent with Anthropic’s safety mission), or (2) you’ve explicitly reported the materials to us (for example via our feedback mechanisms), or (3) by otherwise explicitly opting in to training.

https://privacy.anthropic.com/en/articles/10023555-how-do-you-use-personal-data-in-model-training