Safe Computing Guidance
U-M AI Services
U-M AI Services (U-M GPT, U-M Maizey, and U-M GPT Toolkit) meet the university's privacy and security standards for use with institutional data of Moderate sensitivity, including data regulated by FERPA. For details, see the ITS AI Services page in the Sensitive Data Guide.
The guidance below is provided to U-M community members about the use of third-party artificial intelligence (AI) tools.
Third-Party AI Tools
The University of Michigan is actively reviewing the role that third-party AI tools, such as ChatGPT, play at the university; part of that review involves examining formal contracts and agreements with AI vendors. U-M currently has no contract or agreement with any AI provider, so standardized U-M security and privacy provisions do not apply to these tools. There are, however, still ample opportunities to experiment and innovate with third-party AI tools at the university.
The university's guidance on third-party AI usage will evolve as U-M conducts a broader institutional review and analysis of these tools. U-M encourages its community members to use AI responsibly and to review the data they enter into AI systems to ensure it meets the current general guidelines.
U-M Guidelines for Secure AI Use (Third-Party Tools)
- Third-party AI tools should only be used with institutional data classified as Low sensitivity.
- Third-party AI tools like ChatGPT should not be used with sensitive information, such as student information regulated by FERPA, human subjects research information, health information, or HR records.
- AI-generated code should not be used for institutional IT systems and services unless it is reviewed by a human and meets the requirements of Secure Coding and Application Security.
- OpenAI's usage policies disallow the use of its products for many specific activities. Examples include, but are not limited to:
- Illegal activity
- Generation of hateful, harassing, or violent content
- Generation of malware
- Activity that has high risk of economic harm
- Fraudulent or deceptive activity
- Activity that violates people’s privacy
- Telling someone that they have or do not have a certain health condition, or providing instructions on how to cure or treat a health condition
- High-risk government decision-making
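To illustrate the guideline above on reviewing AI-generated code, here is a short, hypothetical Python sketch (the function and table names are invented for illustration). It shows a common class of defect that human review should catch before AI-generated code is used in an institutional IT system: SQL injection caused by interpolating user input directly into a query string, alongside the parameterized form a reviewer would require.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # UNSAFE pattern an AI tool might generate: user input is interpolated
    # directly into the SQL string, so crafted input can alter the query.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Reviewed version: the driver binds the value, so input like
    # "' OR '1'='1" is treated as data, not as SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

# Demonstration with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

malicious = "alice' OR '1'='1"
print(find_user_unsafe(conn, malicious))  # injection returns every row
print(find_user_safe(conn, malicious))    # returns no rows
```

This is the kind of issue a secure-coding review looks for: the two functions appear equivalent under normal input, and only review (or testing with adversarial input) reveals that the first one leaks data.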
For more detail, please see Artificial Intelligence and U-M Institutional Data on the Safe Computing website.