AI augmentation refers to the use of artificial intelligence to assist, expedite, enhance, and in some cases substitute for, humans in accomplishing tasks. Because AI augmentation will significantly alter the education landscape, we will need to re-evaluate teaching strategies to take advantage of the opportunities and mitigate the risks. Our response to GenAI should be consistent with our core values of fairness, diversity, equity, inclusion, accessibility, research veracity, and ethical integrity.
Further, the rapid integration of AI into various sectors of society requires a re-evaluation of curriculum design to prepare students for a future in which AI will be ubiquitous. More immediately, it can be anticipated that an overwhelming majority of U-M students will be using GenAI tools in Fall 2023. Given the rapid evolution of the technology and its adoption, this section focuses on offering near-term recommendations.
Key components of courses will have to be reviewed in relation to GenAI capabilities and risks. Instructors should, at a minimum, be able to answer the following questions about each course:
- Should GenAI be used in the course or not—and why or why not?
- If GenAI is to be used, how is the use to be documented?
- Should course learning objectives be revised?
- Should GenAI competencies be taught in the specific disciplinary context?
- Should assessments be revised?
Academic Misconduct Policies
Current definitions of academic misconduct do not account for the new technologies and should be revised. The same is true of Honor Codes and the policies followed by the Academic Judiciary. The Library’s website offers a basic definition of plagiarism: “Plagiarism: presenting others' work without adequate acknowledgement of its source, as though it were one’s own.” LSA’s website details a range of misconduct, including cheating, plagiarism, falsification of documents, and unacceptable collaboration. The College of Engineering has an Honor Code that defines misconduct. These and other schools’ and colleges’ policies should be updated this summer to account for the potential and risks of GenAI in instructional contexts.
Common approaches to updating academic misconduct policies are to consider ChatGPT (or GenAI) as prohibited help from another “person” (e.g., UCLA), or as a “source” that should be acknowledged (e.g., UW Madison). The “person” approach misleadingly attributes sentience and a reasoning capacity to GenAI. The “source” approach is more workable.
Treating GenAI as a source that should be acknowledged is more complicated than citing a print book or online article. Unauthorized GenAI use may constitute cheating (a student presenting ChatGPT output as their own original work) and/or plagiarism (copying output from a source without acknowledging that source). U-M schools and colleges will have to determine what misconduct policies will work in their contexts, in consultation with Academic Judiciary bodies.
Some American schools’ academic misconduct policies recommend that instructors use AI-detection tools. This committee finds such detection tools unreliable and prone to false positives, so we recommend against that approach.
The use of GenAI in coursework is banned at some universities. This approach is impossible to enforce perfectly, partly because detection tools are (at present) untrustworthy, and partly because GenAI can be used undetectably at any stage of the composing process, such as prompting ChatGPT to generate ideas, an abstract, an outline, or an essay draft. If GenAI is to be banned in specific contexts, it is vital to ensure that the instruction protects equity and accessibility for students.