Exploring Universities’ Responses, Student Resources, and Policy Implications in the Era of AI-Generated Work
It’s been seven months since ChatGPT exploded onto the scene, sending higher ed institutions scrambling to respond to concerns that students will use ChatGPT and other generative AI to do their work for them. In an era where AI can create seemingly sophisticated academic content, institutions face new challenges in addressing academic dishonesty, with little precedent for how to proceed.
The absence of clear directives from university policies on AI-generated work has left many academic communities seeking concrete examples and actionable language to incorporate into their own policies. Recognizing this need, we have compiled a roundup of how some colleges and universities have updated their academic integrity policies to address the impact of generative AI.
Only a handful of institutions in the U.S. have formally updated their codes of student conduct; higher ed administrations aren’t always known for moving quickly. While the official policies of most universities have yet to catch up, libraries, instructional design departments, and student conduct offices have stepped in to provide guidance and resources for both students and instructors.
Note: This information has all been taken from publicly available documents on institutions’ websites; it has not been verified by the institutions.
Colleges and Universities with Formal Academic Dishonesty Policies about Artificial Intelligence
These institutions have updated their formal academic integrity policies to include language about artificial intelligence:
Montclair State University (New Jersey)
Montclair’s Academic Dishonesty policy has been updated to include this language within its description of plagiarism:
Plagiarism is defined as using another person or entity’s words or ideas as if they were your own, unintentionally or otherwise, and the unacknowledged incorporation of those words in one’s own work for academic credit. […] The following guidelines for written work will assist students in avoiding plagiarism: […] Information taken from generative AI, such as ChatGPT, must be cited, otherwise it will be defined as plagiarism. Best practice for some disciplines may be to find the same information elsewhere for a complete citation. (Montclair.edu academic dishonesty policy, emphasis mine)
Elsewhere, on the page for Montclair’s Office for Faculty Excellence, additional guidance is given for instructors:
As of 05/15/2023, the University’s Academic Dishonesty policy has changed slightly to include a clause on work completed by entities that are not human: “Academic dishonesty is any attempt by a student to submit 1) work completed by another person or entity without proper citation or 2) to give improper aid to another student in the completion of an assignment, such as plagiarism.” This change helps establish, at institutional level, that submitting AI-generated content in place of one’s own work constitutes plagiarism. (Montclair State University Office for Faculty Excellence, emphasis theirs)
St. Michael’s College (Vermont)
St. Michael’s College includes the following language in their Policy on Academic Integrity, listed as examples of offenses against academic integrity:
1. PLAGIARISM:
Presenting another person’s ideas or content generated by artificial intelligence as one’s own, by directly quoting or indirectly paraphrasing, without properly citing the original source. This includes inadvertent failure to properly acknowledge sources. […]
2. UNAUTHORIZED ASSISTANCE:
Giving or receiving assistance during an examination or in the preparation of other assignments without the authorization of the instructor.
There are many possible instances of unauthorized assistance. […] [U]nauthorized use of generative artificial intelligence for the completion of exams or assignments is prohibited. (St. Michael’s College policy on academic integrity, bolded emphasis mine)
University at Buffalo (New York)
The Academic Integrity policy in the University at Buffalo’s Academic Policies and Procedures includes AI in its examples of academic dishonesty:
Falsifying academic materials. […] [S]ubmitting a report, paper, materials, computer data or examination (or any considerable part thereof) prepared by any person or technology (e.g., artificial intelligence) other than the student responsible for the assignment. (SUNY Buffalo Academic Policies and Procedures, emphasis mine)
In addition, beginning in Fall 2023, all incoming students (undergraduate or graduate, including transfer students) will be required to take an “Academic Integrity at UB” course that is “designed to provide incoming UB students with information they need to uphold our high standards around the honest completion of work” (SUNY Buffalo Office of Academic Integrity). According to a January 2023 article in the New York Times, the course will include discussion of AI tools. (Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach)
Northwestern University (Illinois)
Northwestern University’s Principles of Academic Integrity lists unattributed AI use among their examples of unacceptable behaviors:
1. Cheating: using unauthorized notes, study aids, or information on an examination; altering a graded work after it has been returned, then submitting the work for regrading; allowing another person or resource (including, but not limited to, generative artificial intelligence) to do one’s work and submitting that work under one’s own name without proper attribution; submitting identical or similar papers for credit in more than one course without prior permission from the course instructors.
2. Plagiarism: submitting material that in part or whole is not entirely one’s own work without attributing those same portions to their correct source. Plagiarism includes, but is not limited to, the unauthorized use of generative artificial intelligence to create content that is submitted as one’s own. (Northwestern University, Academic Integrity: A Basic Guide, emphasis mine)
Seattle University (Washington)
Seattle University has updated its Academic Integrity policy with this language:
A. Plagiarism – The use of the work or intellectual property of other persons or the outputs of Generative Artificial Intelligence (AI) programs (e.g., ChatGPT, DALL-E, Github Copilot) presented as one’s own work without appropriate citation or acknowledgment. While different academic disciplines have different modes for attributing credit, all recognize and value the contributions of individuals to the general corpus of knowledge and expertise. Students are responsible for educating themselves as to the proper mode of attributing credit in any course or field. A student does not need to have intended to plagiarize; the unacknowledged use of another’s work is sufficient. Examples of plagiarism include, but are not limited to, copying, paraphrasing, summarizing, or borrowing ideas, phrases, sentences, paragraphs, code, images, or an entire paper from another person’s work or AI program’s output without proper citation and/or acknowledgment. (SeattleU.edu Academic Integrity policy, emphasis mine)
City University of New York
City University of New York has not yet updated their academic integrity policy, but their advisory committee has recommended language changes. CUNY is moving forward with the suggested policy changes, but (as of May 2023) the changes must be approved by the student government, several committees, and the Board of Trustees before they will be adopted. From a post on the CUNY Faculty Senate website:
Given the emergence of AI, the Academic Affairs Advisory Committee at the University Faculty Senate undertook a charge to revisit the CUNY policy on academic integrity. It was challenging to do so when the technology is changing day-to-day, if not hour-to-hour.
We have received word that the University is moving forward with suggested policy changes made by the UFS Academic Affairs Advisory Committee, with additional input from Prof. Roxanne Shirazi, Chair of the Libraries and Information Technology Committee, and unanimously affirmed at the most recent UFS Plenary on May 9th.
The suggested language recommends expanding the definition of cheating to include:
“Unauthorized use of AI-generated content on assignments or examinations unless an instructor for a given course specifically authorizes their use. Some instructors may approve of using generative AI tools in the academic setting for specific goals. However, these tools should be used only with the explicit and clear permission of each individual instructor, and then only in the ways allowed by the instructor.”
Similarly, the recommendation to expand plagiarism in the policy to reflect:
“Unauthorized use of AI-generated content; or use of AI-generated content, whether in whole or in part, even when paraphrased, without citing the AI as the source.” (“UFS Passes Academic Integrity Update for Artificial Intelligence”, emphasis mine)
Institutions With Language Changes on Their Websites
A number of colleges and universities have not adopted new policies around ChatGPT and generative AI, but they have created pages to give students and faculty more information about how the use of generative AI fits into their existing academic integrity policies. Here are some examples of those pages.
Stanford University (California)
Stanford’s Board of Judicial Affairs has provided this guidance to “address the Honor Code implications of generative AI tools such as ChatGPT, Bard, DALL-E, and Stable Diffusion”:
Absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person. In particular, using generative AI tools to substantially complete an assignment or exam (e.g. by entering exam or assignment questions) is not permitted. Students should acknowledge the use of generative AI (other than incidental use) and default to disclosing such assistance when in doubt.
Individual course instructors are free to set their own policies regulating the use of generative AI tools in their courses, including allowing or disallowing some or all uses of such tools. Course instructors should set such policies in their course syllabi and clearly communicate such policies to students. Students who are unsure of policies regarding generative AI tools are encouraged to ask their instructors for clarification. (Stanford Office of Community Standards Generative AI Policy Guidance)
University of Washington
This language is posted on the Community Standards and Student Conduct page of the UW website, but is not included in the formal student conduct code:
It is important to know and understand the expectations of the University and your instructors regarding academic standards. This is especially relevant to the use of technology and online resources available today. Artificial Intelligence (AI) content generators, such as ChatGPT, present opportunities that can contribute to your learning and academic work. However, using these technologies may also violate academic standards of the University. Under the Student Conduct Code, cheating includes the unauthorized use of assistance, including technology, in completing assignments or exams. While some instructors may encourage you to utilize technology to enhance your learning experience, other instructors may prefer that you do your own work without seeking outside help. It is your responsibility to read the syllabus for each course you take so that you understand the particular expectations of each of your instructors. If you are unsure of expectations, you are encouraged to ask for clarification before you use specific resources in completing assignments. (UW Academic Misconduct – Community Standards and Student Conduct, emphasis theirs)
University of Missouri
The Office of Academic Integrity’s website now includes a page for students about AI, although no new language has been added to the official academic dishonesty policy.
Since its launch by OpenAI in late 2022, ChatGPT has inspired many questions related to academic integrity. Like most tools, ChatGPT (and other artificial intelligence products) can be used for purposes both good and bad. There are legitimate ways to use these tools for research, and there are ways to use them to cheat on academic work. This page aims to explain how students can avoid committing academic dishonesty with chatbots and other online tools.
The University of Missouri page is an exemplary resource hub for students, providing an insightful overview of ChatGPT and generative AI, their potential applications, and their implications within the framework of academic integrity policies. The page also offers a number of additional resources students can use to gain a more comprehensive understanding of these technologies. The full page is available on the Office of Academic Integrity website.
Universities with LibGuides about ChatGPT and AI
A number of institutions have created Library Guide/Research Guide pages to instruct students on how AI works, its uses, proper citation styles, and its academic integrity implications. Here are three examples that stand out as being especially thorough and accessible:
New York University Shanghai Library Research Guide: Machines and Society – An incredibly detailed exploration of all things generative AI, including pages on ChatGPT for Research and Creative Use, Generative AI and Society, Emerging AI Tools and Approaches, and many more.
UC Santa Barbara Library STEM LibGuide Resources: Citing ChatGPT – A thorough list of citation styles for ChatGPT and other generative AI
Centennial College (Ontario) has an extended section in their Academic Integrity LibGuide about ChatGPT and artificial intelligence.
Resources for Instructors
Finally, in the absence of official policy guidance, institutions’ instructional design offices have stepped up to offer comprehensive training for faculty on how to understand, use, and teach about artificial intelligence. Here are some notable examples.
Yale University Poorvu Center for Teaching and Learning – AI Guidance
Illinois State University Center for Integrated Professional Development – AI-Generated Content: Considerations for Course Design
Indiana University Bloomington Center for Innovative Teaching and Learning – How to Productively Address AI-Generated Text in Your Classroom
Colorado State University, The Institute for Learning and Teaching (TILT) – Artificial Intelligence and Academic Integrity hub
While official academic integrity policies specifically addressing artificial intelligence are still relatively scarce among universities, our roundup of various institutions reveals promising efforts from libraries, student conduct offices, and centers for teaching and learning. These resources offer valuable insights, guidance, and supplementary materials that help students and instructors navigate the complex intersection of AI and academic integrity.
By leveraging these examples and adapting them to their own contexts, universities can proactively address the challenges posed by AI-generated work while upholding the principles of academic honesty and integrity in an increasingly technologically driven educational landscape.
Abi Bechtel is a writer, educator, and ChatGPT enthusiast. They have an MFA in Creative Writing from the Northeast Ohio MFA program through the University of Akron, and they just think generative AI is neat.