Policy
General AI Policy
Columbia College Chicago's Security and Privacy for Artificial Intelligence policy defines Artificial Intelligence (AI), the situations in which it can be used, when consent is required and how it can be obtained, how to communicate that it is in use, and the ethics surrounding its usage.
AI and Current Academic Integrity Policies
Columbia College Chicago's academic integrity standards call for scholars at all levels to demonstrate transparency and honesty about sources of knowledge consulted or used for assignments (CCC Academic Integrity Policy, paragraph 1).
- The unauthorized use of AI generative software currently falls under the cheating category of the Academic Integrity Policy. However, we remain committed to exploring potential changes to the policy, as necessary, in the future.
- Concerns with unauthorized use of AI should begin with a conversation between the student and faculty member and follow the steps in the Academic Integrity Policy.
- Since this is new, evolving territory, we recommend additional, specific clarity on first offenses (will students be able to redo the assignment?) and second offenses (will subsequent or multiple offenses result in an automatic zero for the assignment?).
- On the other hand, you may want to authorize some degree of AI use in your course, in which case you need to clearly indicate how and under what circumstances it is permitted, as well as whether and how you want that AI use to be cited (see the policy examples below, and please link to the library citation guides).
Faculty are responsible for offering clear guidance to students on where each course stands with regard to AI, since different courses may have different guidelines.
Sample Policies to Include as Part of Your Course
If you have not already, please add a policy on using AI generative software to your syllabus and take a few minutes in class to explain your policy.
- Classroom Policies for AI Generative Tools, Lance Eaton (2023, dynamic) – a list of almost 60 classroom policies from across disciplines and schools. 10-30 min read
- "The AI Assessment Scale: From No AI to Full AI," by Leon Furze (2023) – introduces a sliding scale to clarify for students which assignments never allow AI (such as an in-class writing exercise) and which allow limited amounts of AI (for brainstorming or drafting outlines from student notes). A detailed analysis by Leon Furze (educational consultant, PhD candidate in Education) that includes a downloadable PDF of the rubric and a visual chart. 10 min read
Some sample policies suggested by the AI Taskforce at Columbia College:
A policy prohibiting the use of AI generative software or similar technology for assignments in your course might read (feel free to copy/paste/edit):
- Collaboration with AI text composition software (like ChatGPT) is not permitted in this course.
- Collaboration with any AI generative software (like ChatGPT for text or Midjourney for images) is not permitted in this course.
If you’d rather consider students’ use of AI generative software on a case-by-case basis, your policy might read (feel free to copy/paste/edit):
- Please obtain permission from me before collaborating with peers or AI generative software (like ChatGPT for text or Midjourney for images) on assignments for this course.
If you want students to have access to AI generative software and to incorporate it as part of the writing process your policy might read (feel free to copy/paste/edit):
- You may collaborate with AI generative software (like ChatGPT for text) on assignments for this course, but its output must not constitute the entirety of your assignment, and its contributions must be properly cited to distinguish their role, using the course-sanctioned citation method. Citation models for AI are evolving, so please refer to the syllabus for current examples.
Examples may include:
- Using AI chatbots to produce initial drafts of claims. Students should cite the entirety of the AI chatbot's version of the claim in a footnote or endnote so the instructor can compare the final, student-written claim with the chatbot's first draft.
- Using AI chatbots (especially in search engines like Bing) to identify sources for reference in a student's argument. Note that AI chatbots may return inauthentic sources and falsehoods presented as fact, so students should be warned to verify the authenticity of each source and citation by trying to find it through library.colum.edu.
- Using AI chatbots or other generative software to produce initial drafts of code, imagery, or sketches that the student subsequently edits, alters, or redrafts into their own creative endeavor. The use of the software must be declared in the final submission's description, and the student should be prepared to share the original, software-generated draft with the instructor, depending on the assignment requirements.
If you want to create a community technology agreement collectively with your students, you might consider something like this:
- We will establish community guidelines for our classroom's use of AI generative software in a forthcoming class session and post the results here. Until then, please cite any use of AI generative software and check with the instructor before using it.