Policy

General AI Policy

Columbia College Chicago's Security and Privacy for Artificial Intelligence policy defines Artificial Intelligence (AI), specifies the situations in which it may be used, explains when consent is required and how it can be acquired, describes how to communicate that AI is in use, and addresses the ethics surrounding its usage.

AI and Current Academic Integrity Policies 

Columbia College Chicago's academic integrity policy calls for scholars at all levels to demonstrate transparency and honesty about the sources of knowledge consulted or used for assignments (CCC Academic Integrity Policy, paragraph 1).

Sample Policies to Include as Part of Your Course 

If you have not already, please add a policy on the use of AI generative software to your syllabus and take a few minutes in class to explain your policy. Since this is new, evolving territory, we recommend additional, specific clarity on first offenses (will students be able to redo the assignment?) and second offenses (will subsequent or multiple offenses result in an automatic zero for the assignment?).

Below are some sample policies suggested by the AI Taskforce at Columbia College:

A policy prohibiting the use of AI generative software or similar technology for assignments in your course might read (feel free to copy/paste/edit):  

If you’d rather consider students’ use of AI generative software on a case-by-case basis, your policy might read (feel free to copy/paste/edit):   

If you want students to have access to AI generative software and to incorporate it as part of the writing process, your policy might read (feel free to copy/paste/edit):

Examples may include:   

If you want to create a community technology agreement, developed collectively between yourself and your students, you might consider something like this: