Artificial intelligence has been part of our everyday lives for some time. Think of Amazon's Alexa and Apple's Siri voice assistants, which can help us perform everyday tasks.

Generative artificial intelligence has been researched for decades, but until recently it was not widely implemented or discussed outside research circles. You may have experienced generative AI through predictive text (Smart Compose) in Gmail.

The release of ChatGPT on November 30, 2022, though, was the first time many of us learned about generative artificial intelligence. There are now many generative AI models and tools based on them.

What is ChatGPT?

ChatGPT is a generative artificial intelligence chatbot that predicts likely sequences of words to compose natural language. Introduction to Generative AI with GPT on LinkedIn Learning is a great 65-minute course that is free to UIS students and staff. This video excerpt provides a good 4-minute overview of GPT and generative AI:

Generative artificial intelligence tools like ChatGPT should be viewed as tools that faculty, staff, and students can employ. These tools are here and will not be going away. There are affordances, challenges, and risks related to using these technologies. As a university, we believe that we need to learn to use AI technologies and to teach our students to use and understand them in productive and balanced ways.

One risk, for instance, that ChatGPT and other large language models (LLMs) will always carry is the potential to generate unsubstantiated or untrue content, even though this risk may diminish as the tools develop. A challenge for instructors is the concern about AI-enabled cheating and the workload that can come with policing it. One affordance could be the use of AI to take care of administrative tasks.

"The best way to think about this is you are chatting with an omniscient, eager-to-please intern who sometimes lies to you."

Ethan Mollick, a professor at the University of Pennsylvania's Wharton School of Business, in an NPR interview on December 19, 2022

The Biden administration issued an executive order aimed at setting comprehensive guidelines on artificial intelligence technology, allowing multiple agencies to begin regulating the emerging technology and protecting individuals' privacy in the absence of any legislation governing AI. Explore the resources below to learn more and consider the challenges and opportunities presented by generative AI.

UIS workshop recordings. View the Explore ChatGPT Session that COLRS hosted with the UIS Learning Hub and Center for Faculty Excellence in January 2023. Assignment ideas and an overview of generative AI are covered in "Generative AI in Our Classrooms" (presented by Layne Morsch and Emily Boles on December 1, 2023, for the Central and Southern Illinois Faculty Development Network).

U of I System guidance. The University of Illinois System convened a focus group from across the three universities to create guidance documents on Generative AI, including specific information for instructors and students.

Mind the AI Gap – Equity, Access & the Risks of Standing Still. In this Contact North webinar, Dr. Philippa Hardman explores the technical, ethical, and cultural fears and risks associated with the rise of AI in the education context, along with the complex issues surrounding generative AI in higher education.

Syllabus Statements

Syllabus statements are a great way to frame the conversation about generative AI with your students. COLRS has created a shared document of generative AI syllabus statements for UIS instructors. Please feel free to send your statement to COLRS staff to be added to the document.

A set of UIS syllabus and assignment icons represents common generative AI course policies and acceptable use. These icons have been adapted from the Oregon State Syllabus and Assignment AI Icon Project.

Syllabus Resources


Cheating with AI: Classroom Strategies and Detection Tools

There is an arms race taking place between AI and AI detection tools. It is a race that educators cannot win: the generative AI detectors will always be catching up to the tools. There is a significant negative impact on students when AI detectors produce false positives. Research has also shown that AI detectors are more likely to label text written by non-native English speakers as being written by generative AI (Myers, 2023). Turnitin has said that it has a 1% false-positive rate for AI detection. If that is true, then out of 4,000 papers submitted by UIS students each term, roughly 40 students could be falsely accused of cheating with generative AI tools.
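For a sense of scale, here is a minimal back-of-the-envelope sketch in Python. It simply multiplies the figures from the paragraph above; it assumes each falsely flagged paper belongs to a distinct student, and the 1% rate is Turnitin's own claim rather than an independently verified number.

    # Back-of-the-envelope estimate of false accusations from AI detection.
    # Assumes Turnitin's advertised 1% false-positive rate holds in practice
    # and that each falsely flagged paper belongs to a distinct student.
    papers_per_term = 4000        # approximate UIS submissions per term (from the text above)
    false_positive_rate = 0.01    # Turnitin's claimed rate

    falsely_flagged = papers_per_term * false_positive_rate
    print(f"Papers falsely flagged as AI-written per term: {falsely_flagged:.0f}")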

Becoming the cheating police does not teach our students to value their voice, agency, and creativity. Rather, we need to encourage our students to view conversational AI as a tool. Conversations and classroom activities that demonstrate intentional uses for AI are effective strategies with many students. 

The UIS Academic Integrity Council sent a guidance memo on generative AI and student work to UIS faculty on April 3, 2023.

In classroom discussion and reflection on AI, we need to position humans -- instructors and students! -- as experts and critics of the tool and its output. We recommend reinforcing that position often.

Students who are intrinsically motivated to learn are not likely to cheat. To encourage intrinsic motivation, try to provide room and support for autonomy, mastery, and purpose in your assessments.

  • Allow students to self-direct a portion of the assignment. Can you allow them to pick a topic or direction? Can they customize it to address a project at work or to further research in a favorite topic?
  • Scaffold your large assignments to provide students the confidence that completing the assessment is possible. Hopeless students who aren't sure what to do next are more likely to cheat. 
  • Tell your students your "why." What is the purpose of the work? What will it enable them to do in your field, later in the class, or at a job? Let them know why they should care about this work.

Review your assessments and activities. Are they asking students to recite facts? If so, those are easy targets for AI chatbot use. Try to put a twist on those assessments: add an element of comparison, ask students to reflect on their own experiences, or add a local angle. These types of work are beyond the capabilities of current AI chatbots.

Another effective action is to create explicit assignment instructions that address what you view as appropriate uses of AI in each of your assignments and what you view as cheating. This keeps your knowledge of AI tools centered and provides students with guardrails for their behavior. Show students that using AI tools can save time and reduce busywork without relinquishing their agency and voice or missing opportunities for growth and learning.

Even if you do all of this, some students will still cheat. It will happen. What are instructors to do?

  • First, get to know AI writing style. We suggest using ChatGPT and other conversational AI tools to learn to recognize the consistent output (writing style) of the chatbot. Your observations will be your strongest tool.
  • Second, check the suspected writing with several tools. When you suspect AI writing, we recommend feeding a large amount of the suspected text into several detectors and comparing the results. Longer pieces of text (250+ words) give the detectors more evidence to consider and more patterns to find in the writing. Know that the detectors are fairly easy to trick: with some light editing and error introduction, you can easily change the result from "likely an AI chatbot" to "likely human." Also, be aware that creative outputs from ChatGPT (writing in an accent or style, writing a poem or lyrics) aren't detectable by these tools.
  • Third, have a conversation with your student(s). Ask them if they used an AI chatbot in their work. Some student work can sound as stilted as AI chatbot output, so they may not have cheated. Since a chatbot is a tool and not a source, using one isn't plagiarism, and it can be a gray area for students if you haven't specifically addressed it in class or in your syllabus. Consider giving them a chance to redo the work after the conversation, and tell them the consequences for using an AI chatbot moving forward (zeroes, academic integrity violations, etc.).

The AI Detectors

Search Engine Land released a comparison of AI detectors. AI detection tools "grade content based on how predictable the phrase choices are within a piece of content." Does the text align with the pattern an AI would likely follow in creating the content? The AI detectors generally look at two qualities (a rough illustration in code follows the list):

  • Burstiness: the variation in sentence length and tempo across a passage. AI text tends to have uniform, predictable sentence structure.
  • Perplexity: how unpredictable the word choices are within a sentence or collection of sentences. AI text tends to choose highly predictable words.
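For a concrete sense of what these two signals measure, here is a minimal Python sketch. It is not any vendor's actual algorithm: real detectors score perplexity with a large language model, while this sketch uses sentence-length variation as a burstiness proxy and a crude word-repetition score as a stand-in for predictability.

    # Rough illustration of "burstiness" and "perplexity"-style signals.
    # Not a real detector: a toy proxy for each quantity.
    import re
    import statistics
    from collections import Counter

    def burstiness(text: str) -> float:
        """Variation in sentence length; human writing tends to vary more."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        if len(lengths) < 2:
            return 0.0
        return statistics.stdev(lengths) / statistics.mean(lengths)

    def predictability(text: str) -> float:
        """Crude stand-in for low perplexity: the share of words that repeat
        words already common in the passage. Higher = more predictable."""
        words = re.findall(r"[a-z']+", text.lower())
        if not words:
            return 0.0
        counts = Counter(words)
        repeated = {w for w, c in counts.items() if c > 1}
        return sum(1 for w in words if w in repeated) / len(words)

    sample = ("The committee met on Tuesday. It reviewed the draft policy. "
              "It approved the draft policy. It adjourned at noon.")
    print(f"burstiness: {burstiness(sample):.2f}  predictability: {predictability(sample):.2f}")

A passage with uniformly short, repetitive sentences (like the sample) scores low on burstiness and high on predictability, which is the pattern detectors associate with AI-generated text; the real tools make the same comparison with far more sophisticated models.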

CopyLeaks is a mainstream plagiarism detection tool that developed a detector to distinguish human writing from AI writing. The company advertises a 0.2% false-positive rate and offers a free Chrome extension.

GPTZero was developed by Princeton undergraduate Edward Tian and also uses GPT-4. You may upload entire files to this tool in addition to copying and pasting text. It uses "perplexity" and "burstiness" scores to rate writing as human- or AI-authored.

Turnitin released a beta tool for AI detection in March 2023. It was available at no cost while in beta. In January 2024, it became a separate paid service and is no longer available in our Turnitin subscription.

Select AI Articles and Resources

AI and Education Articles


AI News Articles
