Pulling Back the Curtain on AI Chatbots Writing Papers
Students using AI tools to compose papers and complete assignments is education’s new monster in the closet. At least that’s what a scant few instructors and media outlets pushing a fear-based narrative would have us believe. Do not be afraid! Bard will not be doing children’s homework, and ChatGPT aiding student paper-writing is nothing to dread. The AI chatbot conversation may be new at the teacher water cooler and in professional development sessions, but most administrators and instructors have already investigated the potential threat AI poses to evaluating student writing. Here is why most education professionals put it in the “no teeth” category.
While the output of AI chatbots can be useful and impressive, that output is generated by a technology that relies on language patterns, not critical thinking. AI tools are not doing the reading and analysis that many paper-writing courses require to earn a decent grade. This distinction between pattern matching and critical analysis can lead a program like ChatGPT to fabricate sources to support an argument or invent details when summarizing a piece of writing. Until AI is designed to evaluate the reasonableness of the language patterns it identifies, most instructors will easily uncover transparent evidence of chatbot usage.
Teachers are also armed with the pedagogical practices of formative and summative assessment to stymie cheating and plagiarism with AI chatbots. Formative assessments are formal or informal opportunities for the teacher and student to evaluate whether that student correctly understands a new skill or concept. Beyond gauging a student’s comprehension, instructors can use formative assessments to learn a student’s writing tendencies, their grammar-based misconceptions, and the boundaries of their universe of thought on a piece of writing or the assignment’s subject matter. A chatbot’s unorthodox word choices and sentence or paragraph structures that consistently conflict with a student’s in-class work make it difficult to pass off the chatbot’s output as a student-written paper. Once a red flag is identified, a summative assessment of the student’s work will illuminate whether a chatbot was used inappropriately.
I was teaching a Transition English (basic writing) course last spring when I received my first ChatGPT-aided paper. The red flags were numerous: suspicious formatting, unusual word choices, sentence and paragraph structures the student had never produced, and my knowledge that the pressure on him was mounting. I decided to engage in a one-on-one summative assessment with this student. While the rest of the class engaged in an activity, I spoke with him privately outside the classroom. I asked him several questions about the paper’s ideas and about words he used. He could not respond to any of them. From there I explained to him that using ChatGPT as a tool to help write a paper is a good thing! However, no computer program can replace the critical thinking required to ace a paper-writing assignment. He took the lesson and went back to the struggle, the thrill, and the pain of learning and growing.
Educators do not need to fear students using AI chatbots for their paper-writing assignments. As with most cheating, there are myriad ways of detecting it. AI chatbot technology is rapidly advancing, but systemic countermeasures against using it to write academic papers are already emerging. In a country where over 52 million adults perform at the lowest levels of literacy, is AI in education a reasonable locus of concern? There is certainly no chatbot that can answer that question for us, but it is a question worth asking.