Can AI-Written Essays Be Stopped? (Part 1/2): A New Approach
Welcome to this week’s Deep-Fried Dive with Fry Guy! In these long-form articles, Fry Guy conducts in-depth analyses of cutting-edge artificial intelligence (AI) developments and developers. Today, Fry Guy dives into the controversy surrounding AI-written essays in education. We hope you enjoy!
*Notice: We do not receive any monetary compensation from the people and projects we feature in the Sunday Deep-Fried Dives with Fry Guy. We explore these projects and developers solely to reveal interesting and cutting-edge AI developments and uses.*
If you ask any teacher what their biggest problem is nowadays, you’re probably going to get the same answer: AI!
Students at all levels—from middle schoolers to PhD candidates—are using AI to write their essays. In fact, a recent study found that 4 out of 10 college students are using AI for their assignments. For many teachers, and for universities as a whole, this issue has been a confusing maze, rife with frustration. For students, however, large language models (LLMs) like ChatGPT have been a godsend, allowing them to “write” an entire essay with a single prompt.
Many people view ChatGPT and its friends as the end of writing altogether. With such technology, students can cheat right under their teachers’ noses, and there is little that can be done to stop it. Many have tried to stop it and failed, so this might just be the end of writing and education as we know it …
This article is a call to PUMP THE BRAKES! Maybe if we take a step back and assess this situation in a new light, the future won’t look so bleak. And maybe there is hope for this AI thing in education after all.
THE DEATH OF EDUCATION?
LLMs like ChatGPT and Gemini are free to use, and students are chomping at the bit to have these AI systems write their essays for them in school. Many educators are trying to dismiss the problem by saying, “ChatGPT doesn’t even write that well.” But while it may not write at a level fit for academic journals (though even that is happening), we have to be honest with ourselves: LLMs like ChatGPT are pretty darn good. Yes, LLMs hallucinate and get things wrong. Yes, LLMs use language patterns that are often quite annoying. And yes, LLMs make generalizations, struggling to take firm stances on issues or provide critical arguments. But given even a halfway-decent prompt, these models can produce writing that is often much better than that of the average college student.
Some educators are scared to admit that LLMs like ChatGPT are good writers. They want to believe that LLMs produce poor, or at least subpar, content. This way, they don’t have to face the reality that students could use an LLM to pass their class with ease. Other educators, however, recognize the potential of these models to produce solid writing, oftentimes writing that would earn students an “A” on a college essay. So on the one hand, we have educators in denial about the fact that LLMs can pass their classes with flying colors, making their classrooms dangerous breeding grounds for AI abuse. And on the other hand, we have educators quivering at the thought of a future where classical writing becomes obsolete.
Educators of all kinds care deeply about integrity within their classes and the learning outcomes of their students. As more students lean on AI to write their essays, many believe it will damage these learning outcomes. For social ethics professor Brant Entrekin, AI can be likened to plagiarism and should be treated as such. Entrekin states, “For me, a student using AI to write their whole essay is really no different than the plagiarism of old: they are taking a cheap and easy route to avoid sharpening the skills that writing the essay is meant to develop in students.” As a result of such concerns, universities of all kinds have attempted to crack down on AI usage in the classroom. Many have implemented policies such as giving students Fs on assignments when they use AI, or even going so far as to dismiss students from classes altogether for using the technology. In some instances, educators are turning back to handwritten assignments … inevitably leading to devastating hand cramps!
One glaring problem with an “AI crackdown” approach is that there is currently no good way to detect AI writing. Sure, there are platforms like GPTZero which claim to detect AI writing accurately. However, these tools have proven unreliable. There have been many cases where 100% AI-generated text has been approved as human-written, and cases where 100% human-written text has been flagged as AI-generated. In one shocking case, Shakespeare’s Macbeth was flagged as “AI-generated.” And we know he wasn’t using AI, unless he was keeping a big secret from the world in 1606! Even Sapling, noted as one of the “best” AI detectors in the industry, achieves only 68% detection accuracy.
Detection is incredibly difficult because AI models are, by their very design, trained to mimic the patterns and styles of human writing. Not to mention, there are hundreds of freely available LLMs, each trained on different, continually updating datasets. This means that over time, AI writing will only get better and harder to distinguish from genuine human writing. On top of that, there are multiple tools students can use to “humanize” AI-written text. Tools like Humanize AI can take a piece of AI-written text and convert it into a more human-like tone, oftentimes dumbing it down or purposely introducing the small, common mistakes in grammar or sentence structure that people make. So there is simply no way for AI detection models to keep up.
A lack of detection ability means a lack of enforcement over AI usage. Students who are careless with AI may get caught by keen-eyed professors, but the savvy ones will continue to skate along, using AI to earn their “education.”
A NEW TOOL
Maybe we are thinking about this whole “using AI to cheat on essays” thing entirely wrong. What if instead of educators doing all they can to stop AI, they do all they can to help students use AI effectively?
Consider the following:
Imagine you look out your window and see your neighbor attempting to cut down a tree in their backyard. You watch them pull out their handsaw and start cutting, struggling to make any progress. You know there is an electric chainsaw in your shed, given to you as a gift last Christmas—a tool designed to cut through large trees with ease. With the chainsaw, your neighbor could get through the trunk in a fraction of the time and with far less strain, finishing the job quickly with a cleaner, more precise cut. However, you know that chainsaws are much more dangerous to use (you yourself don’t know how to use one), so you leave your neighbor to their business.
When we reflect on educational practices, we ought to reflect on the goal of education itself. If the goal is to teach students how to write (how to use the handsaw)—as in an English class—then it might be a good idea to find innovative ways to minimize the use of AI, at least with regard to writing essays. However, if the goal is to help students produce the best writing they can (to get the tree down), maybe it would be best to embrace AI rather than trying to deter students from using it.
In grade school, I remember being told that I could not use a calculator for my homework because I would not always have one when I needed to do math (Insert iPhone here!). I would be much better off now if my teachers had taught me what half the buttons on the calculator do instead of deterring me from using it altogether. Something similar might be said for AI. In fact, one college student put it this way: “I am going to be able to use AI at my job, so why can’t I use it at school?” Further, one might argue that if students don’t learn how to use AI in school, they won’t be as prepared as they could be when they enter the job market. As economist Richard Baldwin put it, “AI won't take your job. It’s somebody using AI that will take your job.” If educators aim to prepare students for real-world applications, then it might be helpful to let AI into the classroom, at least in some small, responsible way.
Now, you might be thinking, “There are basic skills that everyone needs—like writing—and a reliance on AI could keep people from ever developing them!” This is certainly true. I wouldn’t be too well off if I had never learned addition, subtraction, and multiplication, for instance. These skills are vital in many areas of life, as is being able to write effectively. So surely there are some basic skills that need to be taught even in a world where AI systems are readily available. Writing as we know it may change, but the skill of writing (and especially the critical thinking behind it) remains of vital importance.
So this leads to the final point: Can we find a way to teach students how to leverage AI models to produce writing that maximizes their unique writing abilities, without sacrificing a worthwhile education in writing skills? This is what we will explore in next week’s Deep-Fried Dive: Can AI-Written Essays Be Stopped? (Part 2/2): Using AI To (Responsibly) Write Your Essay. Until then, enjoy your hand cramps.