Now is the era of generative AI, and with it come new challenges. How do we implement it in our daily lives? How can we mitigate job displacement? Can we use it for medical research?
For educators, there is one clear problem: how to stop students from using ChatGPT and other LLMs for academic dishonesty.
The bottom line is that language models like ChatGPT have opened up new avenues for both innovation and misuse. While they can boost productivity and learning, there are valid concerns around students potentially exploiting them to circumvent assignments. So, the burning question becomes this: is Canvas equipped to navigate this new era?
Good news: it can, but with more than a few asterisks. Let’s talk about those in this article.
What is SimCheck?
Let’s get one thing straight first: Canvas can’t actually detect whether an essay came from ChatGPT specifically. But the tools it integrates can attempt to tell AI-generated writing apart from human writing.
Canvas offers instructors a powerful weapon in their battle against academic dishonesty using LLMs: SimCheck. But how does it work, and is there more to it than just spotting copied passages?
Let’s break it down.
SimCheck is a plugin by TurnItIn that scans submitted work against a massive database of online sources, including websites and academic journals. SimCheck uses a robust text-matching algorithm to identify any significant overlaps between a student’s work and existing sources.
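To make "text-matching" concrete, here is a toy sketch of how overlap detection can work in principle. This is illustrative only, assuming a simple word n-gram comparison; SimCheck's actual algorithm is proprietary and certainly far more sophisticated.

```python
import string

# Illustrative only: a toy n-gram overlap check, NOT SimCheck's
# actual (proprietary) matching algorithm.

def ngrams(text, n=5):
    """Return the set of word n-grams in a text, ignoring case and punctuation."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    words = cleaned.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)

source = "The mitochondria is the powerhouse of the cell and produces ATP."
submission = "As we know, the mitochondria is the powerhouse of the cell."
print(f"Overlap: {overlap_score(submission, source):.0%}")
```

A real system would run this kind of comparison against millions of indexed sources and weigh match length and rarity, but the core idea is the same: long shared word sequences are unlikely to occur by chance.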
But ever since ChatGPT gained popularity, SimCheck also started offering free AI detection for Canvas instructors. This helps them identify assignments that might be copied, paraphrased, or rewritten from AI-created content, making it harder for cheaters to fly under the radar.
Should Teachers 100% Trust SimCheck AI Detection?
Absolutely not.
Think of it this way: no AI detector is ever truly 100% accurate. Even the companies at the forefront of this technology, like OpenAI, say that detection is a fool’s errand at this point. Education is such a pivotal part of our lives that a single mistake could ruin someone’s future. Entrusting the entire task of checking for AI use to a detector is the wrong way to navigate this new era of academia.
But don’t take it from me. Take it from William Quarterman and Louise Stivers.
How Can False Positives Affect Students?
The aforementioned UC Davis students were falsely accused of using AI after their professors tested their work against two popular detectors: GPTZero and TurnItIn (the company behind SimCheck). The only way they avoided permanent academic punishment was by appealing and publicizing their story. And yes, that’s right: somehow, the burden of proof is on the students.
Smaller Cases
There are few, if any, documented SimCheck cases online, so let’s broaden the scope a bit, since most AI detectors are unreliable to some degree. Here are some false positive cases we’ve heard about over the past year:
- Texas A&M University Students: A Texas A&M professor threatened to fail his entire class last May after their essays triggered an AI detection tool. The situation was fortunately resolved by giving a different writing assignment instead. However, it raises the question of why the burden of proof falls on students when AI detectors can be unreliable.
- New Zealand High School Students: Two high-achieving year 12 students in New Zealand were falsely accused of using ChatGPT on their assignments. It was a high-stakes situation that could have kept them out of top universities had the accusation stood. After review by Research on Academic Integrity in New Zealand, both students were cleared.
- Non-Native English Writers: Stanford professor James Zou tested 91 essays against seven AI detection tools. On average, the tools were 8% more likely to flag writing by non-native English speakers as potential AI versus native speakers. The likely reason is that non-native writing tends to have less complexity, similar to generative AI output. Ironically, this could incentivize non-native writers to actually use AI language models to try evading detection.
With that said, what’s the best way to actually detect AI misuse?
Other Ways Teachers Detect AI When You Use Canvas
There are two main ways professors can detect LLMs in a student’s work without using AI detectors like SimCheck. These are the following:
- Unusual Activity. Canvas can detect when you navigate away from a page or copy something from another website. It also monitors behavior such as how much time you spend on a question, whether you’re answering quizzes and essays suspiciously fast, and other patterns that may indicate cheating.
- Reading Your Work. Truth be told, if you read a lot of AI-generated writing, it’s not that hard to tell it apart from human writing. There are telltale signs, like overusing transition words, making tons of lists, leaning on certain stock phrases, and more.
At the end of the day, the most responsible approach is to use AI detectors but also apply human judgment. This way, you get the best of both worlds. It’s still not perfect, but it’s the best we’ve got while AI detection is still in its infancy.
Can Students Avoid AI Detection?
Yes, students can actually avoid AI detection, and it’s not even that hard to do. The reason is simple: if detectors produce false positives, they inevitably produce false negatives too. Off the top of my head, I can think of three ways a student could avoid AI detection.
- Manual Tweaking. A student can strip out AI markers like repetition, lists, and overused stock words so detectors can no longer classify the essay as machine-generated.
- AI Bypassers. There are also paraphrasers whose sole purpose is to rewrite your text so it doesn’t read like it came from an AI. A few examples of this technology are HideMyAI, Netus, and BypassGPT. But by far, the most potent one is Undetectable AI. You can read our full review of the product here.
- Prompt Engineering. A student can also simply instruct an AI to better evade detectors by using certain keywords in the prompt, like “creative” or “first-person perspective”. It’s also possible to feed an LLM your previous essays and tell it to generate a new essay in your writing voice.
All Said and Done
Canvas remains one of the top learning management systems in the world — but it’s not without shortcomings. Not having a robust AI detection plugin and relying on TurnItIn is a misstep when there are better choices around.
That said, you can’t really blame them for using the academic standard. And it’s also not their fault that AI detection can’t keep up with truly intelligent LLMs. However, as models continue to evolve, the need for a better detection system integrated with Canvas becomes more and more urgent.
If it doesn’t happen anytime soon, more students will get falsely accused of using AI and receive academic punishments for it. That’s something that universities must protect them from, not cause.
Want to learn more about the role of AI detection in education? Hear from teachers worldwide here.