As another school year comes to a close, the debate around AI writing detection tools like TurnItIn and Canvas is, once again, the talk of the town. TurnItIn was founded back in the 90s with the goal of promoting honesty and integrity in education by detecting potential plagiarism. But does it really stay true to those goals with its AI detector?
The question has only become more relevant since the pandemic pushed many institutions into hybrid learning. Studying from home gives students round-the-clock access to AI tools, which also means more opportunities to cheat their way to an A+ if they want to, despite the robust protections that learning environments like Canvas provide.
Many argue that AI detection is still too inaccurate to support high-stakes decisions about academic dishonesty. Using it as the sole basis for accusations of cheating raises concerns about fairness and due process because of false positives.
So, in this article, let’s answer one simple question: can we trust Canvas and TurnItIn to be education’s tool of choice against AI?
What Students Think About TurnItIn AI Detection
No two students are the same, and opinions on this matter vary widely. Some argue in favor of detection as a deterrent to academic dishonesty, but from what I see online, the overwhelming verdict on AI detection is "not yet." There have been numerous statements, including from OpenAI, saying that detection is unreliable in its current implementation.
There are three main reasons people are wary of TurnItIn's detection:
How Accurate Is TurnItIn?
We can't test TurnItIn's AI detection accuracy ourselves like we usually do, but many others have, so I'm going to cite one of them.
Last year, The Washington Post ran a test with five students and sixteen essays: some human-written, some AI-generated, and some a mix of both. The result?
TurnItIn got over half of them wrong.
The fact of the matter is, even if TurnItIn's detector were 99% accurate, one out of every hundred innocent students would still get a false positive. Most of the time, that could be resolved by simply speaking with a professor. But that isn't always the case…
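To see why even a small error rate matters at scale, here is a rough back-of-the-envelope sketch. The enrollment figure and 1% false-positive rate are hypothetical numbers for illustration, not TurnItIn's published statistics:

```python
# Hypothetical illustration: even a highly "accurate" detector
# produces false accusations at scale.
def expected_false_positives(num_students: int, false_positive_rate: float) -> float:
    """Expected number of innocent students flagged, assuming every
    submission is human-written and each flag is independent."""
    return num_students * false_positive_rate

# A detector with a 1% false positive rate, applied across a
# hypothetical 20,000-student university:
flagged = expected_false_positives(20_000, 0.01)
print(flagged)  # 200.0 innocent students flagged
```

Two hundred students facing misconduct hearings over work they actually wrote is not a rounding error, which is why the false-positive rate, not the headline accuracy, is the number that matters.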
Students Getting Hit With False Positives
One such false positive involved William Quarterman, a student at UC Davis. Despite never using ChatGPT, he was slapped with a failing grade after his professor ran his exam through an AI detector, which flagged it as AI-written. The toll on Quarterman's academics and mental health was severe until the university finally admitted it lacked reasonable evidence.
But Quarterman was just the beginning. Fellow UC Davis student Louise Stivers also faced accusations of AI use for a case brief, this time flagged by TurnItIn. In an ironic twist, it was Quarterman who came to Stivers’ aid by advising her alongside his father on how to overturn the unjust academic sanction.
As these cases show, the growing paranoia around LLMs in academia has created a messy situation rife with false positives. Countless similar cases exist worldwide, and many of them never get publicized.
Prejudice Against International Students
AI detection can also be prejudicial, as demonstrated by James Zou, an assistant professor at Stanford.
According to his research, AI detectors are 8% more likely to flag non-native English speakers because they tend to write simpler, less complex sentences, which detectors can easily mistake for AI output.
What Students Think About Canvas AI Detection
I know this article is titled "Canvas vs. TurnItIn," but we can't really compare the two, for one simple reason: Canvas doesn't have its own detector. It uses a plugin created by TurnItIn for AI detection.
Whatever’s been said here about TurnItIn — the good and the bad — also applies to Canvas’ AI detector.
Actual Opinions From The Education World on AI Detection
Recent reports from students and educators across various online forums reveal that TurnItIn’s new AI writing detection feature is plagued with accuracy issues, frequently flagging human-written work as potential AI output.
- Multiple students have shared frustrating experiences of having their original essays and papers incorrectly flagged by TurnItIn with high AI probability scores, in some cases as high as a whopping 100% likelihood, despite never using artificial intelligence.
- One graduate student's paper was flagged at 34% AI probability on a second detection attempt, even though the student confirmed they did not use AI writing assistance. The professor did not fully believe them and awarded only an "alright" grade as a result.
- Another student had a handwritten essay incorrectly labeled as 50% AI generated by TurnItIn’s detector.
- And yet another had their work flagged at 97% AI likelihood, even though they only used ChatGPT minimally to clarify points and rewrite a few sentences, not for full content generation.
- A graduate student provided a logical mathematical critique of why TurnItIn’s approach is fundamentally flawed for accurately detecting AI text due to the technology’s current limitations.
The list goes on and on.
What Do Teachers and Graduate Students Think?
Several educators have also weighed in, many casting doubt on TurnItIn's accuracy claims. A college professor pointed out that TurnItIn itself admits the detector is only around 80% accurate based on its own testing. A history professor on Quora noted that many of their own human-written papers triggered high AI probability flags from TurnItIn when they clearly did not involve AI.
The discussion threads reflect growing skepticism toward blindly trusting AI detection results without oversight or an opportunity for students to contest false claims. Educators and students alike are calling for more transparency around the capabilities and error rates of these tools.
Wrapping Up
As the debate around AI writing detectors like TurnItIn and Canvas persists, one thing becomes very clear: relying solely on their verdict for high-stakes decisions like academic misconduct allegations is not without risk. The mounting reports of inaccuracies, false positives affecting innocent students, and potential bias against certain groups raise serious concerns about fairness and due process.
While these tools can provide a preliminary flag, they should not be the sole arbiters. A more balanced approach would involve human review and an opportunity for students to defend themselves. The challenges posed by AI writing are real, but so are the dangers of quick band-aid solutions. As AI continues to evolve, so must our perspective.
Only with a nuanced approach can we navigate this era of artificial intelligence while preserving the essential role of education.
Want to learn more about AI detection? I highly recommend this article to learn what teachers think of it as a whole.