Are you a teacher who’s struggling to keep up with AI? Or perhaps a student wondering if your hard work might be unfairly flagged? As we step into 2024, AI detectors have become somewhat of a necessity in classrooms and workplaces alike. But why?
As LLMs actively reshape how we study and work, AI detectors continue to position themselves as the supposed defenders of authenticity. They promise to separate human creativity from machine-generated content, but the question remains: can we truly rely on them?
Let’s dive into the world of AI detection — exploring why it matters, how it’s being used (and also, misused), and most importantly, whether it’s something you should be incorporating into your personal toolkit in 2024.
Why AI Detection Matters
To answer why AI detection matters, we first need to understand what these detectors actually are. The best way I've found to describe them is as a reaction to an action.
Newton's third law states that for every action, there's an equal and opposite reaction. AI detectors only exist (and are popular) because LLMs like ChatGPT changed the way the world works. Because the potential for misuse seemed to outweigh the potential for good, AI detectors were born out of necessity: a way to police AI abuse in classrooms and workplaces.
In classrooms, AI detectors serve as sentinels of academic integrity. They flag potential plagiarism and unauthorized information use during exams. This creates a fair environment for all students. After all, over-reliance on LLMs may hinder critical thinking development.
On the internet, AI detection is also an important tool. In an era of rampant misinformation, it helps authenticate the information we consume daily since AI-generated content can be produced at lightning speed and people are taking advantage of that. Detectors level the playing field by highlighting instances where AI might be used unethically.
The value of AI detection doesn't rest on catching potential cheaters or scammers; its true virtue lies in preserving the value of human creativity, intellect, and effort.
Can You Still Trust AI Detectors?
On its own, I don’t believe so.
The most popular AI detection tool in schools is Turnitin, and that popularity is well-earned. After years as the go-to tool for traditional plagiarism checks, the platform stepped up to the plate and built an algorithm with a claimed false positive rate of just 1%. But therein lies the problem…
Once we reduce students to statistics, we forget that the 1% are people too: people who worked hard to finish their assignments, complete their admission essays, and painstakingly scrutinize and rework every word of their dissertations. Even with a 1% false positive rate, roughly one in every hundred genuinely human-written papers will be flagged, meaning there's a real chance of being punished for something you didn't do.
And that’s a chance no student can afford to take.
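To make that 1% concrete, here's a quick back-of-the-envelope sketch. The school size and essay counts are hypothetical, and the calculation assumes every submission is genuinely human-written and that checks are independent:

```python
# Back-of-the-envelope impact of a 1% false positive rate.
# All numbers below (500 students, 10 essays each) are hypothetical.

def expected_false_positives(num_essays: int, fp_rate: float) -> float:
    """Expected number of human-written essays wrongly flagged as AI."""
    return num_essays * fp_rate

def chance_flagged_at_least_once(num_essays: int, fp_rate: float) -> float:
    """Probability one student is wrongly flagged at least once across
    several essays, assuming each check is independent."""
    return 1 - (1 - fp_rate) ** num_essays

# A hypothetical school: 500 students submitting 10 essays each per year.
print(expected_false_positives(500 * 10, 0.01))          # 50.0 wrongly flagged essays
print(round(chance_flagged_at_least_once(10, 0.01), 3))  # ~0.096 per student
```

Under those assumptions, an entirely honest student still runs close to a 1-in-10 chance of being falsely flagged at least once over a year, which is exactly why a detector score alone shouldn't decide anyone's fate.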
How People Outsmart AI Detectors
If you can trick something, it isn't fully trustworthy, and the same goes for AI detectors. Since these tools became mainstream, bad actors have been looking for ways to outsmart them, and it sometimes works. Here are some of the most reliable ways people have tricked AI detectors:
- Removing AI Hallucinations: AI hallucinations are the tendency of LLMs to fabricate information, a result of model limitations, improper prompting, and outdated training data. Editing these out removes an obvious tell.
- Limiting Lists and Repetitions: A telltale sign of AI-generated content is overreliance on lists and repetitions. By removing them, you can significantly lower AI likelihood scores.
- Using Different Words: LLMs have favorite words, such as delve, utilize, and revolutionize. As you read more AI-generated content, you're likely to pick up on the words AI overuses.
- Adjusting Sentence Structure: Machine-generated essays tend to have a uniform rhythm, with sentences of similar length and structure. Varying both makes text read as more human.
- Using AI Bypassers: AI bypassers are paraphrasing tools that enable users to avoid AI detection. While that may sound bad, some bypassers actually disapprove of using their software for academic dishonesty. One of these is Undetectable AI, which isn’t only ethical but also reliable. You can read more about this platform here.
How Should You Actually Use AI Detectors?
At this point, I would just like to point out that I don't think we should boycott AI detectors. On the contrary, I think we should continue using them and provide feedback so their developers can build better detection models in the future.
But there's also a proper way of using AI detectors that people often overlook. Here's how I would use an AI detector if I were a teacher:
Step #1: Pick The Right AI Detector
There are a lot of AI detectors online, but not all of them are created equal. Some have such low detection accuracy that I'm surprised people still use them. To save you the hassle of false positives, avoid Writer and ZeroGPT. Why?
Earlier this year, we conducted in-depth testing of eight of the most popular AI detectors. There were some standouts (which we'll get to in a bit), some in the middle of the pack, and two that were absolutely unreliable. ZeroGPT had a true positive rate of only 36.87%, while Writer had, and I'm not exaggerating, a true positive rate of only 18.67%.
On the other hand, we found Copyleaks, Winston, and Sapling to be the most accurate out of the eight. So, if you’re looking for good AI detectors, pick one of them.
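The figures above are true positive rates. If you want to benchmark a detector yourself, here's a minimal sketch of how those rates are computed from texts with a known origin; the sample predictions and labels below are made up for illustration:

```python
# Sketch of computing true/false positive rates when benchmarking an
# AI detector against texts of known origin. The sample data is made up.

def rates(predictions, labels):
    """predictions/labels: lists of booleans, True = 'AI-generated'.
    Returns (true_positive_rate, false_positive_rate)."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    positives = sum(labels)               # texts actually AI-written
    negatives = len(labels) - positives   # texts actually human-written
    tpr = tp / positives if positives else 0.0
    fpr = fp / negatives if negatives else 0.0
    return tpr, fpr

# Hypothetical run: 4 AI-written texts, 4 human-written texts.
labels      = [True, True, True, True, False, False, False, False]
predictions = [True, True, False, True, False, True, False, False]
tpr, fpr = rates(predictions, labels)
print(tpr, fpr)  # 0.75 0.25
```

Note that the two rates pull against each other: a detector tuned to catch more AI text (higher TPR) will usually also flag more innocent human text (higher FPR), which is why both numbers matter when picking a tool.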
Step #2: Re-examine The Text Yourself
You can use AI detectors to smoke out suspicious papers when you're grading a large batch. But here's the thing: they shouldn't be the end-all-be-all of scoring, or you risk innocent students paying the price.
Instead, you should always re-examine the suspicious papers and use your own judgment. There is a pattern to how AI writes, and you should learn it. If you’re confused, here’s an article detailing how to differentiate AI vs. human writing.
Step #3: Collect Evidence
So let's say there's an essay you think came from an LLM, and most detectors agree. Don't be too quick on the draw and accuse the student. Instead, approach them from a place of inquiry and understanding. Create a safe environment and ask whether the assignment was machine-generated.
If they deny it, the next step should always be to ask for evidence: Google Docs history, browser history, library records, outlines; request everything. Only when they can't provide any should you escalate the issue to the proper authorities.
All Said And Done
It can't be helped that AI detectors remain a controversial topic in 2024. While what they offer is valuable, they're far from infallible. As educators and professionals, we must approach these tools with a balanced perspective, recognizing both their potential and their limitations.
You need a nuanced approach. Rather than relying solely on these technologies, we should view them as one gear in a broader machinery to maintain academic integrity and professional standards. This means combining AI detectors with human judgment, critical thinking, and open communication.
Moving forward, we also need to keep up with developments. As LLMs become increasingly sophisticated, so too must our methods for identifying and addressing their use. And heed my warning: there will come a time when AI detectors won't be able to tell human from AI at all.
Like I said, AI detection was never about punishment — it’s about creating a culture of trust and integrity. By using AI detectors wisely, we can harness their benefits while keeping the drawbacks at bay, ensuring that we’re preparing students for a future where AI is everywhere, but uniquely human skills remain irreplaceable.
Want to learn more about AI? Professors have given their insight and we’ve collected them in this article.