Turnitin reassures educators with promise of AI detection software

As college professors worry that students will use new technology to cheat, Turnitin promises that AI-generated writing detection is coming soon.
25 January 2023

Source: shutterstock https://www.shutterstock.com/image-vector/illustration-happy-little-robot-graduated-proud-1163324101


As AI-generated essays written by chatbots have become sophisticated and coherent, fears about how they'll challenge society have mounted: AI threatens to replace journalists, musicians and, most recently, academic thought.

Last week, Turnitin CEO Chris Caren acknowledged in a blog post the “surge of interest and concern surrounding ChatGPT,” which he said is “a challenge and an opportunity for the education community.”

Students have (supposedly) been giving ChatGPT prompts to write their essays for them, and at this stage there aren’t academic guidelines to regulate this.

To mitigate the risk that students hand in AI generated writing as their own work, Turnitin has reassured educators that it will launch AI-created content detection software in the first half of this year. A prototype will be available free to existing customers while the company gathers data and user feedback.

The coherence of AI writing means that professors might not notice a submission is not a student’s own work, and since the content is freshly generated rather than copied, it won’t be flagged by a standard plagiarism check. However, Turnitin’s VP of AI, Eric Wang, told The Register that “even though it feels human-like to us, [machines write using] a fundamentally different mechanism”: chatbots write by placing the most statistically probable word in the most probable position.
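That mechanism can be illustrated with a toy sketch (assumptions only — this is not Turnitin's or OpenAI's code, and real models score hundreds of thousands of candidate tokens with a neural network rather than a hand-made lookup table): given the recent context, the model scores every candidate next word and greedily picks the most probable one.

```python
# Toy illustration of "most probable word in the most probable place":
# a hypothetical table of P(next_word | previous two words).
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.55, "ran": 0.25, "slept": 0.20},
    ("cat", "sat"): {"on": 0.70, "down": 0.20, "quietly": 0.10},
    ("sat", "on"): {"the": 0.80, "a": 0.20},
    ("on", "the"): {"mat": 0.60, "roof": 0.40},
}

def generate(context, steps):
    """Greedily extend `context` by always choosing the most probable next word."""
    words = list(context)
    for _ in range(steps):
        dist = NEXT_WORD_PROBS.get(tuple(words[-2:]))
        if dist is None:  # context not in the toy table: stop
            break
        words.append(max(dist, key=dist.get))
    return " ".join(words)

print(generate(["the", "cat"], 4))  # → "the cat sat on the mat"
```

Because the output is assembled from statistically likely choices rather than copied from any source, a similarity-based plagiarism check has nothing to match it against — which is why Turnitin needs a different kind of detector.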

Since OpenAI released GPT-3 in 2020, Turnitin has been preparing for AI-created writing. Now, it recognises that while AI has potential to “support and enhance” learning, “there is a more pressing and immediate need to know when and where AI and AI writing tools have been used by students.”

There’s no smoke without fire, but amongst the frenzied headlines about teachers’ woes, there are few testimonials from professors who have actually had AI generated essays handed in. Is it so hard to detect that most cases have gone unnoticed? Or is it less of a threat to the academic institution than we might believe?

Man vs machine: identifying AI generated essays

Darren Hick, a philosophy professor at Furman University, and Antony Aumann, a professor of religious studies and philosophy at Northern Michigan University, spoke to Insider about their run-ins with AI writing. Both noticed something was slightly amiss in student essays and confronted the students, who admitted to having used generative AI.

For Hick, the clue was a small falsity stated as an absolute fact that gave the chatbot away. Aumann’s accusations came, he says, because “the chat writes better than 95% of [his] students could ever.”

Hick’s observation points to a genuine flaw in AI generated writing: he called it “really well written wrong.” ChatGPT’s creators OpenAI have warned of just that – “plausible-sounding but incorrect or nonsensical answers” – pointing out how a model trained by reinforcement learning has no source of truth.

With some extra effort, students could pass off AI-written work as their own. In a Reddit thread, one user explains how they “used chat gpt [sic] to write 90% of it, added [a] few more things from myself, [and] changed the text a little bit,” before submitting the essay. Turnitin’s plagiarism check returned a similarity score of less than five percent.

Responses, although mixed, aren’t as quick to condemn ‘cheaters’ as some media outlets have been. Using ChatGPT in this way is compared to using a calculator — a tool, not a cheat. Most interesting is a comment from a Redditor who says they’re a teacher. The user asks why using ChatGPT is dishonest, given that “it won’t write anything worth reading if you don’t feed it the right information [or] revise and edit the outcome.”

AI essay assignments

Setting the construct of the college essay aside for a moment: in a high school setting, allowing AI-generated writing might even be beneficial. The same user makes this case, arguing that it helps “level the ‘playing field’ for disaffected students who have fallen behind on some of the intricacies of writing, but are able to, nonetheless, formulate interesting ideas and/or points of discussion. They can benefit from software that helps them with phrasing and rhetoric.”
In fact, one might ask, wouldn’t an exercise involving correcting the tone and cadence of machine written text be a useful teaching method?
UK tabloid newspaper The Mirror reported that UK schoolteachers are concerned about ChatGPT’s capabilities, noting that the responses it wrote to English Literature, English Language, and History essay questions would earn passing GCSE grades. The article doesn’t comment on the likelihood of GCSE students using AI-generated responses in their exams.

At the collegiate level, academic standards seek to ensure that testing proves students have earned a higher qualification on ability alone. Unclear guidelines on the use of AI-generated essays might give some students an unfair advantage — but might preventative measures also need to tackle existing advantages, such as wealth or family alumni connections?

There are already bots available online that can detect machine-generated writing. The GPT-2 Output Detector Demo is one such offering: it displays the likelihood that a text is ‘real’ or ‘fake’ writing. Although it can seemingly detect AI-written work, longer blocks of text are required for an accurate reading: “the results start to get reliable after around 50 tokens.”
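The intuition behind such detectors can be sketched in a few lines (a conceptual illustration only, with a hand-made probability table — the GPT-2 Output Detector actually uses a fine-tuned neural classifier): because machine text is built from high-probability tokens, its average “surprise” (negative log probability per token) tends to be unusually low, and short passages give too few tokens for a reliable estimate.

```python
import math

def avg_surprise(tokens, probs, default=0.001):
    """Mean -log2 P(token) under a token probability table.
    Unknown tokens get a small default probability (high surprise)."""
    return sum(-math.log2(probs.get(t, default)) for t in tokens) / len(tokens)

def classify(tokens, probs, threshold=4.0, min_tokens=50):
    """Flag text as machine-like when its average surprise is low.
    Mirrors the detector's caveat: short texts give unreliable scores."""
    if len(tokens) < min_tokens:
        return "inconclusive (too short)"
    score = avg_surprise(tokens, probs)
    return "likely machine-written" if score < threshold else "likely human-written"

# Hypothetical probability table and samples, for illustration only.
probs = {"the": 0.5, "cat": 0.25, "sat": 0.25}
machine_like = ["the", "cat", "sat"] * 20           # 60 highly predictable tokens
human_like = ["zyzzyva", "quixotic", "fjord"] * 20  # 60 rare, surprising tokens

print(classify(machine_like, probs))   # → "likely machine-written"
print(classify(human_like, probs))     # → "likely human-written"
print(classify(["the", "cat"], probs)) # → "inconclusive (too short)"
```

The `min_tokens` guard reflects the quoted caveat that “the results start to get reliable after around 50 tokens”: with only a handful of tokens, one odd word swings the average too much to call.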

Some university professors are weighing draconian measures to assuage cheating concerns; Aumann told Insider that some were considering a return to traditional handwritten assessments like blue books.

Within the debate, Turnitin’s new software doesn’t aim to get ChatGPT banned from academia, but rather, in the company’s words, to “enable teachers and students to trust each other and the technology.”