Key points:

  • AI detection tools are skyrocketing in popularity, but how effective are they?
  • A test of several AI detectors offers an eye-opening look at whether AI-generated pieces are identified as such
  • See related article: Is AI the future of education?

Nearly every school or university faculty is having at least a few conversations about how to address a world rich in easy-to-use artificial intelligence tools that can generate student assignments.

Multiple AI detection services claim they can identify whether text was generated by AI or written by a human. Turnitin, ZeroGPT, Quill, and AI Text Classifier each make this claim and are in use by higher-ed faculty and K-12 educators.

To gauge how well Turnitin identifies AI-generated material, students in a doctoral methods course were asked to submit one or two assignments fully generated by ChatGPT or another generative tool such as Google’s Bard or Microsoft’s Bing AI. Most students appear to have used ChatGPT. Of the 28 fully AI-derived assignments, 24 were flagged as 100 percent AI-generated; the other four ranged from zero to 65 percent AI-derived. The papers ranged in length from 411 to 1,368 words.

Turnitin also returned evidence of potential plagiarism through its Similarity Scores, which ranged from zero percent to 49 percent; the average AI-generated paper was 13.75 percent similar to existing material. (You can find Turnitin’s AI Writing detection tool FAQ here.)

Source: https://www.eschoolnews.com/digital-learning/2023/07/03/we-gave-ai-detectors-a-try-heres-what-we-found/