OpenAI, the creator of the incredibly popular AI chatbot ChatGPT, has officially shut down the tool it developed for detecting AI-generated content. The ‘AI Classifier’ has been scrapped just six months after its launch, apparently due to its ‘low rate of accuracy’, according to OpenAI in a blog post.
ChatGPT has exploded in popularity this year, worming its way into every aspect of our digital lives and spawning a slew of rival services and copycats. Naturally, the flood of AI-generated content has raised concerns from multiple groups about inaccurate, machine-made content pervading our social media and news feeds.
Educators in particular are troubled by the ways ChatGPT has been used to write essays and assignments that are passed off as original work. OpenAI’s classifier tool was designed to address these fears, not just within education but in wider spheres such as corporate workplaces, medical fields, and coding-intensive careers. The idea behind the tool was that it could determine whether a piece of text was written by a human or an AI chatbot, in order to combat misinformation.
Plagiarism detection service Turnitin, often used by universities, recently integrated an ‘AI Detection Tool’ that has demonstrated a prominent fault: it gets things wrong in both directions. Students and faculty have taken to Reddit to protest the inaccurate results, with students reporting that their own original work is being flagged as AI-generated content, and faculty complaining that AI-written work passes through these detectors unflagged.
(See, for example, the r/ChatGPT thread ‘Turnitin’s “AI Detection Tool” strikes (wrong) again’.)
It is an incredibly troubling thought that the makers of ChatGPT can no longer differentiate between what is a product of their own tool and what is not. If OpenAI can’t tell the difference, then what chance do we have? Is this the beginning of a misinformation flood, in which no one will ever be certain whether what they read online is true? I don’t like to doomsay, but it’s certainly worrying.