In OpenAI’s evaluations only 26% of AI-written text was correctly identified. It also flagged 9% of human-written text as being composed by AI. Picture: BLOOMBERG

OpenAI, which released the viral ChatGPT chatbot last year, has unveiled a tool aimed at helping to detect whether text has been authored by an artificial intelligence (AI) program and passed off as human.

The tool will flag content written by OpenAI’s products and other AI authoring software. However, the company said “it still has a number of limitations — so it should be used as a complement to other methods of determining the source of text instead of being the primary decision-making tool”.

In the Microsoft-backed company’s evaluations, only 26% of AI-written text was correctly identified. It also flagged 9% of human-written text as being composed by AI.

The tool, called a classifier, will be available as a web app, with some resources for teachers, the company said in a statement on Tuesday. The popularity of ChatGPT has given rise to authorship concerns as students and workers use the bot to create reports and other content and pass them off as their own. It has also spurred worries about the ease of autogenerated misinformation campaigns.

“While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that AI-generated text was written by a human: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human,” OpenAI said in a blog post. 

Since the release of ChatGPT in November, teachers in particular have been struggling to cope. Students quickly realised that the tool could generate term papers and summarise material, albeit while occasionally inserting glaring errors. 

Earlier this month, a Princeton University student named Edward Tian released an app called GPTZero that he said he programmed to detect AI writing. Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School, developed an AI policy for his classes that allows students to use ChatGPT as long as they describe what they used the program for and how they used it.

New York City’s state schools have banned the use of ChatGPT, and so has the International Conference on Machine Learning, except in certain cases. The conference’s ethics statement noted that “papers that include text generated from a large-scale language model such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis”.

Bloomberg News. More stories like this are available on bloomberg.com

