OpenAI struggles to detect texts written by its own AI

The Californian start-up launched a tool on Tuesday to determine whether a text was written by a human or by ChatGPT. With fairly poor results.
A boon for students, a nightmare for teachers. Designed to answer questions from humans, ChatGPT is also used to cheat on exams. OpenAI, the start-up behind the chatbot, is well aware of the abuses associated with its conversational robot. The Californian company therefore unveiled on Tuesday a piece of software meant to determine whether a text was produced using artificial intelligence (AI). But its effectiveness leaves something to be desired.
Many limitations
It is “impossible to reliably detect all texts written with AI,” OpenAI warns in a statement. According to the start-up’s first tests, its detector correctly identifies only 26% of texts written by a chatbot. In 9% of cases, it wrongly attributes a text written by a human to an artificial intelligence. Texts edited by humans also escape its detection. The detector’s accuracy improves as the length of the input text increases. “It is therefore very unreliable on short texts of less than 1,000 characters,” the statement said.
The software only works in English, but the company says its tool will improve over time. In the meantime, the detector “should be used as a complement to other methods in order to determine with certainty the origin of the text,” an OpenAI spokesperson told TechCrunch. Other tools are freely available on the market, such as GPTZero, a program unveiled by a student, or the one from the Franco-Canadian start-up Draft&Goal.
Launched at the end of 2015 and co-led in its early days by the entrepreneurs Sam Altman and Elon Musk, OpenAI presents itself as an artificial-intelligence research and deployment company. After receiving $1 billion from Microsoft in 2019, the start-up has just signed a new $10 billion agreement with the computer giant.