Tech giant OpenAI has developed a new tool that could help combat the use of ChatGPT to cheat on assignments.
For months, many have debated how useful ChatGPT really is given its drawbacks, including students using the popular chatbot to cheat on assignments. But according to a new report from the Wall Street Journal, the company has built a tool that can detect AI-generated writing.
The only issue is that OpenAI is still debating whether to release it.
In a statement provided to media outlets, an OpenAI representative said the firm is researching the text watermarking method described in the Journal's report, and that it is taking a deliberate approach because of the complexities involved and the likely impact of any decision on the broader ecosystem beyond OpenAI.
The method the company developed appears promising on a technical level, but it carries risks that OpenAI says it is weighing while it studies alternatives. These include susceptibility to circumvention by bad actors and the potential to disproportionately affect groups such as non-English speakers.
This would be a different approach from past efforts at detecting AI-generated text, which have proven largely ineffective. Even OpenAI shut down its own earlier AI text classifier because of its low accuracy rate.
With text watermarking, the company would focus solely on detecting writing produced by ChatGPT; it would not apply to other models. It would work by making small changes to how the tool selects words, producing an invisible watermark in the writing that a separate tool could detect later.
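OpenAI hasn't said exactly how its watermark works. But a well-known approach from academic research, sometimes called a "green list" scheme, gives a feel for the idea: nudge the model's word choices toward a pseudorandom subset of the vocabulary at each step, then later check whether a suspicious text hits that subset more often than chance. The sketch below is purely illustrative, using a toy vocabulary in place of a real model, and does not reflect OpenAI's actual implementation:

```python
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary stand-in
GREEN_FRACTION = 0.5

def green_list(prev_token: str) -> set[str]:
    # Seed a PRNG with the previous token so the same green list
    # can be rebuilt at detection time without storing any state.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate(length: int, bias: float = 4.0) -> list[str]:
    # Stand-in for an LLM: sample tokens while nudging probability
    # mass toward the green list at each step (the invisible watermark).
    out = ["<start>"]
    for _ in range(length):
        greens = green_list(out[-1])
        weights = [bias if t in greens else 1.0 for t in VOCAB]
        out.append(random.choices(VOCAB, weights=weights)[0])
    return out[1:]

def z_score(tokens: list[str]) -> float:
    # Detection: count tokens that fall in their predecessor's green
    # list; unwatermarked text should land near GREEN_FRACTION.
    hits = sum(
        1 for prev, tok in zip(["<start>"] + tokens, tokens)
        if tok in green_list(prev)
    )
    n = len(tokens)
    expected = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / math.sqrt(var)

watermarked = generate(200)
plain = random.choices(VOCAB, k=200)
print(f"watermarked z = {z_score(watermarked):.1f}")  # large positive
print(f"plain text  z = {z_score(plain):.1f}")        # near zero
```

Because the watermark lives in the statistics of word choices rather than in any visible marker, human readers can't see it, which is what makes the detector tool necessary.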
After the Journal's report was published, the tech giant updated one of its own blog posts from May about its research on detecting AI-generated content. The update says text watermarking is highly accurate and effective even against localized tampering, such as paraphrasing.
But it is less effective against global tampering techniques, such as translating the text or rewording it with a separate generative model. Another example is asking the model to insert a special character between every word and then deleting those characters afterward.
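Continuing the toy scheme sketched above, that last attack is easy to picture: if a separator is interleaved during generation, each word ends up watermarked relative to the separator rather than to its real neighbor, so stripping the separators afterward leaves clean text that carries no detectable signal. Again, this illustrates the general idea, not OpenAI's system:

```python
def generate_with_separator(length: int, sep: str = "@", bias: float = 4.0) -> list[str]:
    # The user asks for a separator between every word, so each real
    # token is watermarked relative to the separator, not its neighbor.
    out = ["<start>"]
    for _ in range(length):
        greens = green_list(out[-1])
        weights = [bias if t in greens else 1.0 for t in VOCAB]
        out.append(random.choices(VOCAB, weights=weights)[0])
        out.append(sep)
    # Deleting the separators leaves clean text whose token-to-token
    # statistics no longer carry the watermark.
    return [t for t in out[1:] if t != sep]

scrubbed = generate_with_separator(200)
print(f"scrubbed z = {z_score(scrubbed):.1f}")  # back near zero
```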
OpenAI says such methods make the watermark trivial for bad actors to circumvent. The latest update also echoes the representative's point about non-English speakers, adding that the technique could stigmatize the use of AI as a useful writing tool for non-native English speakers.
Image: DIW
Read next: What’s the Best Tool for Detecting AI Generated Content?