A team of researchers has shown that asking ChatGPT to repeat a single word forever can cause it to reveal fragments of its training data.
And now, parent firm OpenAI has made clear that issuing such a command is also a violation of the company's terms of service.
The news comes via reporters at 404 Media, who found that asking the chatbot to repeat a word like "computer" forever still surfaces parts of the training data, but now also triggers a warning that such behavior may violate the content policy.
For now, OpenAI's statement about a clear violation may sound vague, but the usage policy itself is specific: it prohibits decompilation, reverse assembly, translation, and similar techniques when the intent is to discover the service's source code, and likewise bars attempts to work out which models, systems, and algorithms underlie the chatbot.
The company similarly bars all users from employing automated means to extract data from its Services.
The glitch first drew public attention when a senior researcher from Google Brain and her team set out to demonstrate that the chatbot could be made to regurgitate data it had scraped from the web for training.
In one test, prompting the chatbot to repeat the word "company" forever caused it to repeat that word roughly 313 times before emitting text from a firm based in New Jersey, including contact details such as email addresses and phone numbers.
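The repetition-then-divergence pattern described above can be illustrated with a small sketch (a hypothetical illustration, not the researchers' actual tooling): count how many times the target word is repeated at the start of the model's output before unrelated text begins, and capture whatever follows.

```python
import re

def measure_divergence(word: str, output: str) -> tuple[int, str]:
    """Count leading repetitions of `word` in a model transcript and
    return (repeat_count, divergent_tail). Illustrative only."""
    # Match a run of the target word at the start of the output,
    # each occurrence followed by optional whitespace.
    pattern = re.compile(rf"^(?:{re.escape(word)}\s*)+", re.IGNORECASE)
    match = pattern.match(output)
    if not match:
        return 0, output
    run = match.group(0)
    count = len(re.findall(re.escape(word), run, re.IGNORECASE))
    # Everything after the repeated run is the "divergent" text.
    return count, output[match.end():]

# Hypothetical transcript: the loop breaks down after three repeats.
count, tail = measure_divergence(
    "company", "company company company Acme Corp, 555-0100"
)
```

In the reported case, a check like this would register a count near 313 before the leaked contact details appeared in the tail.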
The team observed that the attack does not succeed every time. The researchers disclosed their findings to ChatGPT's parent firm in August of this year and waited three months before publishing the data, so OpenAI had plenty of time to find a fix for the flaw.
OpenAI has yet to respond to requests for comment on the matter.
The researchers also point out that the security community is pushing for an exemption under the DMCA that would allow them to bypass company policies and copyright restrictions in order to expose vulnerabilities, though, as one might expect, big tech giants don't like the sound of that.
Read next: Wall Street in Peril as AI Threat Reaches London Finance Sector