OpenAI's GPT Store, the marketplace where people can create and share custom chatbots built on ChatGPT and DALL-E, is facing problems. The store is filling up with spam, bots impersonating celebrities, and bots designed to evade AI detection. The situation points to gaps in how OpenAI vets these chatbots.
Reports indicate that despite OpenAI's review of new chatbots, many break its rules. For example, the store hosts chatbots built around Pixar and Star Wars themes, which raise potential copyright problems, along with bots based on Harry Potter, Overwatch characters, and Pokémon.
A bigger concern is chatbots known as "humanizers." These bots are designed to make AI-written text sound more human and claim they can evade AI detectors. That raises ethical issues, such as students using AI to cheat or people passing off AI work as their own. Other chatbots are marketed as "jailbroken" versions of ChatGPT that bypass OpenAI's restrictions.
Additionally, the store contains many chatbots that mimic famous people such as Joe Biden, Donald Trump, Taylor Swift, Elon Musk, and Beyoncé, which violates OpenAI's rule against impersonating individuals without their permission. There are also chatbots offering adult content, another breach of OpenAI's guidelines.
OpenAI introduced its GPT Store in January, allowing chatbot creators to list their bots after verifying their profiles. Users can report chatbots that break the rules. OpenAI says it uses a mix of automatic checks, human review, and user reports to enforce its policies. Chatbots that break rules can be restricted or removed.
OpenAI has defended some chatbots designed to make AI text sound more human, stating that not all uses of such technology are dishonest. The company recognizes the complexity of judging these tools' real-world applications.
With over 3 million custom chatbots in its store, OpenAI faces a real challenge in separating legitimate bots from problematic ones. The company is also expected to release its next-generation model, GPT-5, soon.
Image: DIW-AIgen