OpenAI, the maker of ChatGPT, said on Thursday that it is working on a major upgrade to its viral chatbot.
The upgrade will let users customize the chatbot's behavior. The announcement comes as the company moves to address growing concerns about bias in AI.
The San Francisco-based company, which is funded by Microsoft Corp, says it is developing technology to mitigate bias in ChatGPT's answers and to accommodate more diverse viewpoints.
In practice, this means allowing system outputs that some people may strongly disagree with, with customization framed as the way forward. Even so, the company says there will always be some bounds on system behavior.
The viral chatbot launched in November 2022 and has since grown immensely popular, fueling broad interest in generative AI and prompting major tech companies to build rival chatbots of their own.
For those unfamiliar, generative AI is the term for technology that produces answers mimicking human speech, and it continues to dazzle users around the globe.
The reports about the startup come as Microsoft's new Bing search, also powered by OpenAI's technology, surges in popularity.
But experts have voiced serious reservations, warning that the technology may be reaching the market before it is ready, which they find deeply concerning.
How the tech industry should set guardrails for this kind of behavior remains a key question that the generative AI field is still wrestling with.
Microsoft said recently that user feedback will be crucial to making the new Bing search a success. For now, access is restricted to a small group of users, but the company hopes a wider rollout will soon give those on the waiting list a chance to try the chatbot.
In recent days, some users have found ways to provoke the chatbot into giving responses it was not designed to give.
Meanwhile, OpenAI explained in a recent blog post that ChatGPT's answers are first trained on large text datasets drawn from the web. In a second stage, humans review a smaller dataset and are given guidelines for how the model should behave in different scenarios.
For instance, if a user requests graphic or adult content, the chatbot is trained to respond along the lines of 'I cannot do something like that'.
When controversial subjects come up, reviewers should allow ChatGPT to answer the question while offering to describe the viewpoints of various people and movements, rather than taking a position on complex issues.
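The reviewer guidelines described above can be pictured as a simple policy that shapes a draft answer based on the kind of request. The category labels, function name, and canned responses below are illustrative assumptions for the sake of the sketch; OpenAI's actual guidelines are far more detailed and are applied through model training, not a lookup function.

```python
# Illustrative sketch only: hypothetical category names and responses,
# not OpenAI's actual implementation or guideline text.

REFUSAL = "I cannot do something like that."

def apply_guidelines(category, draft_answer, viewpoints=None):
    """Shape a draft answer according to hypothetical reviewer guidelines.

    category: a label assigned to the user's request (assumed taxonomy).
    draft_answer: the model's unconstrained draft response.
    viewpoints: for controversial topics, short summaries of differing views.
    """
    if category == "adult_content":
        # Guideline: refuse graphic or adult requests outright.
        return REFUSAL
    if category == "controversial":
        # Guideline: answer, but present multiple viewpoints
        # rather than taking a position.
        views = viewpoints or []
        return draft_answer + "\n\nPerspectives differ: " + "; ".join(views)
    # Default: answer normally.
    return draft_answer

print(apply_guidelines("adult_content", "..."))
# → I cannot do something like that.
```

The point of the sketch is the asymmetry in the guidelines: some requests get a flat refusal, while controversial ones get an answer plus a survey of views instead of a single stance.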
H/T: Reuters