The hype surrounding ChatGPT needs no introduction.
Chit-chat and debates about this topic, and about how this powerful tool has changed the lives of so many people in the market, are taking place all the time.
Today, all kinds of AI chatbots are popping up, and it's hard to know whether they possess the capabilities that ChatGPT has promised over time. But with the good comes the bad, and the debate about the latter is at an all-time high.
Will AI chatbots turn the world of academics into an absolute mockery? Does the industry really still need experts, or can chatbots replace them in a single go? And do they foreshadow the likes of I, Robot and Skynet?
As you can see, there's a lot to think about right now, and we don't blame anyone for asking these questions. The world of ChatGPT is diverse, and there's just so much happening around us.
Today, we're discussing a very interesting study by researchers at Purdue University in the US, who are finally putting a simple point into the spotlight: chatbots do not know everything and should not be trusted blindly. And yes, it's a subject that was a long time coming.
The study begins by highlighting that AI-powered chatbots aren't designed to always produce the most accurate response. A lot of misinformation is being produced too, and you'll need to pay attention to separate fact from fiction. It's happening right under our noses, yet very few people notice.
In the study, the authors posed 517 questions on topics taken from sites such as Stack Overflow to an AI chatbot, and noted that its replies were presented as though they were incontrovertible.
Close to 52% of the tool's replies were incorrect. In other words, when the chatbot was asked for the same kind of help that users seek on Stack Overflow, just 48% of the facts it delivered turned out to be correct. Yikes!
So, what’s the final verdict?
Well, you're better off without the tool. Or, shall we say, you might as well dump this AI endeavor into the Caspian Sea, because this many inaccuracies is mind-blowing, to say the least.
Such results are a stark reminder that the limits of AI technology are very real and that this is not all fun and games. It's serious business, and you enter dangerous territory by leaning on the tool for daily tasks when you know how high the chances of error are.
Another thing worth mentioning is that a growing number of people fail to notice, or care about, this. Worse, some are now calling the tool infallible and superior to all others. This needs to stop, and highlighting it is the purpose of studies like this one more than anything else.
The way replies are generated and presented to the user lures them into thinking there's no better answer out there, which shows just how deceptive AI can be.
Read next: College Professors Look To ‘ChatGPT-Proof’ Assignments To Minimize Cheating