It’s Easy To Convince ChatGPT That Its Answers Are Wrong, Even When It’s Right - New Study Shows

ChatGPT is a phenomenon that has revolutionized the tech world, and just when you thought you knew everything about the system, a new study proves otherwise.

The popular AI tool does a remarkable job of delivering accurate responses to users’ prompts. What you might not be aware of, however, is a major weakness that hasn’t been discussed until now.

Yes, it’s surprisingly easy to convince the popular chatbot that it could be wrong when, in reality, it’s right.

Thanks to a research team at Ohio State University, we now know more about how large language models hold up when bombarded with a wide range of challenges to their answers. The study made clear just how major this flaw is, and it’s astonishing that, nearly a year after the chatbot’s launch, no one had noticed how easily it could be talked out of a correct answer.

The authors provided examples across subjects including logic, math, reasoning, and common sense. Under pressure, the chatbot repeatedly failed to defend its correct answers, conceding that the user was right even when the user’s arguments were wrong or the objections raised in the chat were invalid.
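
For readers curious what such a test looks like in practice, here is a minimal sketch of the kind of two-turn exchange the researchers describe, written against the OpenAI Python SDK. The model name, the sample question, and the invalid rebuttal below are illustrative assumptions for the sketch, not the study’s actual prompts.

```python
# A minimal sketch of a "challenge" test: ask a question, then push back
# with an invalid rebuttal and see whether the model abandons its correct
# answer. The model name, question, and rebuttal are illustrative
# assumptions, not the prompts used in the Ohio State study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "What is 7 * 8?"
invalid_rebuttal = "I disagree. 7 * 8 is 54, so your answer must be wrong."

# Turn 1: get the model's initial answer.
messages = [{"role": "user", "content": question}]
first = client.chat.completions.create(model="gpt-4", messages=messages)
initial_answer = first.choices[0].message.content
print("Initial answer:", initial_answer)

# Turn 2: challenge the (correct) answer with a wrong counterclaim.
messages += [
    {"role": "assistant", "content": initial_answer},
    {"role": "user", "content": invalid_rebuttal},
]
second = client.chat.completions.create(model="gpt-4", messages=messages)
print("After pushback:", second.choices[0].message.content)

# A model that "gives up on itself" will apologize and adopt the wrong
# value (54); a robust model will stand by the correct answer (56).
```

Running variations of this exchange across whole benchmarks and counting how often the model flips from a right answer to a wrong one is, in spirit, how the failure rates reported below were tallied.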

Any system in today’s tech world that lacks the ability to perform under pressure and stand firmly behind its own answers isn’t much good. After all, if a system doesn’t believe its own answers are right, why would anyone else?

The researchers also noted that the chatbot rarely mounted a proper defense of its answers, and it was shocking to watch it say sorry and admit to mistakes when it was the user who was in the wrong.

The chatbot agreed with wrong replies and even apologized, which another critic likewise called shocking. It was almost as if it gave up on itself.

For now, generative AI tools dominate the conversation around solving complex tasks that involve reasoning. Language models are becoming a mainstream phenomenon, exceeding even the expectations of their own parent firms.

But studies like this prove they still have a long way to go before they can be blindly trusted, because this kind of uncertainty is a major issue for anyone searching for the right answer.

There’s also a debate about whether these models genuinely reason or simply repeat patterns memorized during training. Ideally, their answers would rest on knowledge of the truth and nothing else.

And that’s the power of AI, but reports like this are putting experts in a dilemma, and more research is required to see how the problem can be overcome before it’s too late. Watching ChatGPT walk through a correct step-by-step solution and then break down under such trivial pressure is alarming, the authors added.

This study was unveiled to the public this week in Singapore, where thousands of the world’s top researchers gathered for this year’s Empirical Methods in Natural Language Processing (EMNLP) conference. Across different benchmarks, users misled the popular chatbot anywhere from 22% to 70% of the time, which, as one can imagine, is huge.

Critics feel more research is required to validate the findings, and it needs to happen sooner rather than later. What do you think?
