Study Shows Many Advanced AI Chatbots Would Rather Give a Wrong Answer Than Admit They Do Not Know

A study published in Nature reveals that many newer AI chatbots would rather give a wrong answer than admit that they do not know it. As time passes, AI chatbots are getting more advanced and more accurate overall, yet they are also becoming more likely to answer incorrectly rather than decline a question. And because people tend to trust the more advanced chatbots, they often assume that whatever answer a chatbot gives must be correct.

An author of the study, José Hernández-Orallo, a professor at the Universitat Politècnica de València in Spain, says that AI chatbots attempt to answer almost anything these days. This means that while they can give more diverse answers, they can also give more incorrect information. To investigate, the team of researchers studied Meta's Llama, the open-source BLOOM, and OpenAI's GPT series.

The researchers began with earlier, smaller models and gradually worked up to larger, more refined ones, though the newest and most advanced models were not included in the study. The models were asked thousands of questions about geography, arithmetic, science, and anagrams. The team found that as the models became more advanced, their tendency to give wrong answers rather than avoid a question grew with them.

The researchers also asked volunteers to read answers from these AI models and judge which ones were incorrect. Approximately 40% of the answers the volunteers perceived as right were actually wrong. The researchers said that people need to understand that AI chatbots cannot answer all of their questions, and that developers should program chatbots to respond with "I don't know" when they do not actually have an answer. Readers, for their part, should avoid believing everything that comes from AI and should fact-check it for accuracy.
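To make that recommendation concrete, here is a minimal sketch of what threshold-based abstention could look like. Everything in it is invented for illustration: the toy lookup table stands in for a real chatbot, and the confidence scores and threshold are hypothetical, not taken from the study or any actual system.

```python
# A minimal sketch of the abstention behavior the researchers recommend.
# The "model" is a toy lookup table; scores and threshold are invented.

TOY_MODEL = {
    "What is the capital of France?": ("Paris", 0.98),
    "What is 7 * 8?": ("56", 0.95),
    "Who won the 1903 chess olympiad?": ("Unknown team", 0.30),  # low confidence
}

def answer_or_abstain(question: str, threshold: float = 0.75) -> str:
    """Return the model's answer only when its confidence clears the threshold."""
    answer, confidence = TOY_MODEL.get(question, ("", 0.0))
    if confidence < threshold:
        return "I don't know."
    return answer

if __name__ == "__main__":
    for q in TOY_MODEL:
        print(q, "->", answer_or_abstain(q))
```

Run as written, the sketch answers the first two questions and declines the third, which is the opposite of the bluffing behavior the study observed in larger models.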

Image: DIW-Aigen
