AI Chatbots Are Struggling To Keep Up With The Israel-Palestinian War And Producing Majorly Inaccurate Updates

AI chatbots might seem like the next big thing, but this latest discovery proves otherwise.

The ongoing conflict between Israel and Palestine has the world talking, and by the looks of it, many of us are doing everything we can to stay up to date on what’s happening.

However, if you really want the latest details on this front, we’ve got news for you: you shouldn’t be relying on AI chatbots to get your facts straight because, simply put, they know very little about what is actually going on.

Be it OpenAI, Microsoft, or even Google - chatbots from all three were found to deliver false information far from the real picture, mixing fabricated statements with made-up facts whenever users asked for the latest update on the current Middle East situation.

The news was first reported by Bloomberg, which noted how Bing and Bard were making false claims that an end to the fighting was near and that both sides might be heading toward a ceasefire.

This is a major sign of how the biggest and most hyped tools are still riddled with flaws and cannot be relied on for breaking news. And to give you a better understanding of what exactly we mean, here’s the latest update on this front.

Bard from Google blatantly claimed that a truce had been reached and that the entire Gaza Strip was benefiting from it.

It was so confident that it even gave out an exact date for when this supposedly happened, framing it as a breakthrough after violence on both sides that left innocent people dead.

When a user asked a follow-up prompt on the same situation, the AI chatbot from Google claimed the ceasefire had been agreed on in August of this year. A few days later, it changed its answer, stating that no ceasefire had been reached after Hamas crossed into Israel.

Meanwhile, the situation was no better on Microsoft’s end, where its Bing chatbot (powered by OpenAI’s technology) suggested a ceasefire had been reached that very day, October 13. Prompting the chatbot further made it concede that no ceasefire had occurred, while more persuasion had it go back to the same claim.

Experts did note that Bing was still more accurate than the other AI chatbots on other queries. For instance, it gave the correct date for the start of the conflict, unlike the others.

Yet seeing it speak about an imaginary ceasefire supposedly brokered by Egypt was certainly a shocker.

Furthermore, ChatGPT Plus was also examined, and the way it responded to users’ prompts was an eye-opener. Its replies were more measured regarding the violence, and it produced fewer inaccuracies too.

But it was interesting to see that, while it avoided putting out inaccuracies, it often did not give direct replies to queries and left users hanging for more information, preferring instead to describe the situation as alarming and hostile. Thankfully, it never rolled out a date for a fake ceasefire, which was a relief.

Google released a statement on this front regarding its Bard chatbot, asking users to understand that mistakes can happen and saying the company is doing everything it can to avoid them.

But if you really want to know more, it’s best to tune into media channels and outlets for more reliable and accurate information.

