New Study Shows Generative AI Hallucinates When It Faces Real-World Problems

According to a new study by researchers from Harvard University, the Massachusetts Institute of Technology (MIT), the University of Chicago, and Cornell University, AI models are prone to hallucinating when they are given real-world scenarios. One example is asking an AI model to navigate New York. It can do so with excellent accuracy, but once detours are added and some streets are closed, it starts hallucinating and imagining routes that do not exist. This suggests that AI models can appear highly capable at everyday tasks while lacking any real understanding of the situations they describe.

For the study, the researchers analyzed the transformer, the architecture that forms the backbone of many LLMs. When detours were added to New York's map, the transformer failed badly: closing just 1 percent of the streets caused its navigation accuracy to drop from nearly 100 percent to 67 percent.
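To make the street-closure test more concrete, here is a minimal sketch of the idea, not the researchers' actual code: it uses a toy grid "city" built with the networkx library, memorizes routes on the intact map, closes roughly 1 percent of streets, and counts how many memorized routes still work. The graph, the route counts, and the closure fraction are all illustrative assumptions, not figures from the paper.

```python
# Hypothetical illustration (not the paper's code): close a small fraction of
# streets in a toy grid "city" and measure how many previously valid routes break.
import random
import networkx as nx

random.seed(0)

# A 20x20 grid graph stands in for a street map.
city = nx.grid_2d_graph(20, 20)

# Precompute shortest routes between random origin/destination pairs,
# mimicking a system that has memorized routes on the intact map.
pairs = [(random.choice(list(city.nodes)), random.choice(list(city.nodes)))
         for _ in range(200)]
routes = [nx.shortest_path(city, a, b) for a, b in pairs]

# Close roughly 1% of streets (edges) at random.
closed = random.sample(list(city.edges), k=max(1, len(city.edges) // 100))
perturbed = city.copy()
perturbed.remove_edges_from(closed)

# A memorized route only remains usable if every step of it still exists.
def route_valid(graph, route):
    return all(graph.has_edge(u, v) for u, v in zip(route, route[1:]))

accuracy = sum(route_valid(perturbed, r) for r in routes) / len(routes)
print(f"Share of memorized routes still valid after closures: {accuracy:.0%}")
```

The point of the toy example is the same one the study makes: a system that leans on memorized routes rather than a coherent internal map degrades sharply once even a handful of streets disappear.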

According to MIT, when the researchers reconstructed the model's internal map of New York, it looked like an invented city, with streets and flyovers laid out in ways that make no sense. The researchers also noted that these results were not limited to navigation, as the same kind of hallucination appeared in games and puzzles. The paper concludes that AI models are not reliable for solving real-world problems, because they can confidently produce answers that make no sense at all.

Image: DIW-Aigen
