Researchers Say LLMs Have No Real Ability to Reason After Finding That AI Models Fail Simple Math Problems When Minor Changes Are Made

A group of AI research scientists at Apple published a paper investigating whether large language models can genuinely reason and think. They gave the AI models simple arithmetic word problems and asked them to solve them. The problems were easy, and the LLMs solved them. But once the researchers added some extra, irrelevant information, the LLMs got confused and gave wrong answers.
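One example widely quoted from the paper shows the pattern (the figures here come from that example, not new data). A model is told that Oliver picks 44 kiwis on Friday, 58 on Saturday, and double Friday's number on Sunday, but that five of Sunday's kiwis were a bit smaller than average. The size detail changes nothing, and the correct total is 44 + 58 + 88 = 190. Yet models would often subtract the five smaller kiwis and answer 185, treating the irrelevant clause as if it mattered.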

Why do most LLMs answer incorrectly once a little extra information is added? Probably because they are trained on clean, straightforward data and have learned to give to-the-point answers. When a bit of irrelevant information is thrown in and actual reasoning is needed to filter it out, they cannot answer correctly.

The researchers say that LLMs are not capable of genuine reasoning, which is why they get confused when extra clauses are added to a problem. Instead, they try to replicate the reasoning steps they saw in their training data. The research suggests that LLMs can only repeat what they were taught: they do not work anything out for themselves, they draw on their data to answer specific questions.

An OpenAI researcher says that correct answers can be achieved from LLMs with a little prompt engineering. But even though better prompting may handle simple deviations, it may not hold up as the distracting context grows more complex. This again suggests that LLMs cannot reason on their own, so it is best not to rely on them for academic work.
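As an illustration (this wording is hypothetical, not taken from the research), such prompt engineering might mean prepending an instruction like: "Some details in this question may be irrelevant; ignore anything that does not affect the calculation." That can rescue simple cases, but it presumes someone already knows which details are noise, which is exactly the judgment the model was supposed to supply.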


Image: DIW-Aigen

