Research from the University of California, Berkeley indicates that AI models cannot surpass the human mind when it comes to innovation. Models like ChatGPT are trained on vast quantities of text and images supplied by humans, which means they can only recombine what already exists rather than come up with anything genuinely new. Eunice Yiu, a co-author of the study, says that even young children can produce better responses than most AI models. In her view, we cannot categorize LLMs as more intelligent than us; it is more accurate to treat them like a library or a search engine, since they only summarize the knowledge, facts, and figures that have already been fed to them.
To test this, the researchers presented a set of everyday items, each with a short description, to 42 children (ages 3 to 7) and 30 adults. First, participants were asked to match objects that work well together, pairing a ruler with a compass rather than a teapot, for example. Children matched the items correctly 88% of the time, and adults 84% of the time. Next, everyone was asked to use these ordinary objects in unconventional ways, such as: how would you draw a circle without a compass? Here, 85% of children and 95% of adults passed the test.
When the same tasks were given to five LLMs, the models handled the matching question reasonably well, with success rates ranging from 59% for the worst performer to 85% for the best. On the second question, using the items in novel ways, i.e., innovating, most of the LLMs fell short: the best model scored 75%, while the worst managed only 8%.
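For readers curious how such a test might be scripted, here is a minimal sketch of an alternative-uses evaluation harness in Python. Everything in it is illustrative: `query_llm` is a hypothetical placeholder for a real model call, and the keyword-based scoring is a stand-in, not the researchers' actual rubric.

```python
# Hypothetical sketch of an "alternative uses" evaluation harness.
# query_llm is a placeholder; swap in a real model API call to use it.

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned answer here."""
    return "Trace around a round object, such as the lid of a teapot."

# Each trial pairs a question with cue words an innovative answer might
# contain (illustrative only, not the study's scoring method).
trials = [
    ("How could you draw a circle without a compass?",
     ["trace", "round", "lid", "string"]),
    ("How could you measure a length without a ruler?",
     ["hand", "foot", "string", "paper"]),
]

def passes(answer: str, keywords: list[str]) -> bool:
    """Count the answer as a pass if it mentions any expected cue."""
    lowered = answer.lower()
    return any(k in lowered for k in keywords)

successes = sum(passes(query_llm(q), kws) for q, kws in trials)
print(f"Success rate: {successes / len(trials):.0%}")
```

A real evaluation would of course need human judges or a far richer rubric than keyword matching, but the loop structure, prompt each model with the same question and score its answer, mirrors the kind of comparison the study describes.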
The researchers also presented the children with unfamiliar machines and asked them to figure out how to make them work. The children worked it out simply by experimenting with the machines. The AI models, by contrast, could not, even when they were given all the clues the children had discovered. This suggests that AI can only work with information it has already been given; it does not come up with anything on its own.