A study by UAB and URV researchers shows that humans can recognize grammatical errors in a text that AI cannot. The research compared the ability of human participants and three large language models to identify grammatical errors in sentences. Although AI models are trained on language so they can communicate with humans, they still fall short of the complexity of human language. Over recent years, computers have been trained on vast amounts of text, producing what we now call large language models, which power AI applications such as search engines, translators, and audio-to-text conversion.
The URV researchers wanted to know whether large language models can comprehend language the way humans do. To find out, they used two large language models based on ChatGPT and one based on GPT-3.5. The models were given sentences and asked to identify any grammatical errors in them. The sentences were quite simple and presented in the participants' native languages. The results showed that while humans easily identified the grammatical errors, the LLMs could not. The researcher who led the study called the results surprising, since these large language models are explicitly trained on what is grammatically correct in a language and what is not.
To train models on what is grammatically correct and what is not, they are given pairs of sentences, one grammatical and one not. Humans, by contrast, are never explicitly told which sentences are correct and which are incorrect; they learn language through day-to-day conversation. Yet a large language model still fails to recognize basic grammar mistakes that a human catches easily. The research suggests that those who claim AI has language skills comparable to humans' should reconsider, because these findings indicate otherwise.
Image: Digital Information World - AIgen
Read next: Shocking Revelation: Report Finds 45% of Advertised Cybersecurity Positions Are Deceptive