The rise of AI has transformed the tech sector, and our lives may never be the same again.
Thanks to tools like ChatGPT, Gemini, and Copilot, answers to our questions are generated almost instantly. And most of us believe everything we read, without appreciating how wrong those answers can be.
That concern is at the heart of a new study of OpenAI's ChatGPT, which takes a close look at whether the chatbot can be trusted and, if so, to what degree. We won't pretend the numbers didn't give us a scare.
ChatGPT and the other leading AI assistants are praised for answering coding and programming questions in seconds, far faster than any human could.
You type a prompt and get a reply almost instantly, but according to the latest research, you shouldn't place too much trust in it: the majority of replies to programming queries contain incorrect information.
The finding comes from a study unveiled this month at the Computer-Human Interaction (CHI) conference, where a team of researchers from Purdue University put a series of programming questions to the popular AI tool and analyzed its answers.
The replies that came back were wrong 52% of the time, and a staggering 77% of the answers were overly verbose. If you have been relying on ChatGPT-like AI tools for programming help, you may wish to think twice.
No AI can be trusted all the time, but this is an unusually high error rate. Any chatbot that gets its answers wrong this often should either be avoided or used only with careful double-checking, because the odds of something going wrong are simply too high.
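To make that double-checking advice concrete, here is a minimal sketch of what it can look like in practice. The function and scenario are hypothetical, invented for illustration rather than taken from the study: a snippet of the kind a chatbot might suggest, paired with a few quick assertions a developer could run before trusting it.

```python
# Hypothetical example: a helper an AI assistant might suggest for
# removing duplicates from a list while preserving the original order.
def dedupe_preserve_order(items):
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# Quick sanity checks before trusting the suggestion: exercise the
# normal case, an empty input, and an already-unique input.
assert dedupe_preserve_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_preserve_order([]) == []
assert dedupe_preserve_order(["a", "b"]) == ["a", "b"]
print("All checks passed.")
```

A minute spent on checks like these is cheap insurance when, as the study suggests, roughly half of the answers may contain mistakes.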
The Purdue researchers found that human programmers remain more reliable than ChatGPT and were still the preferred source for most participants. Even so, the chatbot's answers were chosen 35% of the time, largely because of how comprehensive, detailed, and articulately written they appear. Worse still, participants failed to notice 39% of the time that the replies they were reading were in fact wrong and riddled with errors.
It feels like a serious wake-up call, and this is just one study with worrisome findings. It shows that generative AI bots continue to make serious mistakes, and that humans are not always capable of catching them.
Google's AI Overviews, rolled out across the US for Search at the start of May, have been producing some strange and error-filled summaries of their own.
But the search giant refuses to treat these as serious failures, dismissing them as rare occurrences and isolated examples. It says the replies that are bizarre, or that some have labeled dangerous, are mostly tied to uncommon queries and do not represent most people's experience.
Google adds that, most of the time, users are getting high-quality replies and links that let them dig deeper into their searches than before. Still, the company says it is investigating the matter and is grateful for the feedback.
Image: DIW-Aigen
Read next: Study Reveals Apple and Starlink Users' Locations Easily Trackable, Sparking Privacy Concerns