New Study Shows Parents Prefer AI for Child Healthcare Advice, Raising Concerns

A new study by the University of Kansas Life Span Institute finds that parents trust AI more than human experts when it comes to the health of their kids. The study also found that many parents consider AI-written text credible and trustworthy, so much so that they are now seeking healthcare information from AI instead of from human healthcare workers.

ChatGPT and other AI models are known to produce errors and false information, which makes it concerning that so many parents are relying on AI for their children's health. The study's lead author, Leslie-Miller, says the research was done to learn about ChatGPT's impact and potential concerns for the industry. Before AI, parents searched online for healthcare information about their children; now many simply ask ChatGPT.

For the study, 116 parents were given texts about healthcare concerns in children. Half of the texts were generated by AI, mostly ChatGPT, and the other half were written by experts. The results showed that most parents couldn't distinguish AI-written content from human-written content, and even though they weren't told there were two types of texts, most rated the AI-written texts as the more reliable.

If parents are going to trust AI this readily, it is important that the healthcare information presented to them still reflects human domain expertise. AI is also risky because it has a tendency to hallucinate, producing responses that sound very convincing but are in fact made up. LLMs are trained largely on online text, which means they lack real-world information and experience. The lead author suggests that parents look for an AI system that has been integrated into a platform with genuine expertise, stay cautious, and always double-check AI responses.

Image: DIW-Aigen

Read next: Study Shows Many Advanced AI Chatbots Would Rather Give Wrong Answers than Admit They Do Not Know the Answer
