While ChatGPT's rapid rise has made it one of the most widely discussed tech innovations in recent memory, it has also seen its fair share of controversy, much of it stemming from its tendency to hand out false information. A study conducted by Long Island University revealed a shockingly high rate of false answers to questions about medications.
The large language model chatbot was given 39 questions about various medications and their appropriate use. It provided wrong answers or ignored the question entirely for 29 of them, or roughly 74%.
When ChatGPT was asked to provide citations or references for the information it gave, it could do so for only 8 of the 39 questions. This trend is concerning because it could leave unsuspecting consumers with wrong, and largely unsourced, information about the medications they rely on.
For example, when ChatGPT was asked about potentially harmful interactions between Paxlovid, an antiviral used to treat Covid-19, and the blood pressure medication verapamil, the chatbot claimed that no such interactions existed. In reality, taking the two medications together can exacerbate the blood pressure lowering effects of verapamil.
It bears mentioning that OpenAI does not recommend using ChatGPT for medical purposes, and the chatbot itself is quick to state that it is not a doctor before answering such questions.
Regardless, many consumers may not realize how inaccurate the answers they receive can be, and acting on ChatGPT's faulty instructions has the potential to cause widespread harm. Educating consumers about these pitfalls is essential.
Read next: Wikipedia Unravels Its Most Searched Articles For 2023 As Company Celebrates 84 Billion View Milestone