Study Finds ChatGPT Search Misattribution of News Content Threatens Publisher Visibility and Credibility

ChatGPT Search could be a major spreader of misinformation, a new study from Columbia University has found.

The research found that ChatGPT Search misattributes news content 76.5% of the time, a leading source of concern for publishers’ visibility. The popular AI tool, the study adds, struggles to cite news publishers correctly.

The report also notes that the tool frequently misquotes sources and assigns the wrong attributions, a top concern for publishers around the world, many of whom have raised questions about brand visibility and control over how their material is presented.

Furthermore, the findings challenge the company’s stated commitment to responsible AI development in the journalism sector.

For background, ChatGPT Search launched in October, developed in collaboration with the media sector and incorporating publisher feedback. That is a sharp contrast with ChatGPT’s original 2022 launch, when publishers discovered their content had been used to train AI models without their consent.

Today, the tech giant lets publishers use their robots.txt file to signal whether or not they want to be included in Search results. Yet the study found that opting in offers little protection: publishers risk being misattributed and misrepresented regardless of their participation choice.
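The opt-in signal works through the standard Robots Exclusion Protocol. OpenAI documents separate crawler user agents for search and for model training, so a publisher can, at least in principle, allow one while blocking the other. A minimal robots.txt sketch (the user-agent names are OpenAI’s documented crawler names; the policy shown is just one possible choice, not a recommendation):

```
# Allow OpenAI's search crawler, so the site can appear in ChatGPT Search results
User-agent: OAI-SearchBot
Allow: /

# Block OpenAI's training crawler, so content is not used to train future models
User-agent: GPTBot
Disallow: /
```

The study’s point is that even sites permitting the search crawler were still misattributed in results.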

The research, carried out by the Tow Center, covered 20 publications. Researchers asked ChatGPT Search to identify the sources of 200 quotes, and a large share of its answers were incorrect. The tool also rarely flagged its own errors, using qualifying phrases such as “possibly” in only a handful of replies.

The tool prioritizes pleasing users over accuracy, which is misleading and can put a publisher’s reputation at stake. Researchers also found that ChatGPT is inconsistent when the same query is asked repeatedly, a consequence of the randomness built into language models.

On many occasions, researchers found that ChatGPT Search cited copied versions of articles rather than the original sources, possibly because of publisher crawling restrictions or limitations in the system.

Whatever the case may be, even big names like The New York Times or MIT could be wrongly cited. In one example, likely tied to the ongoing legal case between the Times and OpenAI, ChatGPT linked to unauthorized copies of the paper’s articles on other pages.

