With ChatGPT throwing the door wide open for all manner of generative AI, many have begun to ask a pertinent question: could this make misinformation worse than it would otherwise have been? A number of experts have suggested that chatbots are already being used to generate and spread misinformation, and that this is happening at a rapidly increasing pace.
It is important to note that ChatGPT has a real problem with truthfulness. The AI is a Large Language Model, so it has no inherent ability to distinguish facts from misinformation. There are numerous examples of the chatbot being tricked into affirming claims that have no basis in reality whatsoever.
Even companies like Google are not immune to such issues. Bard, the ChatGPT alternative unveiled by Google, was meant to take the search engine to another level, yet it presented a completely fabricated fact during its announcement event.
This led to Google’s stock price taking a real hit, but that is the least of the world’s problems. Chatbots are not designed to tell the truth; rather, they are designed to provide the answers their users will find most satisfying.
Hence, much of the misinformation that chatbots such as ChatGPT spread may not even be intentional. Rather, it is an accidental byproduct of the way Large Language Models function.
Of course, malicious actors will also want to exploit these tools, since they make it easier to spread misinformation deliberately. Russian and Chinese actors have reportedly already been using them for such nefarious purposes, and as chatbots and other forms of AI grow more sophisticated, there is no telling how much harm they might do.
Read next: AI Generated Phishing Campaigns Are Already Present on LinkedIn