AI chatbots represent an exciting new chapter in the history of tech, but they are not without their dangers. Researchers across a wide range of institutions have recently highlighted an alarming problem that AI such as ChatGPT could cause: a sharp increase in the amount of misinformation seen online.
AI chatbots could allow malicious actors to create misinformation far more cheaply, and at a higher quality, than was previously possible. They can also tailor disinformation campaigns to specific audiences, making them considerably harder for researchers to spot and take down.
This won't be the first time that chatbots are used for such nefarious purposes, but they have never been this powerful before. Older bots generated content littered with syntax errors, typos and other issues that made their intent painfully obvious.
AI generated personas have already appeared in disinformation campaigns around the world. Tech like ChatGPT can produce convincing fake news within seconds, and that content won't carry the telltale errors that once made such material easy to spot.
All of this suggests that ChatGPT and other AI chatbots like it will need to be fine-tuned to make them safe. Building in filters is important because it could prevent malicious actors from using these tools to conduct their campaigns online.
OpenAI has launched a tool that can help people discern whether or not content is AI generated, and it could prove crucial in the ongoing fight against misinformation.