Researchers at Google are shedding light on the continued rise of disinformation, and according to the study's authors, AI is playing a major role.
The news comes as many experts have warned that the search giant is serving up inaccurate answers more frequently than ever, with many of those answers going viral on social media and sparking fresh debate.
But the new study from the search giant and leading fact-checkers points to AI as a key driver. The authors found that most image-based falsehoods now stem from AI-generated content, though critics argue the real picture is even more worrisome than the study suggests.
The report measures the growth of AI-based image disinformation by analyzing fact checks published by leading fact-checking websites over the past several years, in which manipulated content featured heavily.
The researchers analyzed a staggering 135,000 fact checks, and the prevalence of this kind of manipulation is now higher than ever before. Remember, AI image generators barely existed until recently, so manipulation on that front was once at an all-time low.
Today, it’s a whole new ballgame. AI-based disinformation has become the leading form of image manipulation, and it’s happening just as AI tools reach their peak in popularity and usage. These generators have become such a hit that image-based disinformation now sits at the forefront, with new AI images being produced as we speak.
The report also highlights an interesting finding: video-based misinformation tends to grow when AI-based image misinformation declines. But critics are more worried about the methodology behind these conclusions, as the true scale could be far worse.
Remember, the data comes from publicly available fact checks, which are not a random sample. Popular outlets like PolitiFact and Snopes have limited resources, so they lean toward newsworthy claims or material that has already gone viral.
Their fact-checking therefore serves a particular purpose and a particular audience. Historically, fact-checkers have focused mostly on English-language disinformation, which has allowed disinformation in other languages to grow into an even bigger problem.
Samples drawn this way would underestimate the rising tide of AI-produced images seen daily across popular social media platforms such as Facebook, most of which never get fact-checked at all.
The arrival of AI image generators has made it simple to produce countless pictures with viral potential. They can churn out endless variants of an image, most of which never go viral and so never show up in fact-check data.
Hence, without uniform sampling, the problem is likely bigger than it appears on the surface: the fact-check data reflects the real world, but only partially.
Meanwhile, another reason the AI problem may be bigger than reported is that AI-produced images also appear inside videos. These took off in 2022, and today they show up in fact-checkers' results in the video domain.
However, the study does not account for video disinformation that is partly or entirely built from AI-generated images. Let’s not forget that the GOP used AI-generated imagery in official campaign videos in 2023.
So all we can say is that getting an accurate, real-time read on how bad disinformation from AI image generation has become will be very difficult. And the fact that Google is promoting AI-generated content across the board does not seem to be helping in any manner.
Image: DIW-Aigen