AI Could Promote Rise In Distrust Of Digital Information, Google Study Warns

A recent report from researchers at Google sheds light on the dangers of generative AI and how its misuse could give rise to distrust of all digital information.

The researchers warn that generative AI is fueling a global spread of content that is not only low in quality but increasingly spammy and malicious in nature. Over time, they caution, this flood of material could lead more people to distrust digital data sources and content published on the web.

The paper also examines the tactics used to misuse generative AI, drawing on an analysis of nearly 200 documented incidents of misuse reported between the start of 2023 and the middle of this year.

Furthermore, the authors highlight the likely motivations behind these incidents and the ways attackers are abusing AI systems that generate images, audio, and text.

If such matters are not addressed, the authors warn, online data could become contaminated with AI-generated material, hindering information retrieval and distorting public understanding of real events and of questions where a scientific consensus should be able to form.

These findings appear in the research paper, which details the present-day dangers of generative AI misuse.

The authors, who work in Google's DeepMind division, note a marked increase in high-profile figures dismissing inconvenient evidence as AI-generated content. This tactic shifts the burden of proof onto accusers and could prove costly for societies in the long run.

Similarly, the paper discusses monetization of goods and services as a motive for AI misuse, with bad actors turning to harmful tactics such as mass content scaling and the spread of false information.

The authors also describe how prevalent content farming has become over time. The term refers to a strategy in which private individuals and small firms churn out low-quality AI-generated articles, along with product listings and ads, to place on high-traffic platforms such as Amazon and Etsy for profit.

Because such content is cheap to produce and can generate high returns on ad revenue, it is easy to see why this approach is used more frequently than other techniques.

Meanwhile, non-consensual intimate imagery is also shown to be on the rise as another monetization-driven form of misuse. Such cases include tools used to produce and sell explicit videos of celebrities who never consented to their creation in the first place.

While monetization is a common driver of AI misuse, the authors report that the single most prevalent goal is to alter people's opinions about particular situations, which they find especially alarming.

In these cases, actors used a range of tactics to shape the public's perception of reality, including impersonating famous figures, circulating false media reports, and deploying fictional digital personas to feign support for certain causes.

Most of these cases involved synthetic images built around divisive topics such as civil unrest, economic failure, and war.

Taken together, the recently published research paper from Google is further evidence that AI misuse is on the rise, and that much remains to be done to mitigate the risks this powerful, rapidly evolving technology brings.

