Tech giant OpenAI is shedding light on how it was able to counteract several covert operations designed to misuse its ChatGPT models for influence campaigns.
The company said several such attempts arose between 2023 and 2024, and that most of them were traced to Russian, Iranian, Chinese, or Israeli origins.
Their objective, it confirmed, was to manipulate global audiences and drive engagement by embedding deceptive content generated with the popular AI tool.
As of the start of this month, the campaigns did not appear to have meaningfully increased audience engagement or reach. OpenAI added that it is working closely with partners across the tech industry, government, and civil society to prevent such behavior from arising.
The news comes at a time when many experts are casting doubt on generative AI, warning that it can end up doing more harm than good. With the election period upon us, the risk of misleading content taking center stage is all the greater.
The findings made public by OpenAI describe how several actors made intricate attempts to exploit its popular chat assistants for their own gain. Not only text but also images were generated, and at higher volumes than seen before.
Meanwhile, fake engagement ran high, with AI used to produce inaccurate and misleading comments across posts on social media.
Over the past year or so, plenty of doubt has been cast on this front, including over what could happen if such influence campaigns gained real traction.
During a recent press briefing, OpenAI experts called such reports eye-openers that raise many questions, including how loopholes must be closed to better understand what is taking place.
An operation from Russia dubbed Doppelganger used the firm's models to produce headlines for news stories and convert media pieces into posts for popular social media sites. This generated engagement and comments in multiple languages seeking to sway opinion on controversial topics such as Russia's war against Ukraine.
Another striking finding involved a campaign that used the models to debug code and roll out short political comments in various languages on the popular messaging app Telegram.
As for the Chinese actors, they produced influence posts for Meta's Facebook and Instagram, and used the models to research similar activity and roll out text in various languages across social media.
Similar behavior was noted by the makers of ChatGPT in Iran, where the International Union of Virtual Media used AI to roll out content aimed at various global communities.
The disclosure by OpenAI resembles the reports that various other tech giants publish routinely.
For example, Meta recently released a report on coordinated inauthentic behavior in which Iranian marketing companies used fake Facebook accounts to run influence campaigns on the app, targeting users across regions including Asia, the EU, Canada, and the US.
While Meta confirmed it had worked hard to dismantle the operation, such activities keep resurfacing despite the firm's continuous efforts to put an end to them once and for all.
Image: DIW-Aigen
Read next: HP, Apple, And Dell Dominate U.S. Laptop Market; Notable Shares Held By Acer, Lenovo, Samsung, Microsoft