Elon Musk’s Grok AI rolled out a new image generator, Flux 1, this past month. However, things might not be going as planned, as a new study is making some alarming claims.
Users are generating fake images through Musk’s popular and widely marketed Grok chatbot, and the X app is doing little to prevent the tool from breaking the platform’s own rules.
Some of the images relate to the upcoming US presidential election, depicting a fake Donald Trump and Kamala Harris, and they were then shared on X. Some are easy to spot as fake, but others are hard to tell apart from genuine photos.
Controversial scenes of the candidates giving a thumbs up to the September 11 attacks have also appeared, alongside more subtle offerings. The images keep drawing attention and now have close to one million views on the app.
Researchers at the Center for Countering Digital Hate published findings on Grok’s failure to reject election-related prompts and on the questionable pictures it produced.
Common prompts included Trump lying sick in a hospital bed in hospital attire, a fight breaking out at a polling booth, and a polling booth set on fire. If that were not bad enough, images of Harris using drugs were another popular request.
According to the report, Grok struggled to produce convincing images of VP Kamala Harris and JD Vance but rolled out fake images of Trump with ease. The reason why is still debatable.
Experts are concerned that Grok’s pictures will only grow more convincing as the election race draws closer and could alter voters’ perceptions. Meanwhile, other AI tools like ChatGPT and Midjourney are trying to stop the spread of misinformation by banning various terms from their generators.
The question remains: why isn’t Musk’s X doing more to curb the issue? The degree of manipulation on display is jaw-dropping, and it is leaving many people confused. Many hope X will start enforcing its own rules before it’s too late.
Read next: X Anticipates Complete Shutdown In Brazil As Starlink’s Bank Accounts Frozen