There have been plenty of concerns surrounding generative AI chatbots such as ChatGPT, and today we’re hearing of more developments on that front.
Alarming reports of AI themes being used to spread malware are on the rise, and given how popular ChatGPT has become, the matter can no longer be ignored.
The malware steals data from victims without their knowledge by spoofing ChatGPT websites and related apps. The news comes as researchers at Meta observed a rise in this activity and identified two of the malware families involved as NodeStealer and Ducktail.
These campaigns pose as ChatGPT and a range of other generative AI tools, targeting users through malicious websites, advertisements, and browser extensions. Some operations exist solely to push unauthorized ads from fake business accounts.
Facebook’s parent company believes such operations have gone unnoticed for far too long and says it is now critical to understand these malware families. It has already observed rapid adversarial adaptation once the campaigns were detected.
Researchers say the threat is significant, with these groups working hard to evade detection and withstand disruption after their campaigns spread.
This is why security teams are treating the malware as a major online threat and coordinating their defenses across multiple fronts.
Since the start of March, researchers have identified nearly ten malware families using ChatGPT and other AI themes to compromise accounts across the web.
Some actors go a step further, publishing browser extensions in official web stores that claim to offer ChatGPT-based tools. They then promote these malicious extensions on social media and through sponsored search results, tricking people into installing the malware without noticing.
Moreover, some of these extensions did include working ChatGPT-related functionality alongside the malware, likely to avoid raising suspicion in official web stores.
Today, tech giant Meta says it has blocked nearly 1,000 ChatGPT-themed malicious URLs from being shared across its platforms, and it continues to share those URLs with industry partners.
Read next: A Closer Look At How Cutting-Edge Generative AI Technology Is Altering The Marketing Landscape