ChatGPT and the world of generative AI might not be everyone's best friend, but according to a recent study, the technology is actually keeping cybercriminals in check.
Hackers are skeptical about using AI tools such as ChatGPT for their malicious campaigns, fearing they may end up being the ones who get scammed.
The latest investigation on this front comes from Sophos, whose researchers set out to gauge what cybercriminals are currently interested in by combing through the forums where such actors are known to lurk.
The researchers noted how wary these individuals are of the many safeguards built into tools like ChatGPT, which block attempts to generate phishing landing pages, craft malicious emails, or write malware code.
This leaves hackers with two main workarounds. The first is to compromise AI accounts belonging to premium subscribers, which typically come with fewer restrictions. The second is to turn to ChatGPT derivatives: clones marketed as having the safeguards stripped out, letting users pursue their goals unimpeded.
However, not all hackers are willing to take the second route, suspecting that such clones are themselves disguised scams. As the study shows, threat actors worry that these AI tools and large language models will do the exact opposite of what they intend, and that this could cost them dearly.
Imagine being played when you thought you were the one in charge. Skepticism, in other words, currently outweighs enthusiasm.
Across the dark web forums examined, there were close to 100 posts related to AI. Compared with the roughly 1,000 posts about cryptocurrency discovered during the same period, that is a strikingly small number.
The study's authors also observed attempts to produce malware and other attack tools with the help of AI-powered chatbots. Notably, the results were met with skepticism from many other forum users.
At one end of the spectrum, a top threat actor promoted the supposed benefits of ChatGPT, while in another post, information about a poster's true identity was actually revealed.
The researchers also came across numerous thought pieces discussing AI's negative effects on society, including the ethical implications of its use. But for now, seeing it ward off cyber threats is certainly encouraging.
Read next: Popular AI Image Generator Stable Diffusion Fails To Represent Minorities While Promoting Harmful Stereotypes, New Research Claims