Images created using generative AI have started appearing in scam campaigns, and social engineering is one area in particular where the technology is being used to great effect. According to data recently released by Darktrace, social engineering attacks that use generative AI have risen by 135%. Malicious actors can use tools such as ChatGPT and Midjourney to make their social engineering campaigns far more believable than they would otherwise be.
Employees are increasingly concerned about the implications: 82% said they worry that scams will become realistic enough to make more users fall prey to them.
The 135% increase occurred over the span of a single month, between January and February. ChatGPT was hailed as revolutionary, but its ability to facilitate malicious actions must also be addressed.
Malicious emails are on the rise, and potential victims are finding it ever harder to discern their origins. Attackers could once be spotted by their poor English, but ChatGPT has largely done away with such obvious cues.
Greater linguistic sophistication, coupled with how easily generative AI platforms can be accessed, is creating a perfect storm. Companies such as OpenAI must take steps to mitigate these forms of malicious use.
However, the challenges of preventing malicious use are numerous and varied. More work will need to be done to fully understand the manner in which ChatGPT and other generative AI platforms are used before any concrete solutions can be proposed.
Until then, users must remain vigilant when interacting with written content. If an email asks you to click on a link and enter login details, there is a good chance that it is a scam or a phishing attempt.
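That rule of thumb, flagging messages that both contain a link and ask for credentials, can be sketched as a simple script. This is a hypothetical illustration only: the phrase list and logic are assumptions for the sake of the example, and real phishing filters are far more sophisticated, especially now that AI-written scams lack the obvious cues this checks for.

```python
import re

# Hypothetical list of credential-request phrases; a real filter would use
# far richer signals than keyword matching.
CREDENTIAL_PHRASES = [
    "enter your password",
    "login details",
    "log in details",
    "verify your account",
    "confirm your identity",
]

URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)


def looks_like_phishing(email_body: str) -> bool:
    """Flag an email that both contains a link and asks for credentials."""
    text = email_body.lower()
    has_link = bool(URL_PATTERN.search(text))
    asks_for_credentials = any(phrase in text for phrase in CREDENTIAL_PHRASES)
    return has_link and asks_for_credentials


sample = "Please visit http://example.com and enter your password to continue."
print(looks_like_phishing(sample))  # → True
```

Even a well-written, AI-generated scam still has to include the link and the credential request somewhere, which is why this particular combination remains worth watching for.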