A recent study by Abnormal Security, an email security platform, has revealed a troubling trend in cybercrime: criminals are now exploiting generative artificial intelligence (AI) tools such as ChatGPT to craft highly convincing email attacks. These AI-generated attacks are hard to detect and have raised alarm among security leaders.
Abnormal Security examined how frequently AI-generated email attacks were being intercepted on its platform. The results showed a marked rise in threat actors adopting generative AI tools to produce email attacks that closely imitate genuine communication, making them far harder to recognize.
The growing use of generative AI tools such as ChatGPT has raised concerns about the fallout from AI-generated email attacks. The research confirms that cybercriminals are employing AI to mount more sophisticated attacks, including vendor fraud, refined variations of traditional business email compromise (BEC), and credential phishing.
Employees once relied on spotting spelling and grammar mistakes to recognize phishing attempts. Generative AI has shifted that landscape: it lets cybercriminals produce impeccably written emails that closely resemble legitimate messages, making it much more difficult to distinguish the real from the fraudulent.
Dan Shiebler, who leads the machine learning team at Abnormal Security, pointed to this change in criminal tactics. Traditional BEC attacks typically reused familiar content, which made them easier to detect. Generative AI tools now let cybercriminals produce unique content for every message, defeating detection based on established patterns, and they also make it easy to scale attacks up.
The research also revealed that threat actors are expanding beyond traditional BEC. They are now using AI tools to carry out vendor impersonation, known as vendor email compromise (VEC) attacks, which exploit the trust established between customers and vendors through highly effective social engineering. Because these messages concern routine matters like payments and invoices and lack obvious indicators such as typographical errors, they are especially difficult to detect.
Shiebler voiced serious concern about a notable surge in attacks showing AI indicators. Generative AI lets cybercriminals compose extremely sophisticated content that tricks recipients into taking malicious actions, and it strips out the grammatical and typographical errors that once gave conventional BEC attacks away, making them harder to detect.
AI also makes email attacks more personal. Malicious actors can feed fragments of a target's email history or LinkedIn profile into their AI prompts, generating emails that match the context, language, and tone the victim expects. This level of customization makes BEC attacks even more deceptive.
Cybercriminals have previously leaned on new domains and free webmail accounts to evade detection, and security measures quickly caught up with those tactics. Generative AI is following a similar trajectory, but with a twist: indiscriminately blocking all AI-generated email is impractical, because employees increasingly rely on AI platforms for legitimate business communication.
To illustrate the impact, Abnormal Security shared an intercepted attack falsely attributed to "Meta for Business." The email claimed the recipient's Facebook Page had violated community standards and provided a link to file an appeal. In reality, the link led to a phishing page designed to steal the recipient's Facebook credentials. The flawless grammar and on-brand language made the attack all the more convincing.
Detecting AI-generated text is itself a hard problem, and Shiebler suggests that AI is the most effective tool for the job. Abnormal Security's platform uses open-source large language models (LLMs) to evaluate how probable each word is given its context, allowing it to classify emails whose language is consistent with AI generation. External AI detection tools, such as OpenAI Detector and GPTZero, were used to validate these findings.
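To make the idea concrete, here is a minimal sketch of that word-probability technique: score an email's perplexity under an open-source LLM and flag text that is unusually predictable. This illustrates the general approach only, not Abnormal Security's actual pipeline; the model choice (GPT-2) and the threshold are assumptions for the example.

```python
# Minimal sketch of perplexity-based AI-text scoring with an open-source LLM.
# Illustrative only: model choice and threshold are assumptions, not the
# vendor's production system.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under the model.

    AI-generated text tends to score lower (i.e., it is more predictable,
    word by word) than human-written text of similar length.
    """
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the mean
        # cross-entropy loss over the sequence; exp(loss) is perplexity.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# Hypothetical cutoff; a real system would calibrate this on labeled mail
# and combine the score with many other signals.
AI_PERPLEXITY_THRESHOLD = 40.0

def looks_ai_generated(email_body: str) -> bool:
    return perplexity(email_body) < AI_PERPLEXITY_THRESHOLD
```

In practice a single perplexity score would be just one signal among many; as the next point notes, it produces false positives on its own.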
However, Shiebler acknowledges that this approach is not foolproof. Human-written emails can contain word sequences that resemble AI output, producing false AI classifications. Organizations are therefore advised to adopt modern solutions that can detect highly sophisticated AI-generated attacks and distinguish legitimate email, even AI-assisted email, from email with malicious intent.
Beyond detection tooling, Shiebler stresses the importance of sound cybersecurity practices within organizations. Ongoing security awareness training, good password management, and multi-factor authentication (MFA) can all limit the damage if an attack does succeed.