The world of online scams has been debated in the tech world for years. But did you know that matters might be getting worse, thanks to AI?
Researchers at Harvard describe a future where online scams are less about humans and more a battle between AIs, with one model launching attacks while another works to fend them off.
Their study shows that such scams keep climbing, and the fact that they are now so meticulously crafted makes them hard to avoid.
According to the report from Harvard Business School researchers, up to 60% of study participants were duped by AI-generated phishing emails, a success rate comparable to phishing messages written by humans.
These scams trick all kinds of people into sharing personal data. Scammers send emails or texts posing as a large organization or a trusted person and request sensitive information such as credit card details.
Phishing is an old tactic, but AI is making it so much more convincing that it has become hard to avoid.
The report also shows how LLMs can automate the entire phishing process, generating the emails, identifying promising targets, and gathering background information on them, cutting the cost of an attack by nearly 95%.
The authors expect this to drive a drastic increase in both the quality and quantity of phishing scams over the next couple of years.
While AI models might make such scams worse, the researchers note that the same technology can also be used to detect them. AI, in other words, looks like a double-edged sword here.
Detection models keep improving. Many can flag phishing emails even when the malicious intent is not obvious, and they identify scams faster and more accurately than humans do.
Some models also produced sound recommendations for how to respond to phishing messages they had correctly identified.
In one experiment, for example, when an email dangled an attractive discount, the LLM advised verifying the offer on the company's official website to confirm the legitimacy of the source.
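To give a rough sense of how this kind of LLM-based triage can work, here is a minimal sketch that simply asks a model to label a message and suggest a safe next step. The client library, model name, and prompt are our own illustrative assumptions, not the setup the Harvard team used.

```python
# A minimal sketch of LLM-based phishing triage, assuming the OpenAI
# Python SDK (pip install openai) and an OPENAI_API_KEY set in the
# environment. The model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def classify_email(body: str) -> str:
    """Ask the model whether an email looks like phishing and why."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a phishing detector. Label the email "
                    "PHISHING or LEGITIMATE, then give a one-sentence "
                    "reason and recommend a safe next step."
                ),
            },
            {"role": "user", "content": body},
        ],
    )
    return response.choices[0].message.content

suspicious = (
    "Dear customer, your account is locked. Click here within 24 hours "
    "and confirm your card number to restore access."
)
print(classify_email(suspicious))
```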
The FTC has repeatedly warned users that online scams are not slowing down and that we need to stay alert. Its first piece of advice: never click links in messages, especially when the sender is unfamiliar.
Second, the agency says it is important to check whether you actually hold an account with the company or know the sender, and if not, to report the message right away, because unreported scams only multiply the danger.
Image: DIW-Aigen
Read next: 76% of Developers Use AI Tools, 38% Report Inaccuracies, 95% See Productivity Boost