Meta Takes Anti-Scam Measures By Expanding Tests On Facial Recognition

Meta is expanding its facial recognition technology to help curb scams.

Facebook’s parent firm is currently testing facial recognition to combat the rise in celebrity scam ads. The company’s VP for content policy explained in a blog post how it is boosting anti-scam measures, including automated checks that run as part of Meta’s ad review system.

The goal is to make it harder for threat actors to evade checks and run fake ads that trick users into clicking on apps like Facebook and Instagram. Scammers frequently use popular names from the entertainment world in ads that link to scam websites, which then ask users to reveal personal details or send money. Such scams are commonly known as celeb-bait; they not only look real but are also hard to detect.

The tests use facial recognition as a backstop to Meta’s existing ad checks. When a fake celebrity image is detected, the system warns users that they may be looking at celeb-bait. Meta is now expanding this by comparing faces in suspect ads against public figures’ profile pictures on its apps. If a match is confirmed, the system blocks the ad.

For now, Meta is not using this technology for any purpose other than fighting scams. Any facial data generated from an ad is deleted after a one-time comparison, regardless of whether the system finds a match.

Early tests of this approach with a small group of celebrities and public figures proved promising for the tech giant, improving both the speed of detection and the efficacy of flagging different kinds of scams.

Meta believes facial recognition can be especially useful for detecting deepfake scam ads, or cases where generative AI is used to produce images of famous faces. The company has been accused for years of not doing enough to stop this sort of fraud, with many people falling victim to crypto scams and losing money.

This is why the company is pushing hard to ensure its anti-fraud measures do their job without becoming a nuisance. It is also notable that this is happening at a time when Meta hopes to collect as much user data as possible for AI training purposes.

In the coming weeks, Meta plans to send in-app alerts to a large number of commonly targeted public figures, notifying them that they have been enrolled in Meta’s system. Anyone who objects can opt out through their Accounts Center, the company explained.

Meta is also testing facial recognition to detect celebrity imposter accounts. Scammers impersonate public figures to make their fraud more convincing, which is why Meta is targeting suspicious accounts that use others’ images for gain.

In other news, Meta is also testing facial recognition on video selfies. This enables quicker account recovery for users who are locked out of their social media accounts after scammers hack them.

The process will work much like unlocking a device or app with Face ID on an iPhone. Uploaded video selfies will be encrypted and stored securely, and will never be visible on users’ profiles.

Meta says the tests will not roll out in the EU or UK for now. However, it confirmed that other parts of the world will be part of the effort.

