Meta Expands Its Use of Facial Recognition Tools To Combat Scams in the UK and EU

Last year, we saw tech giant Meta roll out an innovative approach using face recognition tools in different parts of the world.

Now, we can confirm that the project is being expanded to the UK and EU to help combat the growing number of scams. The decision comes after Facebook’s parent firm concluded lengthy discussions with regulators in the region, who signed off on the idea.

Meta has already tested two different kinds of tools. The first targets what is commonly known as celeb-bait: scam ads that misappropriate famous people’s pictures to lure victims into handing over their details or purchasing fake products.

How this works is that when Meta’s systems suspect an ad featuring the image of a celebrity or public figure might be celeb-bait, the technology steps in to make a comparison, distinguishing real endorsements from fakes fairly quickly.

The tools compare faces in the ads with the actual images of these people on Facebook and Instagram, usually their profile pictures. If the system determines the ad is likely a scam, the ad is removed and any facial data generated from it is deleted. In effect, the data exists only for that moment of one-time comparison.
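Meta has not published implementation details, but the one-time comparison it describes resembles standard embedding-based face matching: each face is reduced to a numeric vector, and two vectors are compared by similarity. The sketch below is an illustrative assumption, not Meta’s actual system; the function names and the 0.8 threshold are invented for the example.

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # illustrative cutoff, not Meta's actual value


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_celeb_bait(ad_embedding: np.ndarray,
                     profile_embedding: np.ndarray) -> bool:
    """One-time comparison: True if the face in the ad matches the
    public figure's profile picture. In the workflow the article
    describes, the caller would discard both embeddings immediately
    after this check, whether or not a match was found."""
    return cosine_similarity(ad_embedding, profile_embedding) >= MATCH_THRESHOLD
```

In a pipeline like this, a match against a genuine profile picture is what flags the ad for removal, and deleting the embeddings right after the call is what makes the comparison “one-time.”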

The company also says the material is not used for any other purpose, and that it has weighed the safety concerns attached to users’ facial data. One of Meta’s leading representatives said the firm deletes the facial data generated from both the ad and the profile content as soon as a comparison is made, and that the information remains encrypted throughout the process.

As per the tech giant, the feature is in an early phase of testing with a small number of celebrities and public figures. Early results are promising, and enrolled celebrities are notified when scams misusing their pictures are detected.

The other tool in question is also based on facial recognition: it helps users recover their Meta accounts. Instead of requiring personal information such as copies of an ID, Meta is testing video selfies to help people verify their identity when they are locked out of an account.

The user uploads a video selfie, and the tool uses facial recognition to compare it with the profile picture on the account they are trying to access.
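The recovery flow described above can be sketched in the same embedding terms. This is again a hypothetical illustration: the function, the per-frame comparison, and the cleanup step are assumptions meant to show how a “delete regardless of outcome” policy might look in code, not Meta’s implementation.

```python
import numpy as np


def verify_video_selfie(selfie_frames: list,
                        profile_embedding: np.ndarray,
                        threshold: float = 0.8) -> bool:
    """Compare face embeddings taken from video-selfie frames against
    the account's profile-picture embedding. The frames are cleared at
    the end whether or not a match is found, mirroring the immediate
    deletion the article attributes to Meta."""
    try:
        similarities = [
            float(np.dot(frame, profile_embedding)
                  / (np.linalg.norm(frame) * np.linalg.norm(profile_embedding)))
            for frame in selfie_frames
        ]
        return max(similarities) >= threshold
    finally:
        selfie_frames.clear()  # discard biometric data regardless of outcome
```

The `try`/`finally` structure is the point of the sketch: the biometric frames are wiped on every path, including a failed match, which is what would back up the privacy claims discussed below.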

The whole process may sound a little troubling to some, since people’s pictures could be misused if they fall into the wrong hands. As per Meta, the video selfies are encrypted and stored securely.

Meta also says the videos are removed immediately after the check, even when no match is detected, so privacy advocates can rest assured that the footage is not retained in its databases.

Users of both Facebook and Instagram will get the chance to opt out of the service in the next few weeks, Meta shared in its latest blog post. Given the scale of the problem, it makes sense why all of this is being done in the first place.

Remember, deepfake scams are on the rise and they end up costing victims enormous sums. In some cases, companies are defrauded of thousands of dollars, if not millions, through such schemes.

As per the Guardian, one scam ring operating from Georgia defrauded thousands of users in the EU and Canada, taking up to $35M by tricking people with fake celebrity ads on both Google and Facebook.

Other users get scammed through Meta’s other apps, such as Instagram. One notable case comes from France, where a woman named Anne lost $850,000 after threat actors used AI-generated images of Brad Pitt in a hospital to extort money from her.

Image: DIW-Aigen
