Meta tests facial recognition to detect ‘celeb-bait’ ad scams and enable easier account recovery | TechCrunch
Meta is expanding tests of facial recognition as an anti-scam measure to combat celebrity scam ads and more, the Facebook owner announced on Monday.
Monika Bickert, Meta’s Vice President of Content Policy, wrote in a blog post that some of the tests aim to strengthen the company’s existing anti-scam measures, such as the automated scans (using machine learning classifiers) it runs as part of its ad review system, making it harder for fraudsters to fly under its radar and trick Facebook and Instagram users into clicking on fake ads.
“Scammers often try to use images of public figures, such as content creators or celebrities, to bait people into engaging with ads that lead to scam websites where they are asked to share personal information or send money. This scheme, often referred to as ‘celeb-bait,’ violates our policies and is bad for people who use our products,” she wrote.
Celebrities do, of course, feature in many legitimate ads. “But because celeb-bait ads are often designed to look real, they’re not always easy to detect,” she added.
The tests appear to use facial recognition as a backstop for checking ads that Meta’s existing systems have flagged as suspicious, when those ads contain the image of a public figure at risk of being used as celeb-bait.
“We will try to use facial recognition technology to compare faces in the ad against the public figure’s Facebook and Instagram profile pictures,” Bickert wrote. “If we confirm a match and that the ad is a scam, we’ll block it.”
Meta says the feature is not being used for any purpose other than fighting scam ads. “We immediately delete any facial data generated from ads for this one-time comparison, regardless of whether our system finds a match, and we don’t use it for any other purpose,” she said.
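Meta has not published implementation details for this matching step, but a common way to build such a backstop is to compare face embeddings against a similarity threshold. The Python sketch below is purely illustrative, not Meta’s pipeline: embed_face() and MATCH_THRESHOLD are hypothetical stand-ins for whatever face-embedding model and cutoff a real system would use, and the final cleanup step simply mirrors the one-time-match deletion policy described above.

```python
import numpy as np

MATCH_THRESHOLD = 0.85  # hypothetical cosine-similarity cutoff, not a Meta value

def embed_face(image: np.ndarray) -> np.ndarray:
    """Placeholder for a real face-embedding model (e.g. a CNN encoder).
    Here it just flattens and normalizes pixels so the sketch runs."""
    vec = image.astype(np.float32).ravel()[:512]
    vec = np.pad(vec, (0, 512 - vec.size))
    return vec / (np.linalg.norm(vec) + 1e-9)

def match_celeb_bait(flagged_ad_image: np.ndarray,
                     profile_embeddings: dict[str, np.ndarray]) -> str | None:
    """Compare the face in a flagged ad against enrolled public figures'
    profile-picture embeddings; return the matched name, if any."""
    ad_embedding = embed_face(flagged_ad_image)
    try:
        for name, profile_vec in profile_embeddings.items():
            if float(ad_embedding @ profile_vec) >= MATCH_THRESHOLD:
                return name  # match found: route the ad for scam review/blocking
        return None
    finally:
        # Mirror the stated policy: discard facial data derived from the ad
        # immediately after the one-off comparison, match or no match.
        del ad_embedding
```

In this framing, the enrolled public figures’ profile pictures would be embedded once with the same model and kept in profile_embeddings, while anything computed from the ad itself is discarded after the check.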
The company said early tests of the approach, with “a small group of celebrities and public figures” (it did not specify whom), showed “promising” results in improving the speed and effectiveness of detecting and enforcing against this type of scam.
Meta also told TechCrunch that it thinks facial recognition would be effective at detecting deepfake scam ads, where generative AI has been used to produce imagery of famous people.
The social media giant has been criticized for years for failing to stop scammers from misappropriating celebrities’ faces in a bid to use its ad platform to push scams, such as dubious crypto investments, on unsuspecting users. So the timing is interesting: Meta is promoting facial-recognition-based anti-fraud measures for this problem now, at a moment when the company is also trying to gather as much user data as it can to train its commercial AI models (as part of the industry-wide scramble to build generative AI tools).
In the coming weeks, Meta said, it will start showing in-app notifications to a larger group of public figures who have been hit by celeb-bait, letting them know they are being enrolled in the system.
“Public figures enrolled in this protection can opt out in their Accounts Center anytime,” Bickert noted.
Meta is also testing the use of facial recognition to spot celebrity impostor accounts, for example where fraudsters impersonate public figures on the platform to expand their opportunities for fraud, again by using AI to compare profile pictures on a suspicious account against a public figure’s Facebook and Instagram profile pictures.
“We hope to test these and other new methods in the near future,” Bickert added.
Video selfies plus AI for account unlocking
In addition, Meta announced that it is testing facial recognition applied to video selfies to enable faster account unlocking for people who have been locked out of their Facebook or Instagram accounts after the accounts were taken over by scammers (for example, if someone was tricked into handing over their password).
The move appears aimed at winning users over to facial recognition for identity verification, with Meta suggesting it will be a quicker and easier way to regain account access than uploading an image of a government-issued ID (the standard route for unlocking access now).
“Video selfie verification expands options for people to regain account access, only takes a minute to complete and is the easiest way for people to verify their identity,” said Bickert. “While we know criminals will keep trying to exploit account recovery tools, this verification method will ultimately be harder for them to abuse than document-based identity verification.”
The facial-recognition-based video selfie verification Meta is testing requires the user to upload a video selfie, which is then processed with facial recognition technology to compare the video against profile pictures on the account they are trying to access.
Meta says the process is similar to the identity verification people already use to unlock a phone or access other apps, such as Apple’s Face ID on the iPhone. “Once someone uploads a video selfie, it will be encrypted and stored securely,” Bickert added. “It will never be visible on their profile, to friends or to other people on Facebook or Instagram. We immediately delete any facial data generated after this comparison, regardless of whether there’s a match or not.”
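Again, Meta has not detailed the mechanics. The sketch below is an illustrative guess at how a selfie-to-profile comparison could be framed, reusing a caller-supplied embed callable (such as the hypothetical embed_face() placeholder above); threshold and min_fraction are made-up parameters, and the cleanup step reflects the deletion policy Bickert describes rather than any known Meta code.

```python
from typing import Callable, Sequence
import numpy as np

def verify_video_selfie(frames: Sequence[np.ndarray],
                        profile_embedding: np.ndarray,
                        embed: Callable[[np.ndarray], np.ndarray],
                        threshold: float = 0.85,
                        min_fraction: float = 0.6) -> bool:
    """Return True if enough frames of the selfie video match the embedding of
    the profile picture on the account being recovered."""
    frame_embeddings = [embed(frame) for frame in frames]
    try:
        matched = sum(float(e @ profile_embedding) >= threshold
                      for e in frame_embeddings)
        return bool(frames) and matched / len(frames) >= min_fraction
    finally:
        # Reflect the stated policy: facial data derived from the selfie is
        # deleted right after the comparison, whether or not it matched.
        frame_embeddings.clear()
```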
Normalizing the practice of uploading and storing a video selfie for ID verification could also be a way for Meta to expand into the digital identity space, if enough users opt in to uploading their biometrics.
There are no trials in the UK or EU – yet
All of these facial recognition tests are being run globally, according to Meta. However, the company noted, rather conspicuously, that the tests are not currently being conducted in the UK or the European Union, where comprehensive data protection regulations apply. (In the specific case of biometrics for ID verification, the bloc’s data protection framework demands explicit consent from the individuals concerned for such a use case.)
Given this, Meta’s latest tests seem to fit within a broader PR strategy the company has mounted in Europe in recent months to pressure local lawmakers into diluting citizens’ privacy protections. This time, the cause being invoked to press for unfettered data-processing-for-AI is not the (self-serving) notion of data diversity or claims of lost economic growth, but the more straightforward goal of fighting scammers.
“We are in contact with the UK regulator, policy makers and other experts as the trials continue,” Meta spokesperson Andrew Devoy told TechCrunch. “We will continue to seek feedback from experts and make changes as features evolve.”
However, while the use of facial recognition for a narrow security purpose may be acceptable to some, and might indeed be possible for Meta to undertake under existing data protection rules, using people’s data to train commercial AI models is another kettle of fish entirely.