Social media platforms like Facebook, Instagram, TikTok and X (formerly known as Twitter) are still littered with scam ads, a watchdog has warned.
Consumer experts Which? found “blatant” fraudulent advertising, from copycats of major retail brands to investment scams and ads using the identities of celebrities despite them having no connection with the product or service.
They found social media sites, as well as YouTube, Google and Bing, were playing host to obvious scam adverts in November and December, even though the Online Safety Act had received Royal Assent weeks earlier.
The Act does not officially come into force on scam adverts until after Ofcom finalises the codes of practice, which the regulator will use to set the standard platforms must meet.
On Meta’s ad library, Which? found Facebook and Instagram hosting multiple copycat adverts impersonating major retailers around the time of the Black Friday sales, including electricals giant Currys plus clothing brands River Island and Marks & Spencer. Each advert attempted to lure victims to bogus sites in a bid to extract their payment details.
On YouTube and TikTok, Which? found sponsored videos in which individuals without Financial Conduct Authority authorisation gave often “highly inappropriate” investment advice.
On X, an advert led to a fake BBC website and featured an article falsely using Martin Lewis to endorse a company which promoted itself as a crypto get-rich-quick platform.
Beneath the advert was a note providing context from other site users, a feature known on X as community notes. It warned: “This is yet another crypto scam using celebrities.” Despite the warning, the advert remained live.
Which? said it was concerned that the findings suggested online platforms may not be taking scam adverts seriously enough.
It has called for a dedicated fraud minister to make the problem a “national priority”.
Microsoft, the owner of Bing, and TikTok were the only platforms to tell Which? they had removed the scam or harmful content reported to them.
Facebook, Google, Instagram and X did not report back to Which? on whether the adverts reported to them had been blocked or removed.
Rocio Concha, Which? director of policy and advocacy, said: “Most of the major social media platforms and search engines are still failing to protect their users from scam ads, despite forthcoming laws that will force them to tackle the problem.
“Ofcom must put a code of conduct in place that puts robust duties on platforms to detect and take down scams using the Online Safety Act. The Government needs to make tackling fraud a national priority and appoint a fraud minister who can ensure there is a coordinated pushback against the epidemic of fraud gripping the UK.”
Google, also the parent company of YouTube, said: “Protecting users is our top priority and we have strict ads policies that govern the types of ads and advertisers we allow on our platforms. We enforce our policies vigorously, and if we find ads that are in violation, we remove them.
“We continue to invest significant resources to stop bad actors and we are constantly evaluating and updating our policies and improving our technology to keep our users safe.”
TikTok said its guidelines prohibited fraud and scams, adding that it had removed all the videos Which? shared with it for violating these, as well as related accounts.
Microsoft, Bing’s owner, told Which? that its policies prohibited advertising content that was deceptive, fraudulent or that could be harmful to users, also confirming that it had removed the content reported by the watchdog.
A Government spokesperson said: “Government action has helped reduce fraud by 13%, demonstrating progress on the rollout of our fraud strategy.
“The strategy announced the appointment of a new Anti-Fraud Champion, who recently helped secure the world’s first online fraud charter, a commitment from 12 of the world’s biggest tech companies to reduce fraud on their platforms, including fraudulent adverts.
“Our world-leading Online Safety Act will also require platforms to take proactive measures to prevent and swiftly remove fraudulent content. Companies that fail to comply with their new duties could face huge fines.”