Urgent Warning: AI Scams Surge as Big Tech Fails Consumers

New reports confirm that AI scams are skyrocketing while Big Tech companies struggle to protect consumers. The consumer group Which? has revealed alarming findings: deepfake videos impersonating trusted figures, including financial journalist Martin Lewis and UK Prime Minister Keir Starmer, are misleading the public into investing in fraudulent schemes.

These convincing deepfakes present false endorsements for investment scams, giving the impression of government backing, and authorities are urging immediate action. With AI technology evolving rapidly, the scams have become increasingly difficult to detect, causing widespread concern among experts and consumers alike.

According to the Financial Conduct Authority, a staggering 20% of people making investment decisions reportedly rely on online influencers. This statistic underscores the urgent need for stricter regulation to protect vulnerable internet users from AI-generated deception. In 2025 alone, reports of AI impersonation scams have surged, prompting calls for the UK government to take decisive action against Big Tech firms such as YouTube, X (formerly Twitter), and Meta.

Rocio Concha, Director of Policy and Advocacy at Which?, highlights the gravity of the situation: “AI is making it much harder to detect what’s real and what’s not. Fraudsters know this—and are exploiting it ruthlessly.” Concha emphasizes that current measures by tech platforms are insufficient to combat this growing threat, putting millions of users at risk.

The consumer group is demanding that the UK government incorporate rigorous measures into its upcoming fraud strategy and hold Big Tech accountable for the dangerous content proliferating on its platforms. As deepfake technology continues to advance, criminals are leveraging it to create convincing scam websites that mimic reputable sources such as Which? and the BBC.

In response to these concerns, YouTube has introduced a new tool that allows creators to flag AI-generated video clones of themselves, a potential step toward identifying deepfake content. Critics argue, however, that such measures do not address the broader problem of financial fraud facilitated by these technologies.

Consumers are urged to exercise caution and verify the authenticity of online content, particularly anything offering investment advice. Always confirm information through official channels, check that links lead to legitimate websites, and treat unsolicited endorsements with suspicion to avoid falling victim to these scams.

As this crisis unfolds, the public is left to wonder: what will it take for Big Tech to step up and protect its users? The stakes are high, and the time for action is now. Stay informed, stay safe, and share this warning to help protect others from AI scams.