Taylor Swift filed three trademark applications in late April 2026 to protect her image and voice, a move that coincides with a new report documenting AI-generated deepfake advertisements on TikTok using her likeness — and those of other celebrities — to steal personal data from users.
The trademark filings include protection for a photograph of Swift holding a pink guitar during her Eras Tour, as well as two sound trademarks for the phrases “Hey, it’s Taylor Swift” and “Hey, it’s Taylor.” Swift has not publicly explained the reasoning behind the filings.
AI detection company Copyleaks published findings identifying a cluster of sponsored TikTok videos that appeared to show Swift, Kim Kardashian, Rihanna, and others promoting what researchers described as “potentially fraudulent or malicious services.” The clips used manipulated footage from real interviews and public appearances, featuring realistic-sounding AI-generated voices and visual filters designed to obscure imperfections in the generated footage.
In one ad, a deepfaked version of Swift — built from manipulated footage of her October appearance on The Tonight Show Starring Jimmy Fallon — promotes a fictitious program called “TikTok Pay,” claiming users can earn money by watching videos and submitting opinions. Viewers who click through are directed to a third-party site built with the AI platform Lovable, where they are prompted to enter their name and other personal information.
The broader scam ad landscape is drawing regulatory attention. The nonprofit Consumer Federation of America recently sued Meta, alleging the company misled users about its efforts to address scam ads on Facebook and Instagram while profiting from them. The US Federal Trade Commission also reported this week that social media scams have surged, with Facebook accounting for the highest total financial losses.
The Copyleaks report suggests that deepfake ads are growing more sophisticated over time, posing ongoing reputational and legal risks for the celebrities whose likenesses are exploited — and financial or privacy risks for the users they target.
Source: WIRED