A new backlash against generative AI is taking shape, and it is less about the technology than about who owns your image, voice and likeness. Generative AI makes it trivial to copy a face or use a voice without permission. A celebrity endorsement can be forged, bought as an ad and pushed into millions of feeds before lawyers can issue a cease and desist letter. The fight over AI has entered a harder phase, one measured in licensing fees, fraud claims, takedown demands and liability.

AI risk now stacks deepfakes on top of bias, hallucination, job automation and ethical misuse, and the discomfort is turning into real impact. Recent reporting shows these issues moving beyond conceptual talking points to real-world problems. Scammers have used AI-made videos of Taylor Swift, Rihanna and other celebrities in fake TikTok promotions, with the clips sending users toward third-party sites seeking personal information. These ads copied the look of real interviews and public appearances, then dressed them up as bogus rewards programs. Swift filed trademark applications tied to her likeness and voice after these kinds of deepfake ads spread.

A famous person’s likeness is not just publicity. It is an asset. It sells tours, streaming catalogs, cosmetics, sneakers, films, political support and private equity backed brands. Once AI can imitate that asset cheaply, misuse moves from merely creepy or extortionate behavior to real problems that dilute brands and damage business relationships.

Deepfakes Have Become A Commercial Attack

Deepfakes are no longer only a personal harm story, built around compromising images or humiliating videos. They are becoming a business problem.

A fake CEO can be used to push a payment request or a cloned executive can approve a process change that can cause real harm. A fake athlete can steer fans toward a gambling style scam. A fake singer can send loyal followers to a data harvesting site. A fake doctor can sell junk medicine with the borrowed authority of a white coat and a familiar face.

Fakes also damage personal and political reputations. The Guardian reported on May 5, 2026, that Italian Prime Minister Giorgia Meloni condemned an AI-generated lingerie image of her that went viral, calling attention to cyberbullying and misinformation.

IBM’s 2025 Cost of a Data Breach Report found that 16% of breaches studied involved attackers using AI tools, most often for phishing or deepfake impersonation attacks. Cybersecurity professionals are now cautioning their enterprise customers to be wary of suspicious audio, urgent requests received via digital channels, fake emergencies and messages asking for money or sensitive information, and to verify any such request through a trusted channel before acting.

Fake endorsements raise a separate commercial risk. Recent reporting from The Verge and Wired found that AI manipulated videos of celebrities including Taylor Swift and Rihanna were used in TikTok scam ads promoting bogus rewards programs, with users sent to third party sites seeking personal information. That kind of fraud does more than create unjust revenue for scammers. It can dilute a brand, embarrass the person being copied, confuse consumers and reduce the value of legitimate endorsements. It can also force celebrities, companies and platforms to spend money proving they had nothing to do with an ad that never should have traded on their identity in the first place.

Celebrities Are The Early Test Case

Concerns over inappropriate use of people’s likenesses and images are making their way into laws and regulatory action. The law is being forced to answer a question it did not have to answer at this scale before: when does a digital version of a person become something that person can control? These moves respond to gaps in current intellectual property protections, which did not anticipate AI.

Copyright protects creative works and outputs while trademark law protects brand identifiers. The right of publicity protects commercial identity, and privacy law guards against personal intrusion. Fraud law punishes deception. AI-generated fakes cut across all of these areas, but they add a new problem. A person or brand can now be turned into synthetic media that performs new acts, says new words and sells new products. The hard part is distinguishing a real, authorized promotion from a fake, unauthorized one.

The U.S. federal government is catching up. The NO FAKES Act was reintroduced in April 2025 by lawmakers seeking protections against unauthorized digital replicas. State governments are also taking action. Tennessee’s ELVIS Act, which took effect on July 1, 2024, updated the state’s personal rights law to protect songwriters, performers and music workers from AI misuse of their voices and unauthorized voice cloning, according to the governor’s office.

The Meloni episode shows how quickly synthetic identity abuse can become a political, reputational and personal harm at once. Italy has already moved deepfake abuse into criminal law. Under Law No. 132 in force as of October 10, 2025, the unlawful dissemination of AI-generated or AI-altered images, videos or voices is now a criminal offense. The law applies when such content is shared without consent, is capable of misleading people about its authenticity and causes unjust harm. Violators face prison terms of one to five years. The law also adds AI-related aggravating factors for certain crimes, including market manipulation.

The challenge also extends to the platforms that help spread AI-generated fakes. The Verge reports that platforms are struggling with realistic celebrity deepfake scams. A platform such as Instagram, TikTok or X may argue that it did not create the fakes and so should not be held liable for their spread, but that answer won’t satisfy people lured into scams by fake celebrity ads.

In many ways, AI represents the next phase of platform liability. The internet has been here before, through piracy, ad fraud, election manipulation, harassment and counterfeit goods. Each began as a moderation problem carrying real risk to advertisers, brands, celebrities and influencers. With the ease of AI-generated images and the speed and low cost of distribution, the issues are much more pressing. The victim may be a star, a politician, a child, a worker or a private citizen. The damage can happen before human or even automated moderation can take action.

Moving Towards Practical Enforcement

One approach is to make authentication and watermarking a normal part of the AI content pipeline. Put simply, every AI-generated image, video or audio clip should carry some kind of proof of origin, much like a package has a shipping label or a document has a signature. That proof should show where the content came from, whether it was altered and whether the person or brand shown in it gave permission.

Disclosure labels are helpful, but only when the label travels with the content and people believe it. A label that says “AI-generated” means little if the video is downloaded, cropped, reposted and stripped of its warning. Watermarks can help, especially when they are both visible to humans and machine-readable by platforms, search engines and security tools. But they only work if they survive editing, reposting and platform hopping.
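The signing-and-verification idea behind such provenance labels can be sketched in a few lines. This is an illustrative toy, not the C2PA specification or any platform's actual pipeline: it binds an HMAC-signed manifest to a clip's bytes, so editing either the media or the manifest invalidates the signature. The key, manifest fields and function names are all hypothetical.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"issuer-secret-key"  # hypothetical issuer key

def make_manifest(media_bytes: bytes, creator: str, consent: bool) -> dict:
    """Build a provenance manifest cryptographically bound to the media's hash."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "likeness_consent": consent,
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(media_bytes: bytes, manifest: dict) -> bool:
    """True only if the media is unaltered and the manifest is authentic."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest())

clip = b"...video bytes..."
m = make_manifest(clip, creator="studio-x", consent=True)
print(verify(clip, m))            # True: intact, label still vouches for the clip
print(verify(clip + b"edit", m))  # False: media altered, label no longer vouches
```

The point of the sketch is the failure mode the article describes: once the bytes change, a merely attached label is worthless, while a cryptographically bound one at least fails loudly.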

Takedown and moderation systems play a different role. They help to reduce the spread of unauthorized media, but only after the fake is already public. That matters, but it is still a cleanup tool, not a prevention tool. By the time a victim files a complaint and a platform removes the clip, the fake may have already been copied, archived, shared in private groups or used to scam consumers. That is why the stronger answer is not just faster takedowns. It is a system that makes fake or unauthorized content easier to detect before it spreads.
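The "detect before it spreads" idea amounts to a gate at upload time rather than a complaint form after the fact. A minimal sketch, with hypothetical names: the platform blocks clips whose hash matches media already confirmed as fake and holds clips that arrive without any provenance credential. Real systems would use perceptual hashes that survive re-encoding and cropping; an exact SHA-256 match is shown only for brevity.

```python
import hashlib

# Hypothetical blocklist of hashes of media already confirmed as fakes.
KNOWN_FAKE_HASHES = {hashlib.sha256(b"confirmed fake clip").hexdigest()}

def screen_upload(media_bytes: bytes, has_credential: bool) -> str:
    """Decide an upload's fate before it reaches any feed."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest in KNOWN_FAKE_HASHES:
        return "block"    # known fake: stop it pre-publication
    if not has_credential:
        return "review"   # no provenance credential: hold for moderation
    return "publish"

print(screen_upload(b"confirmed fake clip", has_credential=True))  # block
print(screen_upload(b"new clip", has_credential=False))            # review
print(screen_upload(b"new clip", has_credential=True))             # publish
```

Even this toy shows why prevention and takedown are different tools: the gate runs once per upload, before distribution, while a takedown runs once per complaint, after the damage.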

A New Market For Permission Is Coming

There is a tension in the use of AI in the creative industry. On the one hand, studios see AI’s potential to enhance their work, much as computer graphics once made imagery more realistic and strengthened the storytelling. On the other hand, AI models trained on intellectual property they did not own are producing outputs the owners cannot control.

This represents a challenge on multiple levels if consent is required for each use of a protected image. A studio will need permission to use a dead actor’s image. A label will need permission to clone a singer’s voice. Ad platforms will need confidence that a celebrity endorsement is real.

That creates new challenges and opportunities for AI markets. Companies will need to build registries of rights and licenses, along with voice authentication and takedown systems. Insurers will need to extend coverage to synthetic media risks, and talent agencies will need contracts that spell out AI usage and enforcement. Even estate planning may have to change, covering rights to the digital use of a likeness as estate property.
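What a rights-and-licenses registry might look like at its core is simple enough to sketch. This is a hypothetical data model, not any existing registry's API: each grant names a rights holder, a licensee, a permitted use and an expiry, and a check answers whether a proposed use of someone's likeness is actually licensed.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class License:
    """One grant in a hypothetical likeness-rights registry."""
    rights_holder: str
    licensee: str
    use: str          # e.g. "voice_clone", "endorsement", "film_likeness"
    expires_year: int

# Toy registry; a real one would be a queried, audited service.
REGISTRY = [
    License("estate-of-actor", "studio-x", "film_likeness", 2030),
    License("singer-a", "label-y", "voice_clone", 2027),
]

def is_authorized(rights_holder: str, licensee: str, use: str, year: int) -> bool:
    """Check whether a proposed use of someone's likeness is licensed."""
    return any(
        lic.rights_holder == rights_holder
        and lic.licensee == licensee
        and lic.use == use
        and year <= lic.expires_year
        for lic in REGISTRY
    )

print(is_authorized("singer-a", "label-y", "voice_clone", 2026))     # True
print(is_authorized("singer-a", "ad-network", "voice_clone", 2026))  # False
```

The studio, the label and the ad platform from the scenarios above would all hit the same lookup: no matching grant, no authorized use.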

As AI matures and spreads, and its tools become easier to use, the challenges around the use and misuse of images and brand IP will only grow more urgent. Generative AI did not create impersonation, but it lowered the cost, raised the quality and widened the radius of consequences.