How AI-Generated Music Became a $4 Billion Fraud Machine
Generative AI made it easy to produce and distribute fraudulent music at scale. A CISAC and PMP Strategy study projects that nearly 25% of creators’ revenues are at risk by 2028, potentially amounting to four billion euros.
In January 2025, Deezer became the first streaming platform to deploy its own AI-music detection at the platform level.
By the end of 2025, it had detected and tagged more than 13.4 million AI tracks, removed them from algorithmic recommendations and editorial playlists, stopped storing hi-res versions, and begun selling its detection technology to other platforms. The numbers it has published since provide the most precise picture the industry has of what is actually happening.
In April 2026, Deezer reported receiving 75,000 fully AI-generated tracks per day, 44% of all daily uploads and more than two million tracks per month. Of the streams those tracks generate, 85% are fraudulent. Thibault Roucou, Deezer’s head of streaming, stated it directly in Music Week: "Generating fake streams continues to be the main purpose for uploading AI-generated music."
Apple Music VP Oliver Schusser confirmed to The Hollywood Reporter that Apple Music demonetized two billion fraudulent streams in 2025 alone. According to a royalty calculator from law firm Manatt, Phelps & Phillips, that represents nearly $17 million in royalties that would otherwise have been diverted from legitimate artists.
Apple’s fraudulent-stream rate was less than half a percent of total streams. The rate looks small only because the denominator is enormous: the absolute number was still two billion plays.
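The scale implied by those two figures is easier to see as back-of-envelope arithmetic. A minimal sketch, assuming the reported numbers at face value; the per-stream royalty is implied by the Manatt estimate, not an official Apple rate:

```python
# Back-of-envelope check on the reported Apple Music figures.
fraud_streams = 2_000_000_000        # demonetized fraudulent streams, 2025
fraud_rate = 0.005                   # "less than half a percent" (upper bound)
royalties_reclaimed = 17_000_000     # ~$17M, per the Manatt calculator

# Implied per-stream royalty and the minimum total platform volume
# consistent with a sub-0.5% fraud rate.
per_stream = royalties_reclaimed / fraud_streams
total_streams_lower_bound = fraud_streams / fraud_rate

print(f"implied royalty per stream: ${per_stream:.4f}")            # ~ $0.0085
print(f"implied total streams, at least: {total_streams_lower_bound:,.0f}")
```

Two billion fraudulent plays at under half a percent of the total implies a platform moving at least 400 billion streams a year.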
How the Fraud Machine Gets Fed with AI Music
The supply side of that operation runs on tools like Suno. Two million paid subscribers generate seven million songs every single day, the equivalent of Spotify’s entire historical catalog every two weeks, according to internal investor documents obtained by Billboard.
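The catalog comparison checks out arithmetically. A quick sketch, assuming a Spotify catalog of roughly 100 million tracks (the figure Spotify has publicly cited; it is an assumption here, not from the Billboard documents):

```python
daily_songs = 7_000_000          # Suno output per day, per Billboard's reporting
spotify_catalog = 100_000_000    # assumed: Spotify's publicly cited catalog size

# How long Suno's users take to generate a Spotify-sized catalog.
days_to_match = spotify_catalog / daily_songs
print(f"days to generate a Spotify-sized catalog: {days_to_match:.1f}")
```

Roughly fourteen days, which is where the "every two weeks" claim comes from.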
Suno crossed $300 million in annual recurring revenue in February 2026 and closed a $250 million Series C at a $2.45 billion valuation in November 2025. The mechanics of how that supply feeds the fraud pipeline are now well documented: Fraudsters once uploaded a small number of tracks and ran bots to replay them repeatedly, generating obvious spikes that triggered detection.
They now use AI generators to flood platforms with millions of tracks and stream each one just a few thousand times, enough to generate royalties from each but not enough to trigger detection systems tuned for high-volume replay.
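The shift is easy to model. A detector that flags any track exceeding a per-track stream threshold catches the old pattern and misses the new one, even when total volume is identical. The threshold and stream counts below are illustrative assumptions, not platform figures:

```python
PER_TRACK_FLAG_THRESHOLD = 50_000   # hypothetical per-track anomaly threshold

def flagged_streams(catalog: dict[str, int]) -> int:
    """Sum of streams on tracks that exceed the per-track threshold."""
    return sum(n for n in catalog.values() if n > PER_TRACK_FLAG_THRESHOLD)

# Old pattern: a handful of tracks, bot-replayed a million times each.
old_scheme = {f"track_{i}": 1_000_000 for i in range(10)}    # 10M streams total

# New pattern: an AI-generated flood, a thousand streams per track.
new_scheme = {f"track_{i}": 1_000 for i in range(10_000)}    # same 10M total

print(flagged_streams(old_scheme))   # every stream sits above the threshold
print(flagged_streams(new_scheme))   # identical volume, nothing flagged
```

Same royalty payout, zero detections: spreading the volume across enough tracks pushes every individual track below any fixed per-track threshold.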
As Melissa Morgia, Chief Global Content Protection Officer at IFPI, told a panel on the sidelines of the seventeenth session of WIPO’s Advisory Committee on Enforcement in February 2025, AI is “the ultimate enabler” of streaming fraud because it lets bad actors stay under the radar while still operating at a scale that makes the activity lucrative.
Michael Smith ran a version of that operation for seven years. Using AI to generate hundreds of thousands of songs and bot networks spread across thousands of automated accounts, he was, at peak, generating royalties from more than 660,000 streams a day. He was convicted in March 2026 in the first federal criminal streaming fraud case in U.S. history and agreed to forfeit more than $8 million.
For tracks that do get flagged, a commercial ecosystem exists specifically to defeat detection. Undetectr markets itself as built to remove AI-generated artifacts from music across six processing dimensions: spectral correction, timing humanization, pitch variation, dynamic range restoration, noise floor normalization, and metadata cleanup. TrackWasher operates in the same niche.
They are commercial products with pricing pages, marketed openly to anyone generating music with Suno who wants it to pass distributor inspection. The detection and evasion layers are scaling in parallel.
How the Upload Layer Became an Open Invitation
The upload layer on every major streaming platform was designed to solve specific problems: copyright infringement, content moderation, quality thresholds. The question of whether a human made the music was never one of them. Paul Bender, musician and member of Hiatus Kaiyote, ran a direct test of what happens when that assumption breaks.
In 2025, through a project called Operation Clown Dump, he and collaborators deliberately generated the worst AI slop they could produce and uploaded it through standard music distributors to streaming services under each other’s real names. The success rate was 100%. One track was titled "Funky Bagpipes Is Why We Need Authentication (This Is Fraud)." It went through without a flag.
Warner Music Group settled its lawsuit against Suno in November 2025, with Suno retaining its full functionality including the ability for users to download and distribute songs freely. The settlement covered only Warner; Universal Music Group and Sony’s suits against Suno remain active.
Two months after the Warner deal, UMG chairman Lucian Grainge appeared to warn against firms "validating business models that fail to respect artists’ work."
When the System Protects the Fraudster
Copyright law assumes the entity filing a claim created the work it’s claiming. Murphy Campbell is a folk musician who makes traditional Appalachian ballads. In January 2026, she discovered songs on her Spotify profile she hadn’t uploaded: her voice, cloned through AI, her songs processed into something she described as a "bro-country singer." They were distributed under her name through a distributor called Vydia, owned by gamma, the music and media company founded by former Apple Music Global Creative Director Larry Jackson and backed by Apple, Eldridge and A24.
Vydia then filed copyright claims against Campbell’s original YouTube videos, the exact videos that had been scraped to clone her voice. YouTube’s Content ID processes initial claims automatically. Campbell lost monetization on her own content. She described the experience as being "in a weird limbo where I'm telling robots to take down music robots made." After Campbell's story went viral, Vydia withdrew every claim. Vydia's founder Roy LaManna issued a statement denying any connection between Vydia and the entity that uploaded the fake tracks, registered as Timeless Sounds IR, insisting the two incidents were entirely separate.
Rolling Stone documented the same mechanic targeting Paul Bender, Veronica Swift, and Grace Mitchell. Someone uploaded fake AI tracks to the Spotify page of Blaze Foley, a country-folk singer murdered in 1989. In each case the system processed the fraudulent upload the same way it processes everything else: automatically, without verification, on the assumption that whoever filed the claim made the thing. Sony Music requested the removal of more than 135,000 songs created to impersonate artists on its roster, a figure reported in early 2025 that the company itself said was likely a fraction of actual volume.
Taylor Swift filing three trademark applications with the U.S. Patent and Trademark Office on April 24 is the clearest public signal of where the legal layer stands: two sound marks covering her voice and one visual trademark tied to her Eras Tour look. Her legal team is reaching for trademark law because copyright law has a gap it was never designed to handle. Copyright protects what you made; AI can now replicate how you sound without touching any existing recording.
Trademark attorney Josh Gerben, who first spotted the filings, wrote that AI "now allows users to generate entirely new content that mimics an artist’s voice without copying an existing recording, creating a gap that trademarks may help fill."
Michael Pelczynski, Chief Strategy and Impact Officer at Voice-Swap, goes further: “A trademark can help define a claim, but AI voice protection requires infrastructure. One that can prove origin, manage authorization, and give talent control over how their voice is licensed, used or commercialized.”
What Spotify Is Building on Top of All This
Spotify holds recordings, listener interactions, stems, usage patterns, metadata and performance analytics on essentially the entire history of recorded music in commercial circulation. All of it was licensed for streaming, not for training generative models.
On the Q1 2026 earnings call, co-CEO Gustav Söderström stated that Spotify has “the capabilities and technologies we need” to build a derivatives business on top of existing artists’ music. He has not disclosed what those capabilities were trained on.
As Music Tech Policy documented in February 2026, creators currently have no disclosure, no audit trail, no licensing registry, no opt-in structure and no compensation framework regarding whether their work has already been absorbed into Spotify’s generative systems.
On April 30, Spotify announced Verified by Spotify, a new badge requiring sustained listener engagement, good standing with platform policies, and an identifiable presence off the platform including concerts, merchandise and linked social accounts. Profiles primarily representing AI-generated music are not eligible. The badge authenticates the artist, not the music. A verified human artist can still upload fully AI-generated tracks under their verified name. Spotify is drawing a line around identity while taking no position on content.
Why None of the Fixes Fix the Problem
Fraud at this scale requires three things to work: a cheap supply of content, a distribution channel with no meaningful gate, and an enforcement system that can be turned against the people it was built to protect. Generative AI provided the first. Every major streaming platform provided the second without ever designing for a world where it mattered. Copyright enforcement’s first-claim-wins logic provided the third. They are the same infrastructure, built on the same assumption, failing in the same direction at the same time.
On January 13, Bandcamp banned all music generated wholly or in substantial part by AI. On April 30, Believe and TuneCore blocked distribution of tracks made on what they called "pirate studios," naming Suno. Deezer started platform-level tagging in June 2025 and is now licensing its detection technology to collecting societies. Undetectr published a guide to bypassing TuneCore’s new screening the same week it launched. Over 75% of the most popular AI tracks on platforms are distributed through DistroKid, which allows AI music with no upload limits.
As Jongpil Lee, CEO of Neutune, an AI research lab building rights infrastructure for the music industry, puts it: “Detection identifies AI-generated content. Attribution answers what it was built from. Building that second layer is what makes licensing, compensation and transparency possible at scale.”
The CISAC study projects creators will lose nearly 25% of their revenues by 2028, potentially four billion euros. Deezer was receiving 10,000 AI tracks a day in January 2025. By April 2026, it was 75,000.