The Era of Stolen Musical DNA — Why 135,000 Deleted Songs Won't End This Crisis
Summary
Sony's mass deletion of 135,000 deepfake tracks is merely the opening salvo in the music industry's existential battle. With 60,000 AI-generated songs flooding streaming platforms daily, the line between authentic and counterfeit is dissolving — and the real crisis arrives when listeners stop caring about the difference.
Key Points
135,000 Deletions Are Just the Tip of the Iceberg
Sony Music's removal of 135,000 AI deepfake tracks from streaming platforms was officially confirmed in the March 18, 2026 IFPI Global Music Report as the largest takedown action in history. The targets included songs impersonating artists like Beyoncé, Harry Styles, and Queen. But placing this number in context reveals how dire the situation truly is. According to Deezer's official data, approximately 60,000 AI-generated songs are uploaded to streaming platforms every single day — a figure that surged from 10,000 in January 2025. Suno alone generates 7 million tracks per day, effectively reproducing Spotify's entire catalog every two weeks.
Deezer's analysis found that up to 85% of AI-generated music streams were classified as fraudulent and had their revenue clawed back, up from 70% the prior year. In 2025, 13.4 million tracks were detected and tagged as AI-generated. The criminal case of Michael Smith — who created hundreds of thousands of AI songs and used over 1,000 bot accounts to siphon more than $10 million — shows just how lucrative this fraud has become. A deletion-centric response is structurally incapable of keeping pace with AI generation speeds and is fundamentally unsustainable as a strategy.
Technological Defense Lines: Promising but Structurally Limited
Spotify's Artist Profile Protection, Binghamton University's MMMC (My Music My Choice), and South Korea's blockchain-based defense systems represent a rapidly growing arsenal of technical countermeasures. Spotify is beta-testing a tool that prevents AI-cloned tracks from being attributed to real artist profiles, allowing musicians to pre-approve or reject uploads and auto-pass legitimate releases through a unique 'artist key' code. MMMC, presented at the NeurIPS 2025 workshop, applies imperceptible modifications to audio waveforms so that any AI cloning attempt produces only distorted noise — a proactive defense approach tested across 150+ songs in multiple genres.
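The core idea behind MMMC — modifying a waveform so subtly that humans hear nothing while cloning models choke — can be illustrated with a toy sketch. This is not MMMC's actual algorithm (the real system optimizes the perturbation against a cloning model's loss); it is a minimal, hypothetical demonstration of the "imperceptible modification" constraint using bounded random noise, with all function names and parameters invented for illustration:

```python
import numpy as np

def add_protective_perturbation(waveform, epsilon=1e-3, seed=0):
    """Add a tiny bounded perturbation to an audio waveform.

    Illustrative only: systems like MMMC craft the perturbation
    adversarially against voice-cloning models; here plain bounded
    noise just shows how small the modification must stay to remain
    inaudible.
    """
    rng = np.random.default_rng(seed)
    # Perturbation magnitude capped at epsilon, far below audibility
    delta = rng.uniform(-epsilon, epsilon, size=waveform.shape)
    return np.clip(waveform + delta, -1.0, 1.0)

# 1 second of a 440 Hz sine tone at 16 kHz, normalized to [-1, 1]
sr = 16_000
t = np.arange(sr) / sr
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
protected = add_protective_perturbation(clean)

# Verify the change is negligible relative to the signal
residual = protected - clean
snr_db = 10 * np.log10(np.mean(clean**2) / np.mean(residual**2))
print(f"max |delta| = {np.abs(residual).max():.4f}, SNR = {snr_db:.1f} dB")
```

The signal-to-perturbation ratio here lands above 50 dB, well past the threshold of human hearing — which is exactly why a defense like this can protect a track without degrading the listening experience.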
In South Korea, six music rights organizations including KOMCA launched the K-Music Rights Organization Mutual Growth Committee on February 26, 2026, committing to build a unified blockchain-based infrastructure. However, these technologies are effective only against the current generation of AI music generators. As next-generation models emerge, detection rates could plummet dramatically. The defenders are structurally condemned to lag one step behind the attackers in this technological arms race.
Fragmented Legislation and the Global Regulatory Void
Tennessee's ELVIS Act (signed March 2024, effective July 2024) was the first U.S. law to explicitly protect against AI voice cloning, followed by similar bills in California, New York, Texas, and Illinois. At the federal level, the NO FAKES Act (S.1367) has been introduced in both chambers of Congress, seeking to establish the first federal right of publicity with statutory damages of $5,000 to $25,000 per violation and rights extending 70 years after death. The EU AI Act entered full enforcement in 2026, imposing transparency obligations on AI-generated music with penalties up to 35 million euros or 7% of global revenue.
South Korea's government has also committed R&D investment toward AI music defense. These are all steps in the right direction. But the fundamental problem is that AI deepfakes know no borders. Tennessee law cannot regulate a deepfake track generated in another country. When each nation applies different standards, the jurisdiction with the loosest regulations becomes a safe harbor for AI deepfake music. The absence of a unified global standard is the critical vulnerability that undermines every individual legislative effort.
Sleeping with the Enemy — The License-Based Coexistence Model
The November 2025 partnership between Warner Music and AI music platform Suno has drawn 'sleeping with the enemy' criticism, but it may represent the most pragmatic survival strategy available. This deal — the first of its kind in the industry — was struck simultaneously with a copyright lawsuit settlement. Under the agreement, Suno trains next-generation models using WMG's licensed catalog while guaranteeing artists and songwriters full control over their name, image, voice, and compositions. UMG has reached a similar settlement with Udio and plans to launch a license-based remix and mashup platform in 2026.
Historical precedent supports this approach: sampling culture in the 1990s was initially treated as illegal, but once licensing systems were established, hip-hop grew into a massive legitimate genre. Suno has rapidly scaled to 2 million paid subscribers and $300 million in annual recurring revenue, evidence of consumer demand that gives this model a roughly 40% chance of becoming the industry standard by 2027. The open question is whether this framework can extend protection to independent artists, not just major label rosters.
Audience Indifference Is the Real Crisis
No matter how sophisticated the technological defenses or legal frameworks become, every barrier crumbles the moment audiences stop distinguishing between human-made and AI-generated music. When the fake Drake track 'Heart on My Sleeve' surfaced in 2023, the public reaction was shock and outrage. Three years later, the response has shifted to something far more dangerous: 'So what if it's AI? The song's actually pretty good.' This mirrors the cultural shift that fueled fast fashion's rise: consumers adopting a 'close enough is good enough' mentality.
When music becomes something consumed based on 'does it sound good to me' rather than 'who made this,' the fundamental identity of music as an art form demands redefinition. This is a cultural problem that no technology or legislation can solve. The IFPI survey showed 69% of fans oppose unauthorized AI training, but that number could flip as familiarity breeds acceptance. The day listeners stop asking whether their favorite artist actually performed a track is the day the artist's voice is truly stolen.
Positive & Negative Analysis
Positive Aspects
- Industry Awakening and Immediate Action
Sony's 135,000-track purge sent a warning signal across the entire music industry. Streaming platforms like Spotify and Deezer have shifted from bystander mode to active gatekeeping. Deezer's classification of 85% of AI streams as fraudulent — and its subsequent revenue clawbacks — demonstrates genuine willingness to take substantive action against the AI music problem. This industry-wide awakening, however belated, is establishing minimum safety nets for artist protection in the short term.
- Rapid Advancement of Technical Defense Lines
Binghamton University's MMMC, Spotify's Artist Profile Protection, and South Korea's blockchain authentication systems are being developed in parallel, creating multiple layers of defense. Blockchain-based original authentication is a particularly compelling counter-intuitive approach — instead of 'catching fakes,' it 'proves authenticity.' This paradigm has the potential to fundamentally increase transparency in music rights management. The simultaneous development of both detection and authentication creates a far more robust defense architecture than relying on any single technology.
- Accelerating Legislative Progress
Since the ELVIS Act passed as the first AI voice protection law in the United States, similar bills have been pursued across multiple states, and the federal-level NO FAKES Act promises more comprehensive protection. South Korea's government R&D investment represents a leading edge in Asia. Once legal foundations are established, technical defenses gain the force of law, enabling real consequences for violations. The EU AI Act's penalties — up to 35 million euros or 7% of revenue — give these regulations serious teeth.
- Birth of New Business Models
The Warner Music-Suno partnership demonstrates the possibility of bringing AI music inside the regulatory perimeter and creating license-based revenue models. A structure where artists license their voice data for additional income creates an entirely new revenue stream on top of existing recording and streaming income. Just as sampling built hip-hop into a massive genre once proper licensing frameworks emerged, AI music could similarly give rise to new musical ecosystems on a legitimate foundation.
- Heightened Artist Rights Consciousness
The deepfake crisis has paradoxically elevated artists' awareness of their rights over their own voice and identity. Previously, copyright was primarily limited to composition and lyrics. Now, the concept of 'voice rights' as a new form of intellectual property is taking shape. This shift in consciousness will, over time, cultivate a culture where artists more proactively manage and protect their digital identities — setting precedents that extend well beyond music into every domain where personal likeness carries value.
Concerns
- The Overwhelming Speed Gap Between AI Generation and Deletion
With 60,000 songs uploaded daily, Sony's multi-month effort to delete 135,000 tracks amounts to barely clearing two days' worth of uploads. AI music generation technology is getting faster and cheaper, while detection and removal demand significant time and resources. In this asymmetric war, the defense side faces structurally unfavorable odds. Given the pace of generative AI advancement, today's detection technology could be obsolete tomorrow — making the whack-a-mole approach a losing strategy by design.
- Absence of Unified Global Regulation
The ELVIS Act is valid only in Tennessee, federal proposals like the NO FAKES Act only in the United States, and blockchain defenses only on Korean platforms. For AI deepfake tracks distributed across global streaming platforms and across borders, there is effectively a legal vacuum. The country with the loosest regulations becomes a sanctuary for AI deepfake music — a structural problem that individual national laws cannot solve. Negotiating a global treaty involves conflicting national interests and could take years, during which AI technology continues advancing unchecked.
- Existential Threat to Emerging Artists
Major-label artists with established names have access to legal resources and platform protections. But unsigned artists just starting out have no resources to fight back when their voices are cloned. They lack the litigation budgets, the platform leverage, and the public recognition that make enforcement possible. Independent artists are destined to be the biggest casualties of this crisis. Even the license-based coexistence model is designed around major label catalogs, leaving indie musicians with little coverage or recourse.
- The Commoditization of Music
As AI enables cheap mass production of music, the industry risks shifting its value system from 'who made this' to 'how cheaply can it be produced at scale.' From a streaming platform's perspective, inexpensive AI music filling out the catalog is actually economically advantageous — creating a structural incentive that actively worsens the problem. If music becomes a true commodity, artist motivation to create and the diversity of musical culture will both suffer severe damage. The art form itself is at stake.
- Deepening Platform Dependency and Structural Incentive Conflicts
Relying on streaming platforms as the primary guardians of artist protection introduces a fundamental conflict of interest. Platforms benefit from larger catalogs because more content means longer user engagement. This means they lack a strong economic incentive to fully exclude AI music. The tension between artist protection and platform revenue maximization cannot be resolved through self-regulation alone — external regulation must accompany any voluntary platform initiatives to create meaningful accountability.
Outlook
Looking at what's likely to unfold over the next few months, Sony's mass deletion will serve as the first domino in a chain reaction. Universal Music Group and Warner Music are almost certain to launch purge campaigns of comparable scale. I expect the cumulative number of AI deepfake deletions by the three major labels to exceed 500,000 tracks before the first half of 2026 is over. Once Spotify's Artist Profile Protection tool officially launches, competing platforms will scramble to introduce similar protective mechanisms, creating an entirely new market dynamic — what I'd call the 'artist protection arms race.' By this summer, we should start seeing Apple Music and YouTube Music positioning themselves with even stronger protection tools in a bid to attract artists away from rivals. The competitive pressure alone will force platforms to innovate faster than they would through regulation, turning artist protection into a genuine market differentiator for the first time in streaming history.
Deezer's classification of 85% of AI streams as fraudulent puts enormous pressure on every other platform. If Spotify and Apple Music conduct similar audits, the results could reveal that AI-generated music occupies a far larger share of the streaming market than anyone assumed. I believe 15 to 20 percent of the total streaming catalog may be AI-generated or AI-involved. If that number is officially confirmed, it will trigger an earthquake across the entire industry. An immediate reassessment of streaming revenue distribution structures would become unavoidable. Proposals to create a separate royalty pool for AI music — or exclude it from revenue sharing entirely — will land on negotiating tables within three to four months.
Specifically, Spotify could announce a policy within the third quarter of this year that slashes AI-generated track revenue share by 50 percent or more, or separates them into a distinct category. With per-stream payouts currently averaging $0.003 to $0.005, even a moderate restructuring of payouts for AI tracks would represent a seismic shift in how streaming economics work. The ripple effect of such a policy change would extend beyond Spotify itself — if the market leader draws a hard line, every competing platform will face pressure to follow suit or risk being seen as complicit in AI music fraud. Advertisers, too, would start demanding transparency about whether their brand placements appear alongside human-made or AI-generated content, adding yet another layer of commercial pressure.
As this extends into the one-to-two-year horizon, the landscape gets genuinely complicated. ELVIS Act-inspired bills will be introduced in at least 20 additional U.S. states, and the EU will likely pursue amendments expanding the AI Act's scope to explicitly address music copyright protection. But here's where the trouble starts: legislation is being written at least three to five times slower than AI technology evolves. Laws taking effect in early 2027 will have been drafted based on 2025-era AI capabilities, and by then, voice cloning technology will have advanced to a level that makes current tools look primitive. The NO FAKES Act (S.1367), with its $5,000 to $25,000 statutory damages per violation and 70-year post-mortem rights extension, represents the most ambitious federal attempt yet — but even if it passes, enforcement mechanisms across international borders remain largely theoretical.
The regional response landscape deserves particular attention. South Korea is investing government R&D budgets heavily into blockchain-based original authentication systems, making it the most aggressive player on the technological defense front. This urgency is partly driven by the fact that K-pop artists represent 53 percent of global deepfake pornography victims, according to a 2023 Security Hero report — a statistic that makes AI voice cloning a deeply personal issue for the Korean music ecosystem. The February 2026 launch of the K-Music Rights Organization Mutual Growth Committee, backed by KOMCA and five other rights organizations, signals a coordinated national effort that could serve as a model for other countries.
Japan is taking a legal-first approach through copyright law amendments, drawing on its long tradition of strong intellectual property protections. China is charting its own distinctive path — partially permitting AI music generation while mandating licenses for commercial use, effectively attempting to harness AI music's economic potential while maintaining state control over its deployment. Europe is attempting unified regulation under the EU AI Act framework, with its formidable penalties of up to 35 million euros or 7 percent of global revenue, but achieving consensus among 27 member states will take at least 18 months. In Latin America, Brazil plans to introduce AI content regulation legislation in the second half of 2026, but enforcement capacity remains its biggest challenge given the scale of informal digital economies across the region. This fragmentation in regional approaches means that jurisdictions with lax regulation will inevitably become bypass routes for AI deepfake music — a structural loophole that repeats itself across every geography.
The most important development to watch in the medium term is the rise of the license-based AI music model. The Warner Music-Suno partnership is the prototype, and I put the probability of this model becoming the industry standard by 2027 at roughly 40 percent. If it succeeds, artists would license their voice data, AI platforms would use those voices legally, and revenue would be shared according to agreed terms. The parallel to sampling culture is striking — what started as outright piracy in the early days of hip-hop evolved into a legitimate industry once licensing infrastructure was established. UMG's similar settlement with Udio, which will yield a license-based remix and mashup platform in 2026, adds further momentum to this trajectory. Suno's explosive growth — 2 million paid subscribers and $300 million in annual recurring revenue — demonstrates that consumer demand for AI music tools is not going away, making regulated coexistence the more pragmatic path than attempted suppression.
But simultaneously, the technological arms race between detection tools like Binghamton's MMMC and AI generation technology will intensify dramatically. Current detection tools identify today's generation of AI music with reasonable accuracy, but next-generation AI models could cause detection rates to plummet. The structural problem in this arms race is that defense always lags one step behind offense. South Korea's blockchain-based authentication system offers an intriguing alternative to this dynamic, because instead of 'detecting fakes,' it 'certifies the genuine article.' I believe this counter-intuitive approach could prove more effective in the medium term. I predict that over 50 percent of major streaming platforms will adopt blockchain-based original authentication systems by 2027. The beauty of this approach is that it sidesteps the arms race entirely — you don't need to keep up with evolving fakes if you can simply verify the real thing at the point of origin.
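The "certify the genuine rather than detect the fake" flow can be sketched in a few lines. This is a hypothetical toy, not any actual Korean system: an in-memory list stands in for the blockchain ledger, exact SHA-256 hashes stand in for the perceptual fingerprints a production system would need (exact hashes break the moment a file is transcoded), and all names (`register_track`, `verify_track`, `LEDGER`) are invented for illustration:

```python
import hashlib
import time

# Toy in-memory "ledger" standing in for a blockchain; a real system
# would anchor signed records on a distributed chain.
LEDGER = []

def register_track(audio_bytes: bytes, artist_id: str) -> dict:
    """Record a content fingerprint at the point of origin."""
    record = {
        "sha256": hashlib.sha256(audio_bytes).hexdigest(),
        "artist_id": artist_id,
        "registered_at": time.time(),
    }
    LEDGER.append(record)
    return record

def verify_track(audio_bytes: bytes):
    """Certify the genuine article: look up the fingerprint, or fail."""
    digest = hashlib.sha256(audio_bytes).hexdigest()
    return next((r for r in LEDGER if r["sha256"] == digest), None)

original = b"pcm-bytes-of-the-master-recording"
register_track(original, artist_id="artist-001")

assert verify_track(original) is not None              # master verifies
assert verify_track(b"ai-clone-of-the-track") is None  # clone has no record
```

The design choice is the point: the verifier never has to recognize what a fake sounds like, it only has to check whether a record exists — which is why this approach does not degrade as generation models improve.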
The real seismic shift, though, arrives in the three-to-five-year window. By around 2029, I expect 'music certification marks' to become as commonplace as organic food labels. Tags like 'Human-Made' or 'AI-Assisted' will appear on every streamed track, and a segment of consumers will opt for premium subscription models that exclusively feature human-created music. This mirrors the structure of today's organic food market — mass-produced affordable music coexisting alongside 'artisan-crafted' premium music. The premium tier could command subscription prices 30 to 50 percent higher than standard plans, creating a viable economic model for platforms that curate exclusively human-created catalogs.
In the long run, the music industry's structure itself is likely to bifurcate into two fundamentally different ecosystems. On one side, AI will dominate the mass production of background music, mood playlists, and functional music — everything from elevator music and meditation soundscapes to workout playlists and ambient cafe soundtracks. On the other, human artists will anchor a premium market built on unique emotional expression and personal narrative, bolstered by live performances, limited-edition physical releases, and NFT-based ownership models that provide verifiable provenance. The former will rapidly consume the 60 to 70 percent of the current streaming market occupied by 'passively consumed' background music, while the latter will evolve toward higher per-unit value but a smaller overall market size. This bifurcation will reshape career paths for musicians: the mid-tier artist who currently earns a modest living from streaming alone will face the most existential pressure, squeezed between AI's cost advantage in commodity music and the premium market's demand for genuine star power.
Let me walk through the scenarios in concrete terms. The bull case envisions a global AI music treaty, universal blockchain certification, and an established license-based model. In this scenario, artist revenue could increase by 30 percent or more compared to current levels, driven by voice licensing income and a streaming premium of two to three times the standard rate for certified 'Human-Made' music. Certified human-made tracks would command per-stream rates of $0.008 to $0.012 compared to the current $0.003 to $0.005, while voice licensing royalties would add an entirely new income stream estimated at $2,000 to $10,000 annually for mid-tier artists. Some projections estimate the global AI music licensing market could reach $12 billion annually by 2030. I assign this scenario a 25 percent probability.
The base case — which I consider most likely at 50 percent probability — sees different regulatory frameworks coexisting across regions. Major labels maintain revenue through license-based models, but a significant gap in independent artist protection persists. AI music captures 25 to 35 percent of the total streaming market, and per-stream rates for human artists decline by 15 to 20 percent. In a global music streaming market currently worth approximately $41 billion, over $10 billion would be redistributed toward AI-generated music. Average monthly streaming revenue for independent artists would drop from the current $150 to $300 range to roughly $100 to $200. The proportion of indie artists able to sustain themselves as full-time musicians would fall from the current 12 percent to below 7 percent. Live performance revenue would become even more critical, but venue economics and touring costs would prevent this from fully compensating for lost streaming income.
The bear case — also at 25 percent probability — sees regulatory efforts fail entirely, AI generation technology perfectly evade detection, and consumers become completely indifferent to the distinction between AI and human music. In this scenario, music becomes a true commodity where 'who made it' carries zero premium. Per-stream rates collapse to below $0.001, making it impossible for any artist to sustain a living through streaming alone. The per-track revenue would be so low that an artist would need over a million streams per month just to cover basic living expenses. Live performance would become the only viable revenue model, effectively returning the industry to a pre-recording-era economic structure. If this becomes reality, the very foundation of the music industry as we know it crumbles.
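The per-stream figures quoted across these three scenarios can be sanity-checked with simple arithmetic. The rates below are taken directly from the scenarios above (using the low end of each quoted range, with the bear case at its $0.001 upper bound), and the helper function is purely illustrative:

```python
def monthly_revenue(streams: int, rate_per_stream: float) -> float:
    """Gross streaming payout for a given monthly stream count."""
    return streams * rate_per_stream

# Per-stream rates drawn from the three scenarios in the text
scenarios = {
    "bull (certified human-made)": 0.008,  # low end of $0.008-$0.012
    "base (current average)":      0.003,  # low end of $0.003-$0.005
    "bear (commodity collapse)":   0.001,  # bear case sits below this
}

for name, rate in scenarios.items():
    payout = monthly_revenue(1_000_000, rate)
    print(f"{name}: 1,000,000 streams -> ${payout:,.0f}/month")
```

Under the bear case, a million monthly streams grosses under $1,000 before distributor and label cuts — consistent with the claim that streaming alone could no longer cover basic living expenses.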
Of course, I could be wrong. If quantum computing-based voice authentication technology emerges by 2027 and overcomes current detection limitations, an entirely different scenario opens up. Or a consumer-led 'human music movement' could take hold more powerfully than expected, creating social pushback against AI-generated music — not unlike the vinyl revival that nobody predicted would reach the scale it has today. History offers precedent for optimism: when the Napster crisis hit in the late 1990s, everyone declared the music industry dead, but the emergence of streaming actually drove growth beyond previous peaks. The IFPI reports that global recorded music revenue reached $31.7 billion in 2025 with 837 million paid subscribers — numbers that would have seemed impossible during the darkest days of piracy. There's always room for an innovation we haven't imagined yet.
More broadly, the AI deepfake music crisis doesn't exist in isolation. The advancement of voice cloning technology carries ripple effects across every voice-based industry — podcasts, audiobooks, voice assistants, call centers, and even telephony-based identity verification systems. A cloned voice that can fool a music listener can also fool a bank's voice authentication, making this a security issue that extends far beyond entertainment. The regulatory framework that the music industry establishes in this fight will set the template for the entire voice industry, which will in turn influence regulation of video deepfakes, AI-generated text, and the entire AI content ecosystem. What the music industry decides in this battle will serve as the litmus test for how society governs AI-generated content across every creative domain.
A word to readers: the next time you listen to your favorite artist, pause and ask yourself — 'Is this actually them?' The day you stop asking that question is the day their voice is truly stolen. And if possible, support your favorite artists through official channels and live performances. In the age of AI, that remains the most reliable way to stand behind the people who make the music you love. Five years from now, read this piece again. You'll know then whether I was too pessimistic — or not pessimistic enough.
Sources / References
- Sony removes a staggering 135,000 deepfake songs from music streaming services — TechRadar
- Spotify tests new tool to stop AI slop from being attributed to real artists — TechCrunch
- AI Slop Is Threatening Musicians. Can Tech Companies Stem the Tide? — Time
- 85% of AI-generated music streams have been demonetised: Deezer — DJ Mag
- Deepfake songs tool developed by researchers — TechXplore
- Korean Music Industry Groups Race to Build Blockchain Defense Against AI Exploitation — BabelFM
- AI Music Timeline: Fake Drake, Suno, Udio & Label Settlements — Billboard
- Spotify Hands Artists a New Weapon Against AI Voice Clones — WebProNews
- Global Music Report 2026: Global Recorded Music Revenues Grow 6.4% — IFPI
- Warner Music Group strikes 'landmark' deal with Suno; settles copyright lawsuit — Music Business Worldwide
- AI music generator Suno hits 2M paid subscribers and $300M in annual recurring revenue — TechCrunch
- Feds Score Guilty Plea in First-Ever U.S. Streaming Fraud Case — An $8M Scheme Aided by AI Music — Billboard
- NO FAKES Act of 2025 (S.1367) — Congress.gov
- Deezer says up to 85% of its AI-music streams are now fraudulent — Music Ally
- AI Music Settlement: Why Universal Music Group is Teaming with Udio — Rolling Stone