Spotify cracks down on AI abuse in music

Spotify is stepping up to protect artists from misuse of artificial intelligence — specifically voice cloning, spam uploads, and deceptive content. The company has announced new policies that focus on impersonation, music spam, and AI transparency in credits.

Spotify frames the issue plainly: “Unauthorised use of AI to clone an artist’s voice exploits their identity, undermines their artistry, and threatens the fundamental integrity of their work.” Artists may choose to license their voices to AI projects, the company says, but only when they control that choice.

The new impersonation policy is the clearest line yet drawn by a major streaming service. It bans vocal deepfakes unless an artist has explicitly authorised them, and sets rules to protect against fraudulent uploads to official profiles. The rise of generative AI has made both forms of abuse cheaper and easier to carry out at scale. Spotify’s move is as much about reassurance for creators as it is about trust for listeners.

Spam is another target. Spotify says that in the past 12 months it has removed more than 75-million spam uploads. These include tracks churned out in bulk by AI, duplicate uploads, and music designed to trick algorithms or inflate royalties. A new filtering system is being introduced to detect these patterns earlier and more effectively. It will be phased in to avoid penalising genuine creators, but the aim is to protect both the royalty pool and the listening experience.

Transparency is the third strand of the policy. Spotify is adopting a disclosure standard developed through the international body DDEX. It will allow creators and rights holders to state clearly whether AI was used in making a track, and how — whether in vocals, instrumentation or production. “The industry needs a nuanced approach to AI transparency, not to be forced to classify every song as either ‘is AI’ or ‘not AI’,” Spotify says. This allows artists to use AI as a tool without being mislabelled or misunderstood.

Behind these measures lies a bigger question: how should platforms respond to the dual forces of AI disruption and rights-holder pressure? On one side, AI opens creative possibilities and lowers barriers to entry. On the other, it threatens to swamp streaming libraries with synthetic content, confuse audiences about authenticity, and reduce royalties for working musicians. Spotify’s strategy is to stake out a middle ground — welcoming innovation, but under rules that keep artists and rights holders in control.

The scale of the challenge explains the urgency. With more than 100-million tracks on its platform, Spotify faces daily uploads in the hundreds of thousands. Even if only a fraction are spam or impersonations, they can distort playlists and revenue distribution and erode user trust. Rivals such as YouTube Music and Apple Music face similar issues, but Spotify’s announcement sets a benchmark that others may follow.

For South African artists, the changes carry particular weight. The local music scene has seen an explosion of global interest in genres like amapiano, Afro-house and gqom. These styles are distinctive and highly exportable — which also makes them attractive for copying and imitation. Without strong rules, an amapiano hit could be cloned by an anonymous creator, uploaded under false credentials, and draw streams away from the original.

Royalties are already a source of tension for South African musicians. Many argue that streaming payouts are too small to sustain careers unless artists achieve international success. Spam uploads and AI-generated content spread revenue even thinner. By promising stronger filtering and clearer credits, Spotify offers some assurance that originality and earnings are being defended.

The cultural dimension is as important as the financial one. South African music has always had influence beyond its borders, from township jazz in the mid-20th century to kwaito in the 1990s, and now amapiano on the world stage. If streaming libraries are flooded with unlabelled AI music, that cultural voice risks being drowned out. For many musicians, Spotify’s policies could be the difference between being recognised and being lost in the noise.

Listener trust is also at stake. Fans value authenticity, whether in the voice of an international star or in the rhythms of a township producer. If they feel unable to distinguish between human creativity and machine mimicry, their confidence in platforms could falter. Spotify’s disclosure standards aim to prevent that by making AI contributions visible and accountable.

At the same time, Spotify avoids banning AI outright. The company recognises that many musicians are experimenting with AI as part of their creative process. From beat-making assistants to mastering tools, the technology is becoming part of everyday workflows. By building disclosure into credits rather than prohibiting AI, Spotify makes the distinction clear: the issue is not AI itself but control and transparency.

The wider picture is that AI has forced the music industry to rethink the value of originality. Spotify’s new rules send a message: creativity is welcome, but identity theft, spam and deception are not. For musicians in South Africa and elsewhere, that message is overdue.