Artificial intelligence models can now clone a voice with just a few seconds of audio, fueling a surge of deepfake songs online and creating a growing crisis for musicians who don’t want their voices hijacked. Beyond the obvious intellectual-property concerns, voice cloning can cost artists revenue and take an emotional toll on those who put their heart and soul into their songs.
But researchers have a solution. In collaboration with the startup Cauth AI, faculty and students at Binghamton University, State University of New York, have developed My Music My Choice (MMMC), a digital safeguard that protects artists’ songs from generative AI cloning.
Consider this scenario: Bad Bunny has just released a new song, but suddenly the internet is flooded with countless studio-quality versions sung by famous and infamous people around the world, thanks to generative AI. With everyone able to produce a high-quality version of the song, even the most diehard Bad Bunny fans would be hard-pressed to tell the real track from a synthetic imitation.
Umur Aybars Ciftci, a research assistant professor in the First-Year Research Immersion Program at Binghamton University, and his collaborator, Ilke Demir, CEO and founder of Cauth AI, want to stop that from happening to today’s artists.
“Even though this AI technology has been developed for fun and entertainment, a lot of people are using it for nefarious purposes,” Ciftci said. “You can easily take someone's voice and make them sing something that they normally don't sing, or steal someone's songs and make it look like it is your song to begin with.”
My Music My Choice works by adding small, imperceptible changes to a song’s waveform. When you play the song back, the vocal sounds exactly the same to your ears. But when an AI model tries to replicate the song, it produces only distorted noise. From the AI model’s perspective, the slight shifts introduced by My Music My Choice make the protected audio sound like a completely different vocal track, and the model struggles to replicate it.
“Collaborating with disruptive startups like Cauth AI provides us with a unique vantage point into the frontline challenges of the industry, essentially bridging the gap between lab-scale concepts and industrial-scale impact. Our goal is to build a model that figures out exactly which tiny modifications to introduce so that people hear no difference at all, while AI voice-cloning systems are thrown off,” Ciftci said. “In other words, we’re trying to minimize the impact on human listeners while maximizing disruption for the machines.”
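The paper’s actual protection is learned by a trained model, and its details are not given here. As a toy illustration of the general adversarial-perturbation idea only (not the authors’ method), the sketch below adds a tiny, bounded change to a synthetic waveform using a single FGSM-style gradient step against a hypothetical stand-in “embedding” (a fixed random linear projection). The perturbation stays far below audibility by signal-to-noise ratio, yet measurably shifts what the stand-in model “hears”:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a voice-cloning model's embedding:
# a fixed random linear projection of the raw waveform.
W = rng.standard_normal((4, 16000))

def embed(x):
    return W @ x

# One second of a 440 Hz tone at 16 kHz stands in for a vocal track.
t = np.linspace(0, 1, 16000, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)

# FGSM-style step: nudge every sample by at most eps in the direction
# that most changes the embedding (gradient of ||embed(x)||^2 / 2 is W^T W x).
eps = 1e-3
grad = W.T @ embed(clean)
protected = clean + eps * np.sign(grad)

# The waveform is nearly unchanged for a human listener...
snr_db = 10 * np.log10(np.sum(clean**2) / np.sum((protected - clean)**2))
# ...but the stand-in model's view of the audio shifts.
shift = np.linalg.norm(embed(protected) - embed(clean))
```

A real system would optimize against actual cloning models under a perceptual (not just amplitude) constraint; this sketch only shows why a per-sample change bounded by `eps` can be inaudible while still moving a model’s internal representation.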
If you're a musician with a new track, he added, this is something you could apply to a song before releasing it to protect it from AI voice cloning.
The researchers tested the tool on 150 music tracks across multiple genres, and they will continue testing the system on larger data samples. They also want to compare My Music My Choice with similar methods, though Ciftci said there aren’t many out there.
Binghamton students Gerald Pena Vargas, Alicia Unterreiner, and David Ponce contributed to this research.
The paper, “My Music My Choice: Adversarial Protection Against Vocal Cloning in Songs,” was presented at the 39th Conference on Neural Information Processing Systems (NeurIPS 2025) Workshop: AI for Music.
Experimental study, presented 7-Dec-2025.