Earlier this year, Bad Bunny shot down rumors that he was dropping a new track with Justin Bieber. In an interview with TIME for a cover story on his rapid success, he straight-up said, “That’s fake. You never know what I’m going to do.” But just last month, a track that seemed to feature Bad Bunny and Bieber singing together made the rounds on TikTok and racked up millions of likes. Turns out, Bad Bunny wasn’t fibbing in the interview – the song was AI-generated. An artist called FlowGPT used AI tech to mimic the voices of Bad Bunny, Bieber, and Daddy Yankee in a reggaeton banger.
Bad Bunny himself couldn’t stand it, labeling the song as “sh*t” in Spanish and telling his fans not to bother listening. The TikTok clip got taken down. However, plenty of fans of the three big stars still enjoyed it. The song and the mixed reactions to it highlight the tricky impact of AI on the music scene. In the last few years, thanks to strides in machine learning, folks can now recreate the sounds of their musical heroes right from their living rooms.
Music producers turning to AI to speed up mixing
There’s this artist called Ghostwriter who blew up online by imitating Drake and The Weeknd. Then there’s another creator who playfully matched Frank Sinatra’s smooth voice with some explicit Lil Jon lyrics. Other AI tools let users whip up songs just by typing in prompts – essentially the audio equivalent of text-to-image tools such as DALL-E.
The ongoing debate over safeguarding artists, pushing boundaries in innovation, and figuring out how humans and machines work together in making music will be a topic of exploration for years to come.
Music producers are already turning to AI for the less glamorous aspects of their work. It can fix vocal pitch glitches and speed up mixing and mastering, making the process faster and cheaper. Even The Beatles hopped on the AI train, using it to extract John Lennon’s voice from a 1978 demo. They stripped away the instruments and background noise to create a brand-new, flawlessly produced song.
AI is also part and parcel of how many folks enjoy music: streaming services like Spotify and Apple Music depend on AI algorithms to recommend songs tailored to people’s listening preferences.
Music production through AI is both exciting and concerning
Then there’s the whole deal with making music using AI, which has stirred up both excitement and concern. Musicians are getting into tools like BandLab, which suggests musical loops based on prompts, serving as a creative outlet for overcoming writer’s block. The AI app Endel generates personalized, ever-changing soundtracks for concentrating, chilling, or snoozing, taking cues from people’s preferences and biometric data. And other AI tools whip up entire recordings from nothing more than a text prompt.
There’s also a fresh YouTube tool backed by Lyria, Google DeepMind’s AI music-generation model. You can type in something like “A ballad about how opposites attract, upbeat acoustic,” and bam, you get an instant snippet of a song sung by someone who sounds a lot like Charlie Puth.
These technologies stir up a bunch of worries. If AI can whip up a “Charlie Puth song” in a snap, what does that mean for Charlie Puth himself, or for aspiring musicians who worry about being edged out? Should AI companies be free to train their models on songs without the creators’ say-so? AI is even being used to resurrect the voices of the deceased – for instance, a new Edith Piaf biopic will feature an AI-reconstructed version of her voice. How will our views on memory and legacy shift if any historical voice can be brought back to life?
Does the music industry need AI regulations?
Even the folks most pumped about the tech are starting to feel uneasy. Just last month, Edward Newton-Rex, the VP of audio at Stability AI, stepped down, saying he worried he had been part of a movement pushing musicians out of work.
“Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works,” he wrote in a public letter.
These issues are probably going to play out in court over the next few years. Back in October, big players like Universal Music Group sued the startup Anthropic when its AI model Claude 2 started regurgitating copyrighted lyrics word for word. A Sony Music bigwig told Congress they’ve sent out nearly 10,000 requests to take down unauthorized vocal deepfakes.
Will AI music progress beyond mimicking artists?
Loads of artists just want to steer clear of this altogether. Dolly Parton, for example, labeled AI vocal clones as “the mark of the beast.” On the flip side, AI companies are pushing back, claiming that their use of copyrighted songs falls under “fair use,” comparing it more to tributes, parodies, or cover songs.
The singer-songwriter Holly Herndon is one of the artists trying to stay ahead of these big shifts. In 2021, she made her own vocal deepfake called Holly+, letting anyone morph their voice into hers.
She clarifies that the aim of the project isn’t to pressure other artists into giving up their voices. Instead, it’s about nudging them to actively engage in these broader discussions and assert control in a music industry where tech giants are gaining more and more influence. “I think it’s a huge opportunity to rethink what the role of the artist is,” she told TIME. “There’s a way to still have some agency over the digital version of yourself, but be more playful and less punitive.”
Lex Dromgoole, a musician who co-founded the AI company Bronze, is optimistic that AI music will progress beyond mimicking singers’ voices and instantly churning out tunes. In recent years, Bronze has teamed up with artists such as Disclosure and Jai Paul to craft AI versions of their music that keep changing, so you never hear the exact same thing twice. The aim isn’t to use AI to produce a flawless, marketable fixed song, but to use it to challenge our ideas about what music can truly be.