The game console is singing the baby to sleep in the other room. You’re listening as Alexa counts down the weekly top 10 — when it suddenly dawns on you that your favorite tunes are no longer made by musicians. Instead, hit-making algorithms are filling your playlist. Yep, your favorite artists are robots.
This isn’t a scene from an episode of Black Mirror. Generative music is already a reality, and it’s enabling devices in ways that are changing how music is both consumed and composed.
Hatsune Miku, a singing voice synthesizer, has released more than 100,000 songs, performed in sold-out 3D concerts worldwide, amassed 900,000 fans on Facebook and landed corporate collaborations with Sega, Toyota USA, Google and more.
Last year, Endel, a Berlin-based startup, partnered with Warner Music Group, which signed the startup's algorithm to a major-label deal, the first of its kind. And just this month, three-time platinum recording artist Travis Scott, whose 2018 LP Astroworld gave him his first No. 1 debut on the Billboard 200 and a Grammy nomination, had his sound duplicated by an artificial intelligence model trained on the rapper's music; the resulting track is Jack Park Canny Dope Man.
Last August, Sony CSL Paris unveiled an AI tool that adds kick-drum beats to preexisting songs. Lil Miquela, a computer-generated 19-year-old Brazilian American model, musical artist and influencer created in 2016 who has more than 1 million Instagram followers, closed a $125 million investment round led by Spark Capital in January. And Mubert, founded in 2015 as the world's first generative streaming service, already has 200,000 users.
Alexander Lerch, head of the Music Informatics Group at the Center for Music Technology at Georgia Tech, believes this trend will only grow.
“From a technological point of view, we have more and more powerful tools that support composers or even try to replace composers,” Lerch says. “I think there will be a lot of implications, especially if you think about things like producers’ animated music. Those I would go as far as to say would get replaced by machines.”
Initially, generative music was thought of as a composition specified not by a score but by an algorithm. The concept was first applied back in the 1950s by gradually changing the simplest of musical motifs over time. Composers like John Cage, Steve Reich and Terry Riley would take recordings of the same phrase and play them back at different speeds, letting the copies drift out of phase, thus creating a very basic rule-based generative system: the resulting sound was never deliberately composed.
It was Brian Eno, with the release of his album Generative Music 1, in 1996, who popularized the concept. Unlike his predecessors, Eno removed all human elements from the creative process, while still achieving a unique system-made sound. Now when people refer to generative music, they mean music that is ever-different and ever-changing, created by a system.
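The system-driven idea Eno popularized can be sketched in a few lines of Python. Two loops of coprime lengths played together produce a combined pattern that keeps shifting far longer than either loop repeats on its own; the note names and loop contents below are purely illustrative, not drawn from any real piece.

```python
# A minimal sketch of a rule-based generative system in the spirit of
# tape-loop phasing: two short melodic loops of coprime lengths sound
# simultaneously, so their combination keeps changing until it cycles
# at the loops' lowest common multiple.
from math import lcm

loop_a = ["C", "E", "G"]           # 3-step loop
loop_b = ["A", "B", "D", "F#"]     # 4-step loop

def generate(steps):
    """Return the pair of notes sounding at each step."""
    return [(loop_a[i % len(loop_a)], loop_b[i % len(loop_b)])
            for i in range(steps)]

# The combined pattern only repeats every lcm(3, 4) = 12 steps,
# even though each individual loop repeats far sooner.
period = lcm(len(loop_a), len(loop_b))
sequence = generate(period * 2)
```

With longer, non-synchronized loops (Eno used tape loops of slightly different physical lengths), the combined texture effectively never repeats within a human listening session, which is the "ever-different and ever-changing" quality described above.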
“He [Eno] certainly gets credit for coming up with the name and what I consider to be the rules of generative music in that it does not end,” says Alex Bainter, a part-time generative music programmer who runs the site Generative.fm.
Today’s advancements are Eno’s vision amplified: complex data-fed systems that can emulate and even fool listeners into thinking they’re hearing the spark of human creativity — something that has confounded computer experts for years. Technological advancements have made it possible, says Briana Brownell, a technology data scientist and CEO of Pure Strategy. “For a long time, a lot of these computer-generated music pieces and the algorithms needed a super powerful computer to really understand the science behind how it actually works,” she says. “But now there are so many tools out there that anyone can use them.”
Just as we witnessed the digitization of music take us from CDs to MP3s, a similar evolution is taking place in the streaming era. There are new opportunities for algorithms and computer-based programming to occupy the spaces artists once did. Algorithmic technology also allows streaming platforms to personalize music and playlists and even to predict what listeners want to hear.
Alfred Darlington, also known as Daedelus, a professor of electronic production and design at Berklee College of Music in Boston, says that society's willingness to give up more information is a big part of it as well.
“Ultimately, I think it comes down to a willingness by producers, composers, etc., to give up themselves more,” Darlington says. “We have a rich dataset out there that these AI systems can mine to find similarities, links, hidden agendas that we don’t perceive.”
Generative music will continue to grow in sectors where a dedicated composer is not essential: movie and television scores, gaming and licensing for online video.
But the advent of computer-composed scores does not ring a death knell for musicians. Though artists are leaning more on previous hits to help predict future No. 1s, a generatively composed song has yet to chart.
“I’m less likely to believe that this type of technology will replace artists,” says Apostolos Zervos, CEO of Akazoo, a global, on-demand music streaming subscription company. “My position is that over time what this will probably do is enhance the tools that artists have to complement their artistic endeavor rather than [to] replace.”
As streaming continues to dominate the music industry's revenue, with artists making less and subscriptions steadily climbing, there is no question that generative music and AI will play a bigger role in how artists approach their music in the future. That may not thrill everyone.
But, hey, if a robot can get a baby to sleep …