In an interview from late 2019, musician Grimes proclaimed that 'we're approaching the end of human art, once there's Artificial General Intelligence, they're going to be so much better at making art than us.' The backlash to this statement was understandable: AI is already costing blue-collar workers their jobs, and the thought of something as personal and as moving as music being made by machines is perhaps a step too far for many. But this ignores the reality that AI and music share a long history.
Alan Turing, the famed forebear of computer science, built a machine in 1951 that generated three simple melodies. Even that great musical adventurer David Bowie began playing with a digital lyric randomiser in the 1990s to find inspiration. And in a sign of things to come, a music theory professor trained a computer programme to write new compositions in the style of Bach; when the pieces were played to an audience alongside a genuine Bach piece, listeners had difficulty telling the two apart.
Thus, it should come as no surprise that artists such as Holly Herndon have been pushing the envelope further. Her 2019 album Proto used machine learning to enable an AI named Spawn to produce a range of voices, including her own and that of a larger choir, which alongside a stack of synths created a new world of music and blurred the lines between AI and humanity.
AI's ability to blur the line between machine and human has led to software like Amper and Endel, which enables non-musicians to create everything from background music for film, TV or video games to personalised soundscapes. In Amper's case, the creator tells the programme the genre, mood and tempo they want for their piece. Endel takes the weather and the listener's heart rate, physical activity and circadian rhythms into consideration when generating gentle music designed to help people sleep, study or relax. Amper has proven so popular that its creators have announced a consumer-friendly interface that anyone can use.
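For the technically curious, the core idea behind such tools can be sketched in a few lines. The sketch below is purely illustrative and is not Amper's or Endel's actual system: it simply shows how a couple of high-level choices (mood, tempo) can drive the generation of a melody. The mood names, scales and function are all hypothetical.

```python
import random

# Hypothetical sketch (not Amper's real API): high-level parameters
# stand in for musical expertise when generating a simple melody.
MOOD_SCALES = {
    "uplifting": [60, 62, 64, 67, 69],   # C major pentatonic (MIDI note numbers)
    "melancholy": [57, 60, 62, 64, 67],  # A minor pentatonic
}

def generate_melody(mood: str, tempo_bpm: int, bars: int = 4, seed: int = 0):
    """Pick notes from a mood-appropriate scale; return (notes, seconds per beat)."""
    rng = random.Random(seed)
    scale = MOOD_SCALES[mood]
    notes = [rng.choice(scale) for _ in range(bars * 4)]  # 4 beats per bar
    return notes, 60.0 / tempo_bpm

notes, beat = generate_melody("uplifting", tempo_bpm=120)
print(len(notes), beat)  # 16 notes, half a second per beat
```

Real products layer far more sophistication on top (instrumentation, structure, mixing), but the principle is the same: the user supplies intent, the software supplies the notes.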
Of course, AI can do far more in music production. Los Angeles band YACHT trained a machine learning system on their entire catalogue for their most recent album, Chain Tripping. The machine spat out hours of melodies and lyrics based on what it had learned, and the band sifted through the output, splicing the most intriguing fragments into coherent songs. The result was an album with a genuinely distinctive sound, one that earned the band a Grammy nomination for Best Immersive Audio Album.
CJ Carr and Zack Zukowski, the duo behind AI death metal act Dadabots, have likewise shown how AI can produce convincing music from a handful of samples. In their case, this meant feeding the AI short segments of music, a few seconds at a time. As training went on, the AI learned to identify features of the music and began producing more detailed output, including riffs and transitions. While some parts of the music don't sound entirely human (guitars playing at a tempo too fast for the average person), to the untrained ear it is very convincing.
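The underlying idea, learning which sound tends to follow which from short excerpts, can be illustrated with a toy model. Dadabots' actual system is a neural network; the sketch below swaps that for simple sample-to-sample transition counts, and uses a synthetic sine wave in place of real recordings, purely to show the learn-then-generate loop.

```python
import math
import random
from collections import defaultdict

def quantize(x, levels=16):
    """Map an audio sample in [-1, 1] to one of `levels` discrete bins."""
    return min(levels - 1, int((x + 1.0) / 2.0 * levels))

# Synthetic "training audio": a sine wave stands in for real riffs.
audio = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(2000)]

# Count which quantized sample value follows which (a crude stand-in
# for what a neural network learns from short segments of music).
counts = defaultdict(lambda: defaultdict(int))
for prev, cur in zip(audio, audio[1:]):
    counts[quantize(prev)][quantize(cur)] += 1

def generate(start, n, seed=0):
    """Walk the learned transitions to produce n new quantized samples."""
    rng = random.Random(seed)
    out, state = [], start
    for _ in range(n):
        nxt = counts.get(state)
        state = rng.choice(list(nxt)) if nxt else state
        out.append(state)
    return out

print(generate(8, 10))  # ten generated sample values, each in 0..15
```

A real system models thousands of samples of context rather than one, which is what lets it reproduce riffs and transitions instead of raw texture, but the feed-in, learn, generate cycle is the same.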
All of this, of course, raises a question. If someone followed Dadabots' example and set out to create a song in the style of a band such as Metallica by feeding Metallica songs into a machine learning system, who would own the final product?
In the UK, under existing copyright law, the owner of a work is generally the person who created it, which leaves the Metallica example unresolved. Does the AI-generated song belong to the person who wrote the AI programme, since they created the source code that makes it work; to the person who fed the samples into the AI, and thus arguably authored the work; or to Metallica, who wrote the songs the AI learned from?
Similar issues exist in the US, where copyright law does not use the word 'human', leaving the law open to interpretation on AI content produced 'in the style of' an artist. As some legal experts note, a court is unlikely to find against an AI or its creator for producing an 'in the style of' song, because the resulting song is not an original work of the artist it imitates.
Furthermore, for a copyright claim itself to succeed, an artist has to prove that someone deliberately copied their song or songs to produce their own. With AI that would be difficult to prove, because reverse engineering a neural network to see what songs it was fed is hard: ultimately the trained model is just a collection of numerical weights and configurations. Additionally, while artists can sue one another for failing to credit them on their songs, a company could protect its AI by claiming it is a trade secret, forcing the artist to fight in court to discover how the programme works, a costly process that might yield little.
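That "collection of numerical weights" point is worth seeing concretely. In this toy sketch (a deliberately tiny stand-in for a real music model), a model is trained on data generated by the rule y = 2x + 1, yet all that survives training is two floating-point numbers; nothing in the stored parameters names or contains the original inputs.

```python
def train_linear(xs, ys, lr=0.01, steps=1000):
    """Fit y = w*x + b by per-sample gradient descent; return the learned weights."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in zip(xs, ys):
            err = (w * x + b) - y  # prediction error on this sample
            w -= lr * err * x
            b -= lr * err
    return w, b

# The "training material" is the rule y = 2x + 1; after training,
# the inputs have vanished into two opaque floats (close to 2 and 1).
w, b = train_linear([1, 2, 3, 4], [3, 5, 7, 9])
print(w, b)
```

Scale that up to millions of weights in a music-generating network and the forensic problem becomes clear: the songs an AI was trained on cannot simply be read back out of the model.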
Grimes was wrong to suggest that AI will replace humans in the creation of art. It seems instead that we are looking at a future where humans and AI work together to produce unique and fascinating pieces of music. That AI has the potential to open up what is sometimes an exclusive industry to the masses should be celebrated, and though potential legal issues lurk in the shadows, this should not dissuade people from experimenting and polishing their craft. The more people push boundaries, the more we shall see what works and what doesn’t. Such is the course of evolution.