'Fraud Music' or Genius?

AI in the music industry.

AI music is fraudulent and should be banned from all streaming platforms.

This is a stance that Universal Music Group have taken when it comes to the use of AI in the industry. The question is: If AI is being used for music creation to any extent, does it make the artist complicit in an act of fraudulence? It certainly raises questions of authenticity and originality. And whether or not you agree with UMG’s stance, they are understandably rattled by the use of AI tools.

AI is being adopted in a number of ways when it comes to creating music. Firstly, there are tools that assist with composition and songwriting. By analysing rhythm, melody and harmony patterns, AI is able to compose new pieces at a rapid rate. Applying the knowledge gained from past libraries of music, these tools are able to create new and original tracks, which (I’m not going to lie) are actually quite good; some noteworthy ones – Mubert, Soundful, Loudly. Then you have tools that take simplicity to a whole new level. For example, Beatbot allows you to create short songs based on text prompts. This is a gamechanger for people like me - I am not as clued up on the heavy technicalities of music production. I went on and prompted it with ‘folk, electronic, disco, funk with a bit of classical’ and within seconds, five samples were generated. As you’d imagine, there were some slightly weird sounds but they definitely were not dreadful and - as far as I’m aware - pretty original. If talented musicians used tools like this to their advantage, it could revolutionise the music world.

Beatbot Interface
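
For the technically curious: Beatbot doesn’t publish how it works under the hood, but open text-to-music models like Meta’s MusicGen are built on the same prompt-in, audio-out idea. A minimal sketch using the Hugging Face transformers pipeline might look like the below - the model choice and prompt are just what I’d try, so treat it as an illustration of the general approach rather than a recipe for any particular product.

```python
# Sketch: text-prompt-to-music with an open model (Meta's MusicGen via the
# Hugging Face transformers pipeline). Illustrative only - this is not how
# Beatbot itself works, just the same general idea.
from transformers import pipeline
from scipy.io import wavfile

# A small, openly available text-to-music model
synth = pipeline("text-to-audio", "facebook/musicgen-small")

# The same kind of prompt I gave Beatbot
prompt = "folk, electronic, disco, funk with a bit of classical"
result = synth(prompt, forward_params={"do_sample": True})

# The pipeline returns raw audio plus its sampling rate; save it as a WAV
wavfile.write("sample.wav", rate=result["sampling_rate"], data=result["audio"])
```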

Then you have AI production assistance – mixing, mastering and refining audio tracks. Platforms like LANDR and iZotope generate professionally mastered versions of songs within minutes, making them release-ready for streaming platforms like Spotify and Apple Music.
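
LANDR and iZotope keep their pipelines to themselves, but one building block of automated mastering is simple enough to show: measuring a mix’s loudness and normalising it towards a streaming-friendly target. Here’s a rough sketch using the open-source pyloudnorm library - the file names and the -14 LUFS target are my own assumptions, loosely in line with common streaming guidelines.

```python
# Sketch: one small piece of what automated mastering services do -
# loudness normalisation towards a streaming target. Illustrative only.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("my_mix.wav")            # the unmastered mix (assumed file)

meter = pyln.Meter(rate)                      # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)    # measured loudness in LUFS

# Normalise towards roughly -14 LUFS, a common streaming reference level
mastered = pyln.normalize.loudness(data, loudness, -14.0)

sf.write("my_mix_mastered.wav", mastered, rate)
```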

Finally, arguably the most contentious area is sample and sound generation: AI-powered tools that can create new sounds, instruments and samples for musicians to incorporate into their compositions. The most significant of these are the vocal AI plugins – technology that uses AI to produce, mimic or manipulate vocal recordings. I’ve touched on voice generators previously, but in the context of music this is particularly juicy! Creating a track using the sound of your favourite singer’s voice opens up a can of worms and will potentially have huge repercussions.

In April, the release of ‘Heart on My Sleeve’ wrought havoc across the music industry for this very reason. A song using deepfake vocals of Drake and The Weeknd was uploaded onto the web by an anonymous user known as Ghostwriter. Within 48 hours it had gone viral, gaining millions of streams. However, it was swiftly taken down from streaming platforms, condemned for using AI-generated content that infringed on the artists’ work. If I’m being honest, I think what freaked the industry out most was the fact that the song (can confirm) was actually a banger! It also clearly showed an appetite for this kind of music, and although in this instance the top players in the industry were able to throw their legal weight around and have it taken down, that will become increasingly difficult as more and more content is produced at speed. So, how much can the industry contain it?

Ghostwriter – ‘Heart on My Sleeve’

One woman nailing it in this particular space is Holly Herndon (see Spotlight). She is one of the few musical artists out there who see AI tools as an advantage, not a threat. She launched Holly+, an AI-powered instrument that allows other people to sing in her voice. By allowing others to make music with her voice, she claims she is reclaiming agency and creative autonomy. If anyone can rap like Drake or sing like The Weeknd, then, she argues, it is more crucial than ever to give artists the power to determine what happens to the likeness of their voice. To build Holly+, she recorded a variety of phrases across her entire vocal range; the AI then learned to replicate the sonic properties of her voice, generating entirely new vocals that mirror Herndon’s unique style - a process she refers to as ‘spawning’. I’ve watched her do a demo where she’s singing in multiple different languages and it’s exceptionally cool.

Of course, there are legal and ethical questions around this. I’d be lying if I said I wasn’t tempted to see whether I could sing in the vocal range of Aretha Franklin or rap like Little Simz. But what would the repercussions be? Would I be helping to put my favourite artists out of business? And more broadly, would fans still pay to see Beyoncé live if there were a million people who could sing like her? Herndon also tackles these questions, arguing that AI will create new opportunities for musicians, whether that’s through the creation of innovative new music or by finding new forms of collaboration. She also has a vision to make the training data consensual and the new capabilities of AI mutually beneficial: others would be able to use the artist’s voice, whilst the artist maintains sovereignty over it and over the work derived from it. Adding this layer of consent to the training data may mean that artists no longer need to fear being replaced by AI tools and can start to work with the tech instead.
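
Herndon’s consent layer is a vision rather than a published system, but the basic mechanic is easy to picture: a registry where an artist records - and can revoke - the terms under which their voice model may be used. The sketch below is purely illustrative; the fields, names and numbers are my assumptions, not anything Holly+ actually implements.

```python
# Sketch: a toy 'consent registry' for voice models. Entirely hypothetical -
# it only illustrates the idea of artists keeping sovereignty over their voice.
from dataclasses import dataclass, field

@dataclass
class Grant:
    """Terms an artist sets for third-party use of their voice model."""
    commercial_release: bool = False     # can the result be sold/streamed?
    revenue_share: float = 0.0           # artist's cut of any earnings
    attribution_required: bool = True    # must the derived work credit them?

@dataclass
class VoiceConsentRegistry:
    grants: dict = field(default_factory=dict)

    def grant(self, artist: str, terms: Grant) -> None:
        self.grants[artist] = terms

    def revoke(self, artist: str) -> None:
        # Sovereignty: the artist can withdraw consent at any time
        self.grants.pop(artist, None)

    def may_release(self, artist: str) -> bool:
        terms = self.grants.get(artist)
        return bool(terms and terms.commercial_release)

registry = VoiceConsentRegistry()
registry.grant("Holly Herndon", Grant(commercial_release=True, revenue_share=0.5))

print(registry.may_release("Holly Herndon"))   # True: consent is on record
print(registry.may_release("Drake"))           # False: no consent, no release
```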

