Artificial intelligence is starting to change the music industry. In some cases, artists are using AI to create parodies of famous singers. A model trained on existing recordings can generate new content in the style of an established artist – a new song that sounds as if it were sung by Michael Jackson or Kurt Cobain, for instance.
In other cases, people use artificial intelligence to create background music or to add elements and instruments to their own tracks. A singer-songwriter who plays guitar and sings all of their songs might, for example, use AI to add drums and other instrumentation. This lets them create much more complex music, even if they don’t know how to play those other instruments.
What problems will this create?
This is going to raise a lot of legal questions. For one thing, how can people use the music they’ve created? Can they sell a song made with AI? What material was the AI trained on, and was that use permitted? And if someone makes a “parody” song using another artist’s voice, should they earn royalties from it?
When it comes to lyrics, the picture looks more problematic than with the music itself. An AI model may generate backing instrumentation that is unique, even if it is modeled on real-world playing. But lawsuits have already been filed – including one brought by the massive Universal Music Group – targeting AI models trained on copyrighted song lyrics.
The outcome of these court cases will help define how artificial intelligence is used in the music industry moving forward. Those involved should keep a close eye on these developments to see how they affect their legal rights.