With artificial intelligence expanding into numerous industries, the music industry too is witnessing innovation that could dramatically transform it in the not-so-distant future.
OpenAI has created a neural network called Jukebox, trained on raw audio data from 1.2 million songs spanning genres such as teen pop, heavy metal, hip-hop, and country.
Video games, for one, have already started using computer-generated music capable of playing loops and building crescendos based on what a player is doing in the game.
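The adaptive-music idea described above can be sketched very simply: a game tracks its state and layers looped stems in or out as the action changes. This is an illustrative toy (real games typically use audio middleware such as FMOD or Wwise); the function name, state keys, and stem names are all hypothetical.

```python
def choose_music_layers(player_state: dict) -> list[str]:
    """Pick which looped music stems to play for the current game state.

    Illustrative sketch only; `player_state` keys and stem names
    are invented for this example.
    """
    layers = ["ambient_pad"]  # base loop, always playing

    if player_state.get("enemies_nearby", 0) > 0:
        layers.append("percussion")  # add tension as enemies approach

    if player_state.get("in_combat", False):
        layers.append("brass_crescendo")  # full cue during combat

    return layers


# Example: exploring quietly vs. mid-fight
print(choose_music_layers({"enemies_nearby": 0, "in_combat": False}))
print(choose_music_layers({"enemies_nearby": 3, "in_combat": True}))
```

An engine would call a function like this every few beats and crossfade the returned stems, which is what makes the score feel continuous rather than abruptly switched.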
Jukebox has reached the point where, given a few seconds of chords and the name of an artist such as Elvis Presley, Katy Perry, or Nas, it can autocomplete the rest of the song. There is one catch: the algorithm is currently much better at producing orchestral classics than rock and roll, because the mathematical models AI researchers use today map more naturally onto the structure of orchestral music.
Part of what makes this so difficult is scale: modeling a song means handling millions of audio timesteps, whereas a piece of writing runs only to the thousands of tokens handled by a text model like OpenAI's language generator GPT-2.
AI voice models, too, have become strikingly good at mimicking artists' voices, which has led to legal action by Jay-Z against deepfakes that showed him singing Billy Joel's songs. Neural networks haven't yet reached the finesse to beat humans entirely, but they get better at it every day.
Another area that needs greater clarity is how intellectual property rights will shape AI in the future; the blurring boundary between human and machine authorship makes it a pressing concern.