Google has unveiled a machine learning tool that can generate music tracks on its own from entered text. MusicLM is not available to everyone yet, but Google has put a research paper and samples online.
In the paper, Google describes MusicLM as a hierarchical sequence-to-sequence machine learning model. The tool can generate 24kHz music tracks lasting several minutes from a text prompt. Besides text, the tool can also generate music from hummed or whistled melodies, or in response to a photo or painting. As an example, Google shows a painting by Salvador Dalí from which MusicLM composes its own piece.
The tool itself cannot be used by everyone yet. Google has, however, put samples online on a separate site, along with the corresponding prompts. These are descriptions such as: "Slow tempo, reggae song driven by bass and drums. Continuous electric guitar. High pitched with ringing notes. Vocals are relaxed with a relaxed feel, very expressive." MusicLM can also create multi-minute songs in a so-called story mode, in which the prompt describes what should happen at different points in the song.
Google trained the tool on a dataset of 280,000 hours of music. Alongside the tool, Google has also made a dataset called MusicCaps publicly available to researchers. This dataset consists of 5,500 music clips paired with descriptions written by musicians. In the paper, Google does not address copyrighted material or how the tool handles it.