Brain2Music: Google can tell music in your head from brain imaging data

The experiment, carried out by a team of researchers from Google and Osaka University in Japan, is the first of its type. The AI technology, named Brain2Music, works by analyzing brain imaging data from people who are listening to music. After analyzing a person’s brain activity, the AI creates a song that matches the genre, rhythm, mood, and instrumentation of the music being listened to.

The AI pipeline received brain imaging data from a technique known as functional magnetic resonance imaging (fMRI). This imaging technique can detect regional, time-varying changes in brain activity. In other words, fMRI can determine which areas of the brain are activated, and when, while a person listens to music.

What is Brain2Music?

Participants in the experiment listened to 15-second music excerpts from blues, classical, country, disco, hip-hop, jazz, and pop. After the data was collected, an AI model was trained to find correlations between features of the music and the participants’ brain activity. The researchers then fed this information into a Google AI model called MusicLM, which turned the imaging data into music emulating the original song snippets.
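The article does not spell out the exact decoding step, but a common approach in studies like this is to learn a regularized linear map from fMRI responses to a music-embedding space, and then condition a generative model on the predicted embeddings. The sketch below illustrates that idea in Python; the array shapes, the synthetic data, and the use of ridge regression are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical dimensions: one flattened voxel vector per 15-second clip,
# paired with a fixed-size music embedding for that clip.
n_clips, n_voxels, embed_dim = 500, 10_000, 128

# Synthetic stand-ins for real fMRI scans and music-clip embeddings.
fmri_responses = np.random.randn(n_clips, n_voxels)
music_embeddings = np.random.randn(n_clips, embed_dim)

# Learn a regularized linear mapping from brain activity to the
# music-embedding space (a rough analogue of the decoding step).
decoder = Ridge(alpha=1.0)
decoder.fit(fmri_responses, music_embeddings)

# For a new scan, predict an embedding that a generative music model
# (MusicLM, in the study) could be conditioned on to produce audio.
new_scan = np.random.randn(1, n_voxels)
predicted_embedding = decoder.predict(new_scan)
print(predicted_embedding.shape)  # -> (1, 128)
```

Ridge regression is a natural choice in this setting because there are far more voxels than training clips, so an unregularized fit would badly overfit; again, this is a sketch of the general technique rather than the published method.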

Google revealed the experimental MusicLM earlier this year; it works by converting text descriptions into music. “The agreement, in terms of the mood of the reconstructed music and the original music, was around 60 percent,” study co-author Timo Denk, a software engineer at Google in Switzerland, told Live Science. “The method is pretty robust across the five subjects we evaluated,” Denk said. “If you take a new person and train a model for them, it’s likely that it will also work well.”
