Neuroscientists decoded a song from brain activity
Electrical signals between brain cells encoded the song's rhythm and harmony
Scientists just achieved a new mind-reading feat. They plucked a song from people’s brains.
In a new experiment, electrodes recorded people's brain activity as they listened to a song. From those recordings, a computer could then produce noises that sounded like the song the people had heard. This shows that music can be decoded from brain activity.
Researchers shared their work August 15 in PLOS Biology.
In the past, scientists have used electrodes and brain scans to decode words and even whole thoughts from people’s brain activity. The new study adds music to the mix. It also reveals how different brain areas pick up various parts of sound.
This work could someday improve communication devices for people who can’t speak.
Music to brain activity and back again
To decode a song, researchers tracked the brain activity of 29 people. All had electrodes implanted in their brains. Those electrodes had been placed to pinpoint the source of their epilepsy. That’s a condition where someone has recurring seizures. But in this experiment, the researchers found an additional use for the electrodes. They used them to eavesdrop on the electrical signals moving between brain cells, or neurons.
The participants were staying in the hospital so that doctors could monitor their epilepsy. For part of that time, they listened to a song: Pink Floyd’s 1979 hit “Another Brick in the Wall.”
Neurons responded to the song — especially in parts of the brain involved in processing sound. Electrodes detected neural signals associated with hearing lyrics. They also picked up signals linked with rhythm, harmony and other aspects of music.
With that information, the team built a computer model to create sounds based on brain activity. And it was able to produce noises that sounded like the original song.
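The study's actual pipeline is more elaborate, but the core idea, a regression model that maps electrode signals onto a sound's spectrogram, can be sketched in a few lines. Everything below (the random data, the channel and frequency-band counts, the ridge penalty) is invented for illustration and is not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins, NOT the study's data:
# X: neural features -- 1,000 time points from 50 electrode channels
# S: the song's spectrogram -- power in 16 frequency bands per time point
n_times, n_channels, n_bands = 1000, 50, 16
W_true = rng.normal(size=(n_channels, n_bands))  # hidden "true" mapping
X = rng.normal(size=(n_times, n_channels))
S = X @ W_true + 0.1 * rng.normal(size=(n_times, n_bands))

# Fit a linear, ridge-regularized decoder: find W so that X @ W ~ S
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ S)

# Reconstruct the spectrogram from brain activity alone
S_hat = X @ W

# How close is the reconstruction to the real spectrogram?
r = np.corrcoef(S.ravel(), S_hat.ravel())[0, 1]
print(f"reconstruction correlation: {r:.2f}")
```

In real decoding work, the reconstructed spectrogram would then be converted back into audio, which is roughly how the "noises that sounded like the original song" were produced.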
“It’s a real tour de force,” says Robert Zatorre. This neuroscientist works at McGill University in Montreal, Canada. He did not take part in the study.
“You’re recording the activity of neurons directly from the brain,” Zatorre says. So “you get very direct information about exactly what the patterns of activity are.”
Music on the mind
The new study also highlights which parts of the brain respond to different aspects of music. Take one known as the superior temporal gyrus, or STG. It’s found in the lower middle of each side of the brain. Activity in one part of the STG got stronger at the onset of specific sounds, such as when a guitar note played. Another area of the STG ramped up its activity during singing.
The STG on the right side of the brain — but not the left — seemed crucial to decoding music. The researchers tried running their computer model without information from that area. When they did, the model couldn’t recreate the song nearly as well.
“Music is a core part of human experience,” says Ludovic Bellier. This neuroscientist was part of the research team. He works at the University of California, Berkeley.
Bellier has been playing instruments since he was six. “Understanding how the brain processes music can really tell us about human nature,” he says. “You can go to a country and not understand the language, but be able to enjoy the music.”
Probing how the mind perceives music is difficult, though. That’s because the areas that process music and sounds are deep inside the brain and hard to access.
Zatorre wonders if the existing model, trained on just one song, could decode other things from brain activity. “Does [it] work on other kinds of sounds, like a dog barking or phone ringing?” he asks.
The goal is to someday decode and generate many types of sounds, Bellier says. That could improve devices that turn thoughts into sound for people who are paralyzed, have brain lesions or have other conditions that make it hard to talk.
In the nearer term, adding musical elements of speech, such as pitch, into such devices could help people better express themselves.