Using a cappella to explain speech and music specialization

February 27, 2020

Speech and music are two fundamentally human activities that are decoded in different brain hemispheres. A new study used a unique approach to reveal why this specialization exists.

Researchers at The Neuro (Montreal Neurological Institute-Hospital) of McGill University created 100 a cappella recordings, each of a soprano singing a sentence. They then degraded the recordings along two fundamental auditory dimensions, spectral and temporal modulation, and asked 49 participants to identify either the words or the melody of each song. The experiment was run with separate groups of English and French speakers to enhance reproducibility and generalizability. A demonstration is available here: https://www.zlab.mcgill.ca/spectro_temporal_modulations/
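The degradation idea can be sketched crudely: smooth a spectrogram along the time axis to blur temporal modulations, or along the frequency axis to blur spectral modulations, then resynthesize the audio. The toy sketch below (using NumPy/SciPy; the `degrade` function and its parameters are illustrative and not the authors' actual modulation-filtering pipeline) shows the principle:

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter1d

def degrade(audio, fs, axis, smooth=15):
    """Crudely degrade one modulation dimension of a signal.

    axis=0 smooths across frequency bins (spectral degradation);
    axis=1 smooths across time frames (temporal degradation).
    """
    f, t, Z = stft(audio, fs, nperseg=1024)
    mag, phase = np.abs(Z), np.angle(Z)
    # Moving-average filter along one axis of the magnitude spectrogram
    mag = uniform_filter1d(mag, size=smooth, axis=axis)
    _, out = istft(mag * np.exp(1j * phase), fs, nperseg=1024)
    return out

# Example: degrade a 1-second 440 Hz tone temporally
fs = 16000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
y = degrade(x, fs, axis=1)
```

In the study itself, filtering was applied in the modulation domain; this one-axis smoothing is only a rough stand-in to convey why blurring time hurts word recognition while blurring frequency hurts melody recognition.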

They found that for both languages, when the temporal information was distorted, participants had trouble distinguishing the speech content, but not the melody. Conversely, when spectral information was distorted, they had trouble distinguishing the melody, but not the speech. This shows that speech and melody depend on different acoustical features.

To test how the brain responds to these different sound features, the participants were then scanned with functional magnetic resonance imaging (fMRI) while they distinguished the sounds. The researchers found that speech processing occurred in the left auditory cortex, while melodic processing occurred in the right auditory cortex.

Music and speech exploit different ends of the spectro-temporal continuum

Next, they set out to test how degradation in each acoustic dimension would affect brain activity. They found that degradation of the spectral dimension only affected activity in the right auditory cortex, and only during melody perception, while degradation of the temporal dimension affected only the left auditory cortex, and only during speech perception. This shows that the differential response in each hemisphere depends on the type of acoustical information in the stimulus.

Previous studies in animals have found that neurons in the auditory cortex respond to particular combinations of spectral and temporal energy, and are highly tuned to sounds that are relevant to the animal in its natural environment, such as communication sounds. For humans, both speech and music are important means of communication. This study shows that music and speech exploit different ends of the spectro-temporal continuum, and that hemispheric specialization may be the nervous system's way of optimizing the processing of these two communication methods.

Solving the mystery of hemispheric specialization

"It has been known for decades that the two hemispheres respond to speech and music differently, but the physiological basis for this difference remained a mystery," says Philippe Albouy, the study's first author. "Here we show that this hemispheric specialization is linked to basic acoustical features that are relevant for speech and music, thus tying the finding to basic knowledge of neural organization."
The results were published in the journal Science on Feb. 28, 2020. The study was funded by a Banting fellowship to Albouy and by grants to senior author Robert Zatorre from the Canadian Institutes of Health Research and the Canadian Institute for Advanced Research. The a cappella recordings were made with the help of McGill University's Schulich School of Music.

The Neuro

The Neuro (The Montreal Neurological Institute-Hospital) is a world-leading destination for brain research and advanced patient care. Since its founding in 1934 by renowned neurosurgeon Dr. Wilder Penfield, The Neuro has grown to be the largest specialized neuroscience research and clinical center in Canada, and one of the largest in the world. The seamless integration of research, patient care, and training of the world's top minds make The Neuro uniquely positioned to have a significant impact on the understanding and treatment of nervous system disorders. In 2016, The Neuro became the first institute in the world to fully embrace the Open Science philosophy, creating the Tanenbaum Open Science Institute. The Montreal Neurological Institute is affiliated with McGill University and is a Killam Institution. The Montreal Neurological Hospital is part of the Neuroscience Mission of the McGill University Health Centre. In 2020, The Neuro launched its largest fundraising campaign in history, Brains Need Open Minds.
