How the brain distinguishes between voice and sound

July 17, 2019

Is the brain capable of distinguishing a voice from the specific sounds it utters? In an attempt to answer this question, researchers from the University of Geneva (UNIGE), Switzerland, in collaboration with Maastricht University in the Netherlands, devised pseudo-words (words without meaning) spoken by three voices with different pitches. Their aim? To observe how the brain processes this information when it focuses either on the voice or on speech sounds (i.e. phonemes). The scientists discovered that the auditory cortex amplifies different aspects of the sounds depending on the task being performed: voice-specific information is prioritised when differentiating voices, while phoneme-specific information is prioritised when differentiating speech sounds. The results, published in the journal Nature Human Behaviour, shed light on the cerebral mechanisms involved in speech processing.

Speech has two distinguishing characteristics: the voice of the speaker and the linguistic content itself, including speech sounds. Does the brain process these two types of information in the same way? "We created 120 pseudo-words that comply with the phonology of the French language but that make no sense, to make sure that semantic processing would not interfere with the pure perception of the phonemes," explains Narly Golestani, professor in the Psychology Section at UNIGE's Faculty of Psychology and Educational Sciences (FPSE). These pseudo-words all contained phonemes such as /p/, /t/ or /k/, as in /preperibion/, /gabratade/ and /ecalimacre/.
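The press release does not describe how the 120 pseudo-words were actually constructed, but the general idea can be sketched in a few lines of Python: assemble nonsense words from consonant-vowel syllables so that every word contains at least one of the target phonemes /p/, /t/ or /k/. The syllable inventory below is a toy assumption, not the researchers' actual French phonotactic rules.

# Toy sketch only: build nonsense words from consonant-vowel syllables,
# guaranteeing one of the target phonemes /p/, /t/ or /k/ in each word.
# The inventory is a simplification, not the study's French phonotactics.
import random

TARGET_ONSETS = ["p", "t", "k"]
OTHER_ONSETS = ["b", "d", "g", "m", "r", "l"]
VOWELS = ["a", "e", "i", "o", "u"]

def make_pseudoword(n_syllables=4, rng=random):
    """Build one nonsense word containing at least one /p/, /t/ or /k/ onset."""
    target_pos = rng.randrange(n_syllables)   # position of the guaranteed target phoneme
    syllables = []
    for i in range(n_syllables):
        onsets = TARGET_ONSETS if i == target_pos else TARGET_ONSETS + OTHER_ONSETS
        syllables.append(rng.choice(onsets) + rng.choice(VOWELS))
    return "".join(syllables)

rng = random.Random(0)
print([make_pseudoword(rng=rng) for _ in range(5)])   # prints nonsense items like 'tikapota'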

The UNIGE team recorded the voice of a female phonetician articulating the pseudo-words, which they then converted into voices of different pitches, from lower to higher. "To make the differentiation of the voices as difficult as the differentiation of the speech sounds, we created the percept of three different voices from the recorded stimuli, rather than recording three actual different people," continues Sanne Rutten, researcher at the Psychology Section of the FPSE of the UNIGE.
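The release does not name the tool used to derive the three voice percepts from the single recording. Purely as an illustration, a pitch-shifting step of the kind described could look like the following sketch, which assumes the librosa library and a hypothetical file name.

# Illustrative only: create three voice "percepts" from one recording by pitch shifting.
# "pseudoword.wav" and the semitone shifts are assumptions, not the study's settings.
import librosa

y, sr = librosa.load("pseudoword.wav", sr=None)                   # original female recording
voice_low  = librosa.effects.pitch_shift(y, sr=sr, n_steps=-3)    # lower-pitched voice
voice_mid  = y                                                    # original pitch
voice_high = librosa.effects.pitch_shift(y, sr=sr, n_steps=+3)    # higher-pitched voice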

How the brain distinguishes different aspects of speech

The scientists scanned their participants using functional magnetic resonance imaging (fMRI) at high magnetic field (7 Tesla). This method makes it possible to observe brain activity by measuring blood oxygenation in the brain: the more oxygen an area needs, the more that area is being used. While being scanned, the participants listened to the pseudo-words: in one session they had to identify the phonemes /p/, /t/ or /k/, and in another they had to say whether the pseudo-words had been read by voice 1, 2 or 3.

The teams from Geneva and the Netherlands first analysed the pseudo-words to better understand the main acoustic parameters underlying the differences in the voices versus the speech sounds. They examined differences in frequency (high / low), temporal modulation (how quickly the sounds change over time) and spectral modulation (how the energy is spread across different frequencies). They found that high spectral modulations best differentiated the voices, and that fast temporal modulations along with low spectral modulations best differentiated the phonemes.
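For readers who want a concrete picture of this kind of acoustic analysis, the following sketch estimates a modulation power spectrum from a spectrogram: a 2-D Fourier transform of the log spectrogram separates energy by temporal modulation rate and spectral modulation density. The file name and parameters are illustrative assumptions, not the study's actual settings.

# Sketch of a modulation analysis: 2-D FFT of a log-magnitude spectrogram,
# giving energy as a function of temporal and spectral modulation.
import numpy as np
import librosa

y, sr = librosa.load("pseudoword.wav", sr=None)             # hypothetical stimulus
hop = 256
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=hop))     # magnitude spectrogram
mps = np.abs(np.fft.fftshift(np.fft.fft2(np.log1p(S))))     # modulation power spectrum

# Modulation axes: spectral modulation in cycles per Hz, temporal modulation in Hz
spec_mod = np.fft.fftshift(np.fft.fftfreq(S.shape[0], d=(sr / 2) / S.shape[0]))
temp_mod = np.fft.fftshift(np.fft.fftfreq(S.shape[1], d=hop / sr))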

The researchers subsequently used computational modelling to analyse the fMRI responses, namely the brain activation in the auditory cortex when processing the sounds during the two tasks. When the participants had to focus on the voices, the auditory cortex amplified the higher spectral modulations; for the phonemes, it responded more to the fast temporal modulations and to the low spectral modulations. "The results show large similarities between the task information in the sounds themselves and the neural (fMRI) data," says Golestani.
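The release does not detail the computational model itself. One common way to relate stimulus acoustics to voxel responses is a voxel-wise encoding model; the sketch below assumes ridge regression and hypothetical feature and response matrices, and is not the authors' actual pipeline.

# Hedged sketch of a voxel-wise encoding analysis: regress each voxel's response
# onto the stimuli's modulation features, separately per task, then compare weights.
import numpy as np
from sklearn.linear_model import RidgeCV

def fit_encoding_model(X, Y):
    """X: (n_trials, n_features) stimulus features; Y: (n_trials, n_voxels) fMRI responses.
    Returns an (n_voxels, n_features) matrix of fitted feature weights."""
    weights = np.zeros((Y.shape[1], X.shape[1]))
    for v in range(Y.shape[1]):
        model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X, Y[:, v])
        weights[v] = model.coef_
    return weights

# weights_voice   = fit_encoding_model(X_voice_task, Y_voice_task)      # hypothetical inputs
# weights_phoneme = fit_encoding_model(X_phoneme_task, Y_phoneme_task)  # hypothetical inputs
# Comparing the two weight maps would show which modulations each task amplifies.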

This study shows that the auditory cortex adapts to a specific listening mode. It amplifies the acoustic aspects of the sounds that are critical for the current goal. "This is the first time that it's been shown, in humans and using non-invasive methods, that the brain adapts to the task at hand in a manner that's consistent with the acoustic information that is attended to in speech sounds," points out Rutten. The study advances our understanding of the mechanisms underlying speech and speech sound processing by the brain. "This will be useful in our future research, especially on processing other levels of language - including semantics, syntax and prosody, topics that we plan to explore in the context of a National Centre of Competence in Research on the origin and future of language that we have applied for in collaboration with researchers throughout Switzerland," concludes Golestani.
-end-


Université de Genève
