Read my lips: Using multiple senses in speech perception

February 11, 2009

When someone speaks to you, do you see what they are saying? We tend to think of speech as something we hear, but recent studies suggest that we use a variety of senses for speech perception: the brain treats speech as something we hear, see, and even feel. In a new report in Current Directions in Psychological Science, a journal of the Association for Psychological Science, psychologist Lawrence Rosenblum describes research examining how our different senses blend together to help us perceive speech.

We receive a great deal of speech information through visual cues such as lip-reading, and this kind of visual speech occurs across all cultures. Nor does the information come only from the lips: when someone speaks to us, we also register movements of the teeth, tongue, and other facial features. Human speech perception has likely evolved to integrate many senses. Put another way, speech is not meant just to be heard, but also to be seen.

The McGurk Effect is a well-characterized example of the integration between what we see and what we hear when someone is speaking to us. This phenomenon occurs when a sound (such as a syllable or word) is dubbed onto a video of a face articulating a different sound. For example, the audio may be playing "ba" while the face looks as though it is saying "va." Confronted with this mismatch, we usually hear "va" or a combination of the two sounds, such as "da." Interestingly, the McGurk Effect still occurs even when study participants are aware of the dubbing or are told to concentrate only on the audio. Rosenblum suggests this is evidence that once the senses are integrated, they cannot be separated.

Recent studies indicate that this integration occurs very early in the speech process, even before phonemes (the basic units of speech) are established. Rosenblum suggests that the physical movements of speech (that is, our mouths and lips moving) create acoustic and visual signals that have a similar form. He argues that, as far as the speech brain is concerned, auditory and visual information are never really separate. This could explain why we integrate speech so readily, and in such a way that the audio and visual speech signals become indistinguishable from one another.

Rosenblum concludes that visual-speech research has a number of clinical implications, especially in the areas of autism, brain injury, and schizophrenia, noting that "rehabilitation programs in each of these domains have incorporated visual-speech stimuli."
-end-
For more information about this study, please contact: Lawrence D. Rosenblum (rosenblu@citrus.ucr.edu)

A video is available on YouTube: http://www.youtube.com/watch?v=jtsfidRq2tw.

Psychological Science is ranked among the top 10 general psychology journals for impact by the Institute for Scientific Information. For a copy of the article "Speech Perception as a Multimodal Phenomenon" and access to other Psychological Science research findings, please contact Barbara Isanski at 202-293-9300 or bisanski@psychologicalscience.org.

Association for Psychological Science
