
Whether our speech is fast or slow, we say about the same

January 17, 2017

The purpose of speech is communication, not speed -- so perhaps some new research findings, while counterintuitive, should come as no surprise. Whether we speak quickly or slowly, the new study suggests, we end up conveying information at about the same rate, because faster speech packs less information in each utterance.

The study suggests we tend to converse within a narrow channel of communication data rates, so that we do not provide too much or too little information at a given time, said Uriel Cohen Priva, author of the study in the March issue of Cognition and assistant professor in the Department of Cognitive, Linguistic and Psychological Sciences at Brown University.

"It seems the constraints on how much information per second we should transmit are fairly strict, or stricter than we thought they were," Cohen Priva said.

In information theory, rarer word choices convey greater "lexical information," while more complicated syntax, such as the passive voice, conveys greater "structural information." To stay within the channel, those who talk quickly speak with more common words and simpler syntax, while those with a slower pace tend to use rarer, more unexpected words and more complicated wordings, Cohen Priva found.
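
To make "lexical information" concrete: in information theory, a word's information content is its surprisal, the negative log of its probability, so rarer words carry more bits. The Python sketch below illustrates the idea with an invented unigram frequency table; it is a toy illustration of the concept, not the study's actual pipeline.

```python
import math

# Toy unigram counts standing in for a real corpus; these numbers are
# invented for illustration, not taken from Switchboard or Buckeye.
counts = {"the": 60000, "good": 1500, "splendid": 40}
total = sum(counts.values())

def surprisal_bits(word):
    """Lexical information of a word: -log2 of its relative frequency."""
    return -math.log2(counts[word] / total)

for w in ("the", "good", "splendid"):
    print(f"{w}: {surprisal_bits(w):.2f} bits")
# Rarer words carry more bits: 'splendid' scores far higher than 'the'.
```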

The study provides only hints about why a constrained information rate might govern conversation, Cohen Priva said. It could derive either from a speaker's difficulty in formulating and uttering too much information too quickly or from a listener's difficulty in processing and comprehending speech delivered at too fast a pace.

Analyzing speech

To conduct the study, Cohen Priva analyzed two independent troves of conversational data: the Switchboard Corpus, which contains 2,400 annotated telephone conversations, and the Buckeye Corpus, which consists of 40 lengthy interviews. In total, the data included the speech of 398 people.

Cohen Priva made several measurements on all that speech to determine each speaker's information rate -- how much lexical and structural information they conveyed in how much time -- and the speech rate -- how much they said in that time.

Deriving meaningful statistics required complex calculations to determine the relative frequency of words, both on their own and given the words that preceded and followed them. Cohen Priva compared how long speakers in general take to say each word on average with how long a particular speaker took. He also measured how often each speaker used the passive voice compared to the active voice, and in all the calculations accounted for each person's age, gender, the speech rate of the other member of the conversation, and other possible confounds.
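
As a rough illustration of the kind of per-speaker bookkeeping described above, the sketch below computes a speech rate (words per second) and an information rate (bits per second) from hypothetical word-plus-duration annotations of the sort the Buckeye corpus provides. All tokens, counts, and timings here are invented, and the study's conditioning on neighboring words and its confound controls are omitted.

```python
import math
from collections import Counter

# Hypothetical annotated speech: (word, duration_in_seconds) pairs for one
# speaker. Real corpora like Buckeye provide such timing; these are invented.
speaker_tokens = [("so", 0.18), ("the", 0.12), ("weather", 0.35),
                  ("was", 0.15), ("splendid", 0.52)]

# Unigram counts from some reference corpus (again, toy numbers).
ref_counts = Counter({"so": 900, "the": 6000, "weather": 120,
                      "was": 2500, "splendid": 4})
ref_total = sum(ref_counts.values())

def surprisal_bits(word):
    """Bits of lexical information carried by one word."""
    return -math.log2(ref_counts[word] / ref_total)

total_bits = sum(surprisal_bits(w) for w, _ in speaker_tokens)
total_time = sum(d for _, d in speaker_tokens)

print(f"speech rate:      {len(speaker_tokens) / total_time:.2f} words/sec")
print(f"information rate: {total_bits / total_time:.2f} bits/sec")
```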

Ultimately he found across the two independent dimensions -- lexical and structural -- and the two independent data sources -- Switchboard and Buckeye -- that the same statistically significant correlation held: the faster the speech, the less information each word and construction carried, so the overall information rate stayed roughly level.
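
A hedged sketch of how such a correlation could be checked: given per-speaker summaries of speech rate and mean bits per word, a Pearson test reports the direction and significance of the relationship. The numbers below are invented to mimic the reported negative trend, not taken from the study.

```python
from scipy.stats import pearsonr

# Invented per-speaker summaries: (words per second, mean bits per word).
# Real values would come from the corpus measurements described above.
speech_rate   = [3.1, 3.8, 4.2, 4.9, 5.5, 6.0]
bits_per_word = [9.4, 8.9, 8.1, 7.6, 7.0, 6.5]

r, p = pearsonr(speech_rate, bits_per_word)
print(f"r = {r:.2f}, p = {p:.4f}")  # strongly negative on this toy data
```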

"We could assume that there are widely different capacities of information per second that people use in speech and that each of them is possible and you can observe each and every one," Cohen Priva said. "But had that been the case, then finding these effects would have been very difficult to do. Instead, it's reliably found in two corpora in two different domains."

Do gender differences offer a clue?

Cohen Priva found a key difference involving gender that might offer a clue about why conversation has an apparently constrained information rate. It may be a socially imposed constraint for the listener's benefit.

On average, while both men and women exhibited the main trend, men conveyed more information than women at the same speech rate. There is no reason to believe that the ability to convey information at a given rate differs by gender, Cohen Priva said. Instead, he hypothesizes, women may tend to be more concerned with making sure their listeners understand what they are saying. Other studies, for example, have shown that in conversation women are more likely than men to "backchannel," or provide verbal cues like "uh huh" to confirm understanding as the dialogue proceeds.

Cohen Priva said the study has the potential to shed some light on the way people craft their utterances. One hypothesis in the field is that people choose what they intend to say and then slow their speech as they utter more rare or difficult words (e.g. if harder, then slower). But he said his data is consistent with a hypothesis that the overall speech rate dictates word choice and syntax (e.g. if faster, then simpler).

"We need to consider a model in which fast speakers consistently choose different types of words or have a preference for different types of words or structures," he said.

In other words, what one says appears related to how quickly one says it.
-end-


Brown University
