How speech propels pathogens

October 02, 2020

Speech and singing spread saliva droplets, a phenomenon that has attracted much attention in the current context of the Covid-19 pandemic. Scientists from the CNRS, the Université de Montpellier, and Princeton University* sought to shed light on what takes place during conversations.

A first study, published in PNAS, revealed that the direction and distance of the airflow generated when speaking depend on the sounds produced. For example, a succession of plosive consonants, such as the "P" in "PaPa," produces a conical airflow that can travel up to 2 metres in 30 seconds. These results also emphasize that the duration of exposure during a conversation influences the risk of contamination as much as distance does.

A second study, published on 2 October in the journal Physical Review Fluids, describes the mechanism that produces microscopic droplets during speech: for consonants such as P and B, saliva filaments form on the lips, and are then stretched and fragmented into droplets. This research is being continued with the Metropolitan Opera Orchestra ("MET Orchestra") in New York, as part of a project to identify the safest conditions for continuing this prestigious orchestra's activity.
-end-
*The French scientists work at the Centre for Structural Biology (CNRS/Université de Montpellier/Inserm) and the Alexander Grothendieck Institute of Montpellier (CNRS/Université de Montpellier).

CNRS

Related Speech Articles from Brightsurf:

How everyday speech could transmit viral droplets
High-speed imaging of an individual producing common speech sounds shows that the sudden bursts of airflow produced by the articulation of consonants like /p/ or /b/ carry salivary and mucus droplets at least a meter in front of the speaker.

Speech processing hierarchy in the dog brain
Dog brains, like human brains, process speech hierarchically: intonation at lower stages, word meaning at higher stages, according to a new study by Hungarian researchers.

Computational model decodes speech by predicting it
UNIGE scientists have developed a neuro-computational model that helps explain how the brain identifies syllables in natural speech.

Variability in natural speech is challenging for the dyslexic brain
A new study provides neural-level evidence that the continuous variation in natural speech makes phoneme discrimination challenging for adults with developmental dyslexia, a reading deficit.

How the brain controls our speech
Speaking requires both sides of the brain: each hemisphere takes over part of the complex task of forming sounds, modulating the voice, and monitoring what has been said.

How important is speech in transmitting coronavirus?
Normal speech by individuals who are asymptomatic but infected with coronavirus may produce enough aerosolized particles to transmit the infection, according to aerosol scientists at UC Davis.

Using a cappella to explain speech and music specialization
Speech and music are two fundamentally human activities that are decoded in different brain hemispheres.

Speech could be older than we thought
The theory of the 'descended larynx' holds that before speech can emerge, the larynx must sit in a low position to produce differentiated vowels.

How the brain detects the rhythms of speech
Neuroscientists at UC San Francisco have discovered how the listening brain scans speech to break it down into syllables.
