Anxious about public speaking? Your smart speaker could help

April 25, 2020

UNIVERSITY PARK, Pa. - Individuals who fear talking in front of a crowd could soon have a new tool to ease public speaking anxiety: their smart speaker.

A team of researchers at Penn State has developed a public-speaking tutor on the Amazon Alexa platform. The tutor enables users to engage in a cognitive restructuring exercise - a psychological technique that helps anxious individuals recognize and modify negative thinking patterns. When users deployed the tutor in a recent study, their pre-speech anxiety was relieved, according to the researchers.

"This study represents a significant shift in our use of smart speakers, from a tool that answers questions to one that acts as a helper or coach," said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State.

According to Jinping Wang, doctoral student in the Bellisario College of Communications and lead author on the paper, users' interactions with Alexa not only helped to ease their speech anxiety, but their feedback suggests that the tutor could be a viable alternative to person-to-person coaching sessions.

"There is often a concern of being judged by human tutors or human therapists," said Wang. "If we can use a machine like Alexa to provide such training to individuals with speech anxiety or social anxiety, we can help them get rid of their concern of being judged by a human."

In the study, participants were guided to interact with an Amazon Echo smart speaker and were randomly assigned to interact with either a highly social Alexa or one that was less social in its greetings and expressions. The participants were then encouraged to use what they learned to prepare and present a short speech through a virtual reality application that simulated a room with a 20-person audience. After their speech, participants completed a questionnaire about their experience.

The researchers found that the high-sociability condition - in which Alexa adopted a more personal conversational style - provided a better user experience by establishing a sense of interpersonal closeness with the user.

"If you think about the usual interactions with Alexa, they're quite dry and very functional," said Saeed Abdullah, assistant professor of information sciences and technology and a collaborator on the project. "But providing some sort of social cues seems to result in positive outcomes for users."

Sundar added, "People are not simply anthropomorphizing the machine, but are responding to increased sociability by feeling a sense of closeness with the machine, which is associated with lowered speech anxiety."

According to the researchers, the approach has the potential to assist individuals who are anxious about public speaking, from the comfort of their own homes. In future work, smart speakers could be used in similar ways to help individuals with other forms of anxiety.

"Alexa is one of those things that lives in our homes," concluded Sundar. "As such, it occupies a somewhat intimate space in our lives. It's often a conversation partner, so why not use it for other things rather than just answering factual questions?"

Hyun Yang and Ruosi Shao, doctoral students in the Bellisario College of Communications, contributed to the project. The researchers' paper was accepted after blind peer review to the 2020 ACM Conference on Human Factors in Computing Systems, which was canceled due to the global coronavirus outbreak. The work is being published in the conference proceedings, released on April 25.

Penn State
