The "eyes" say more than the "mouth" and can distinguish English sounds

June 30, 2020

Overview

A research team from the Department of Computer Science and Engineering and the Electronics-Inspired Interdisciplinary Research Institute at Toyohashi University of Technology has discovered that differences in the ability to hear and distinguish English words containing L and R, which are considered difficult for Japanese people, appear in the responses of the pupil (the so-called "black part" of the eye). While the pupil's role is to adjust the amount of light entering the eye, its size is also known to change in ways that reflect a person's cognitive state. In this study, the research team measured pupil size while playing pairs of English words such as "light" and "right", and showed that the ability to distinguish English words can be estimated objectively from the eyes.

Details

As society becomes more globalized, improving English proficiency is drawing attention in many fields. However, Japanese speakers are said to have particular difficulty pronouncing and hearing the distinction between L and R, sounds that do not exist in Japanese, as in "glass" and "grass". Just as sounds that cannot be recognized cannot be pronounced, each person's ability to hear and distinguish English sounds is a very important indicator for effective English learning.

As in the proverb "The eyes say more than the mouth", the human pupil is known to reflect a variety of cognitive states. The research team therefore tried to estimate the ability to hear and distinguish L and R by focusing on the pupillary dilation response, in which the pupil dilates in response to a change in sound. Specifically, while English words containing L (e.g., "light") played continuously, the team occasionally mixed in English words containing R (e.g., "right") and investigated how the pupils of the participants responded to those sounds. Participants were classified into two groups according to their scores on an English listening discrimination test performed in advance, and the pupillary responses of the two groups were compared. The group with a strong ability to distinguish L and R showed a larger pupillary response than the group with a weak ability. Moreover, the pupillary response alone could estimate, with very high accuracy, the discrimination ability measured by the advance test. Participants were not required to pay attention to the English words being played; they simply let the words play, yet their discrimination ability could be estimated from their pupillary responses alone. The researchers believe that in the future this finding could serve as a new indicator for easily estimating a person's ability to hear and distinguish English.
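To make the paradigm concrete, here is a minimal sketch, in Python, of how an auditory-oddball pupil analysis of this kind could look. Everything in it is a hypothetical illustration rather than the authors' actual stimuli or analysis pipeline: the trial counts, effect sizes, and function names (make_trial_sequence, deviant_dilation, simulate) are assumptions made for the example. Standards ("light") play repeatedly, deviants ("right") are mixed in occasionally, and the measure of interest is the baseline-corrected pupil dilation evoked by the deviants.

```python
import numpy as np

# Hypothetical illustration of an auditory-oddball pupil analysis;
# not the published study's actual stimuli or pipeline.
rng = np.random.default_rng(0)

def make_trial_sequence(n_trials=200, deviant_prob=0.1):
    """Return 'light'/'right' labels for one oddball block."""
    return ["right" if rng.random() < deviant_prob else "light"
            for _ in range(n_trials)]

def deviant_dilation(pupil_traces, labels, baseline_samples=50):
    """Mean baseline-corrected pupil dilation on deviant ('right') trials.

    pupil_traces: (n_trials, n_samples) array of pupil diameter, where the
    first `baseline_samples` samples of each trial precede sound onset.
    """
    baseline = pupil_traces[:, :baseline_samples].mean(axis=1, keepdims=True)
    corrected = pupil_traces - baseline  # dilation relative to pre-onset baseline
    deviant_rows = [i for i, lab in enumerate(labels) if lab == "right"]
    return corrected[deviant_rows, baseline_samples:].mean()

# Simulated data: noise plus a dilation on deviant trials, with a larger
# effect for the "high-ability" listener (both effect sizes are made up).
labels = make_trial_sequence()
n_trials, n_samples = len(labels), 150
deviant_rows = [i for i, lab in enumerate(labels) if lab == "right"]

def simulate(effect_size):
    traces = rng.normal(0.0, 0.05, size=(n_trials, n_samples))
    traces[deviant_rows, 50:] += effect_size  # deviant-evoked dilation
    return traces

high, low = simulate(0.30), simulate(0.05)
print(f"high-ability deviant dilation: {deviant_dilation(high, labels):.3f}")
print(f"low-ability  deviant dilation: {deviant_dilation(low, labels):.3f}")
```

In the study itself, the deviant-evoked dilation was related to behavioral discrimination scores; this sketch only shows the core baseline-correction and averaging step on simulated data.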

"Up to now, the evaluation of an individual's English listening ability has been carried out by actually performing a test in which they are made to listen to English words, and scoring whether the answers are correct or incorrect. However, we focused on the pupil, which is a biological signal, with the goal of extracting objective abilities that did not depend on the participants' responses. Although the English words including L and R which were not being taken notice of were easy for all participants to potentially hear and distinguish, pupillary responses differed according to their abilities. So, I believe that this indicates that there is a possibility that our pupillary responses are reflecting differences in unconscious language processing", the lead author Yuya Kinzuka, a PhD candidate, explained.

Professor Shigeki Nakauchi, the leader of the research team, explained, "Until now it has been difficult even for learners themselves to recognize their own listening ability, which has sometimes led to a decrease in training motivation. This research makes it possible not only for the learner but also for a third party to objectively visualize the learner's listening ability from the outside. I expect that objective measurement of discrimination ability will advance in various fields such as language and music." Research team member Professor Tetsuto Minami added, "This discovery shows that not only simple sounds such as pure tones, but also higher-order factors such as differences in utterances, are reflected in the pupillary response. If discrimination ability can be improved by controlling pupil dilation from the outside, I expect this will be useful as an English learning method."

Future Outlook

Based on these research results, the team has indicated that a new method of estimating the ability to hear and distinguish English from the pupillary response could lead to a system for efficiently studying the L/R distinction that Japanese speakers find difficult. Furthermore, learning difficulties caused by distinguishing sounds that do not exist in the native language are known to occur in other pairings as well, such as when an English speaker learns Chinese; ultimately, the team hopes this will become a new indicator for estimating language ability that is not limited to Japanese. In addition, because participants do not need to pay attention or physically respond to the English words, these results are also expected to be useful for language learning in patients with movement disorders and speech disorders.
-end-
Reference

Kinzuka, Y., Minami, T., & Nakauchi, S. (2020). Pupil dilation reflects English /l//r/ discrimination ability for Japanese learners of English: a pilot study. Scientific Reports, 10(1), 1–9. DOI: 10.1038/s41598-020-65020-1. https://www.nature.com/articles/s41598-020-65020-1

The present study was conducted with the assistance of Grants-in-Aid for Scientific Research A (26240043) and Basic Research B (17H01807) from the Japan Society for the Promotion of Science.

Toyohashi University of Technology (TUT)
