Babies' first words can be predicted based on visual attention, IU study finds

December 05, 2016

BLOOMINGTON, Ind. -- Indiana University psychologists have shown that a baby's most likely first words are based upon their visual experience, laying the foundation for a new theory of infant language learning.

The findings also suggest new possibilities for the treatment of children with language deficits and autism.

Drawing on theories of statistical learning, IU researcher Linda Smith and colleagues found that the number of times an object enters an infant's field of vision "tips the scales" in favor of associating certain words with certain objects.

The research appears in the Royal Society journal Philosophical Transactions B, in a special issue on statistical learning.

"We think that children's first words are predictable based on their visual experience with objects and the prevalence of those objects in their visual world," said Smith, a professor in the IU Bloomington College of Arts and Sciences' Department of Psychological and Brain Sciences and senior author on the study.

"Visual memory may be the initial key to getting words stuck on objects -- familiar visual objects like table, shirt, bottle or spoon," she added. "It's an aggregated experience; those very first words may be learned -- slowly and incrementally -- for a few visually pervasive objects. This may be how infants begin to break into language before their first birthday."

The study's results could also help inform interventions for children with delayed speech and other language disorders.

"Difficulty learning words could stem from visual processing problems," Smith added. "Children who are late talkers have slow or age-delayed visual processing skills for objects, for example. Children with autism have object-processing problems as well."

Although many researchers have studied infants' first words to understand learning, Smith said none have approached the question from the visual side.

"While studying language acquisition from the 'word side' may benefit those studying later stages of language learning -- at the ages of 18 months to 3 years -- it cannot account for how children break into language," she said.

Under the new theory, which Smith and colleagues call the Pervasiveness Hypothesis, a few highly prevalent objects stand out to infants among the "clutter" of other less frequent objects to become their first words.

To conduct their study, IU researchers looked at videos that showed the visual field of eight children -- five girls and three boys -- between 8 and 10 months old, the period before children engage in verbal interactions with parents and caregivers.

The videos came from head-mounted cameras worn by the children for an average of 4.4 hours. Caregivers were told the cameras would record the children's daily activities, not words or objects specifically, and could choose when to activate the camera.

For the purpose of the study, researchers observed mealtime scenes, defined as any eating by anyone at any time or location -- in cars, at playtime or in a high chair, for example. The recordings yielded 917,207 mealtime frames, with one image sampled every five seconds. Five objects were recorded for each frame: a total of 745 objects.

Using an accepted method to index child vocabulary, the researchers then divided the named objects into "first nouns," which are acquired by half of all 16-month-olds; "early nouns," which are known by half of all 30-month-olds; and "late nouns," which are acquired at later stages of learning.

First nouns include words such as table, shirt, chair, bowl, cup, bottle, food, spoon and plate.

The study's results revealed a strong correlation between the most frequently appearing objects and "first nouns": the 15 most pervasive objects in the images corresponded to words in the first-noun category.

"The comparison of first and early nouns was particularly striking, since both sets of object names are acquired quite early in childhood and refer to objects common in households with infants," said Elizabeth Clerkin, a Ph.D. student in the IU Bloomington Department of Psychological and Brain Sciences and first author on the study.

"That infants' visual environment during mealtime consistently involves a very small number of objects -- and the names of these high-frequency objects are among those normally learned first by infants -- suggests visual experience is doing the heavy lifting in very early word learning," she added.

Whether children with speech disorders fail to pick up visual regularities in their environment or simply live in households with fewer such regularities, Smith said it is vital to explore the role of both words and vision in language learning.

"Taking account of the visual brings a whole new dimension of word-learning into view," she added. "If all you ever worry about is the word side of word-learning, you may be missing half the problem: visual cues that aid language learning."
In addition to Smith and Clerkin, co-authors on the study are professor Chen Yu and laboratory technician Elizabeth Hart of the IU Bloomington Department of Psychological and Brain Sciences and Georgia Institute of Technology professor James M. Rehg.

This research is supported in part by the National Science Foundation. The study grew out of a larger NSF grant to IU to create a collection of over 500 million images to track the visual regularities in the lives of children from birth through age 24 months.

Indiana University

