Native language governs the way toddlers interpret speech sounds, according to Penn study

October 01, 2007

Toddlers are learning language skills earlier than expected: by the age of 18 months they understand enough of the lexicon of their own language to recognize how speakers use sounds to convey meaning.

They also ignore sounds that don't play a significant role in speaking their native tongue, according to a study by a University of Pennsylvania psychologist.

The study shows how important the child's first year is in acquiring language. By listening to their parents and learning words, children discover how speech in their language works, a process that is vital for gaining command of vocabulary and grammar.

This is the first time scientists have shown that children as young as 18 months actively interpret the phonetic characteristics of their particular language when they learn words. Previously, scientists had speculated that this ability would emerge much later in life, once children had already amassed large vocabularies.

Previous research showed that at birth infants can distinguish most of the phonetic contrasts used by the world's languages. This "universal" capacity shifts over the first year to a language-specific pattern in which infants retain or improve categorization of native-language sounds but fail to discriminate many non-native sounds. Eventually, they learn to ignore subtle speech distinctions that their language does not use. This is why Japanese toddlers, like Japanese adults, cannot tell apart the English "r" and "l" sounds, and why English speakers have trouble with certain French vowels: after a year of language learning, those sounds fall into a single category for non-native listeners. The Penn study shows that even when two words sound very different, toddlers know whether to take this difference seriously or to chalk it up to random variation, depending on how their language works.

"The results demonstrate that at 18 months children have a rudimentary understanding of the 'sound system' of their language and that knowledge guides their interpretation of the sounds they encounter," said Daniel Swingley, assistant professor in the Department of Psychology at Penn who worked with colleagues from the University of British Columbia and the Max-Planck-Institute for Psycholinguistics.

"Children can easily hear how the same word can be pronounced in different ways. We might say, 'Is that your kiiiiiitty"' or, 'Show me the kitty.' In English, we're still talking about the same cat. But children have to figure this out. In other languages, like Japanese or Finnish, those two versions of "kitty" could mean completely different things. Our study showed that 18-month-olds have already learned this and apply that knowledge when learning new words."

Psychologists tested vowel duration ("kitty" versus "kiiiitty") in three experiments comparing Dutch- and English-learning 18-month-olds. Children were shown two different toys. With one toy, researchers repeated a word dozens of times, naming it a "tam." The other toy was given the same label, but with an acoustically longer vowel ("taam").

Dutch children, learning a language in which words can be differentiated by how long a vowel is pronounced, interpreted the variation as meaningful and learned which word went with each object. English-learning children ignored the elongation of the vowel.

English learners did not somehow lack the cognitive power to learn both words. They could hear the difference between the words, and they succeeded on words that really are different in English ("tam" vs. "tem"). The difference arose from the phonological generalizations children had already made from their brief experience with English: "tam" and "taam," like "kitty" and "kiiiitty," mean the same thing. Dutch children, on the other hand, interpreted vowel duration as lexically contrastive, in keeping with the properties of their language.

The study, to appear in the Oct. 1 issue of the Proceedings of the National Academy of Sciences, was funded by the Max-Planck-Gesellschaft, the Nederlandse Organisatie voor Wetenschappelijk Onderzoek's Spinoza Prize, the National Science Foundation, the National Institutes of Health and the Canadian Natural Sciences and Engineering Research Council.

The study was performed by Swingley, Christiane Dietrich of the Max-Planck-Institute for Psycholinguistics and Janet F. Werker of the University of British Columbia.

University of Pennsylvania
