
How sign language users learn intonation

September 28, 2015

A spoken language is more than just words and sounds. Speakers use changes in pitch and rhythm, known as prosody, to provide emphasis, show emotion, and otherwise add meaning to what they say. But a language does not need to be spoken to have prosody: sign languages, such as American Sign Language (ASL), use movements, pauses, and facial expressions to achieve the same goals. In a study appearing today in the September 2015 issue of Language, three linguists look at intonation (a key part of prosody) in ASL and find that native ASL signers learn intonation in much the same way that users of spoken languages do.

Diane Brentari (University of Chicago), Joshua Falk (University of Chicago), and George Wolford (Purdue University) studied how deaf children (ages 5-8) who were native learners of ASL used intonational features like 'sign lengthening' and facial cues as they acquired ASL. They found that children learned these features in three stages of "appearance, reorganization, and mastery": accurately replicating their use in simpler contexts, attempting unsuccessfully at first to use them in more challenging contexts, and then using them accurately in all contexts as they fully learned the rules of prosody. Previous research has shown that native learners of spoken languages acquire intonation following a similar pattern. Brentari et al. also found that young signers of ASL use certain intonational features with different frequencies than adult ASL signers.

This study, "The acquisition of prosody in American Sign Language", is the first comparative analysis of prosody in ASL between children and adults who are native ASL signers, and helps demonstrate the similarities in language acquisition between signed and spoken languages. This research may also make it easier to accurately transcribe certain linguistic units of ASL, which could benefit automatic ASL translation through motion-capture software. Brentari et al.'s research was supported by grants from the National Science Foundation and the University of Chicago's Center for Gesture, Sign, and Language.
-end-
An open-access version of this article is available online: http://www.linguisticsociety.org/sites/default/files/08e_91.3Brentari.pdf

Other highlights from the September 2015 issue of Language include:

A comparative study on how different sign languages express spatial relationships: Pamela Perniss (University of Brighton), Inge Zwitserlood (Radboud University), and Asli Ozyurek (Max Planck Institute for Psycholinguistics)

Evidence that languages can borrow prefixes and suffixes directly from other languages, rather than only 'borrowing indirectly' from loanwords: Frank Seifart (University of Amsterdam)

Language, a leading journal in the discipline of linguistics, is published quarterly by the Linguistic Society of America.

Linguistic Society of America


Everyone's seen a piece of science getting over-exaggerated in the media. Most people would be quick to blame journalists and big media for getting in wrong. In many cases, you'd be right. But there's other sources of hype in science journalism. and one of them can be found in the humble, and little-known press release. We're talking with Chris Chambers about doing science about science journalism, and where the hype creeps in. Related links: The association between exaggeration in health related science news and academic press releases: retrospective observational study Claims of causality in health news: a randomised trial This...