Can you hear what the neural net hears?

October 31, 1999

Biomedical engineers at the University of Southern California have created the world's first machine system that can recognize spoken words better than humans can. In benchmark testing, USC's speech recognition system bested all existing computer systems and outperformed the keenest human ears. The system may eventually advance voice control of computers and other machines, help the deaf, aid air traffic controllers and others who must understand speech in noisy environments, and instantly produce clean transcripts of conversations, with each speaker correctly identified. "We'll definitely see an improvement in the interaction between man and computer," said ONR Program Officer Joel Davis, who helps fund the research. "With speech recognition capability, computer keyboards could become obsolete."

The U.S. Navy is supporting the research for its potential benefits to Navy sonar. A demonstration of the Berger-Liaw Neural Network Speaker-Independent Speech Recognition System can be found online at: http://www.usc.edu/ext-relations/news_service/real/real_video.html
-end-


Office of Naval Research