UCSD scientists explain and improve upon 'enigmatic' probability formula

October 16, 2003

San Diego, Oct. 16, 2003 -- Scientists at the University of California, San Diego (UCSD) have developed new insight into a formula that helped British cryptanalysts crack the German Enigma code in World War II. Writing in the Oct. 17 edition of the journal Science, UCSD Jacobs School of Engineering professor Alon Orlitsky and graduate students Narayana P. Santhanam and Junan Zhang shed light on a lingering mathematical mystery and propose a new solution that could help improve automatic speech recognition, natural language processing, and other machine learning software.

In the article, Orlitsky and his colleagues unlock some of the secrets of the "Good-Turing estimator," a formula for estimating the probability of elements based on observed data. The formula is named after the famed mathematicians I.J. Good and Alan Turing who, during WWII, were among a group of cryptanalysts charged with breaking the Enigma cipher -- the code used to encrypt German military communications. Working at Bletchley Park outside London, the group has been credited by some with shortening the war by several years. (Its work also led to the development of the first modern computer and has been documented in a number of books and movies.)

The cryptanalysts were greatly aided by their possession of the Kenngruppenbuch, the German cipher book, previously captured by British Intelligence, that contained all possible secret keys to Enigma. They documented the keys used by various U-boat commanders in previously decrypted messages and used this information to estimate the distribution of pages from which commanders picked their secret keys.

The prevailing technique at the time estimated the likelihood of each page by simply using its empirical frequency, the fraction of the time it had been picked in the past. But Good and Turing developed an unintuitive formula that bore little resemblance to conventional estimators. Surprisingly, this Good-Turing estimator outperformed the more intuitive approaches. Following the war, Good published the formula, mentioning that Turing had an "intuitive demonstration" for its power, but not describing what that demonstration entailed.
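
The article does not reproduce the formula itself, but the classic form of the Good-Turing estimate is well known: if N_r denotes the number of distinct symbols observed exactly r times in a sample of size N, the estimator reserves total probability N_1/N for symbols never seen, and assigns a symbol seen r times an adjusted count of (r+1)N_{r+1}/N_r. The minimal Python sketch below illustrates this simple form on a toy example; the function name and data are illustrative, and practical implementations additionally smooth the N_r counts and renormalize.

```python
from collections import Counter

def good_turing(sample):
    """Illustrative simple-form Good-Turing estimate (no smoothing or renormalization)."""
    counts = Counter(sample)                  # how often each symbol was observed
    n = sum(counts.values())                  # total number of observations
    n_r = Counter(counts.values())            # N_r: number of symbols seen exactly r times

    probs = {}
    for symbol, r in counts.items():
        # Adjusted count r* = (r + 1) * N_{r+1} / N_r; fall back to r when N_{r+1} = 0.
        r_star = (r + 1) * n_r[r + 1] / n_r[r] if n_r[r + 1] else r
        probs[symbol] = r_star / n

    unseen_mass = n_r[1] / n                  # probability reserved for never-seen symbols
    return probs, unseen_mass

# Toy example: key-book pages observed in previously decrypted messages.
probs, unseen = good_turing(["A", "A", "A", "B", "B", "C", "D"])
print(probs)    # Good-Turing estimates for pages already seen
print(unseen)   # chance the next message uses a page never seen before
```

Unlike the empirical-frequency estimate, which assigns probability zero to any page not yet observed, this estimate leaves nonzero mass for unseen pages, which is exactly where the intuitive approach fails worst.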

Since then, Good-Turing has been incorporated into a variety of applications such as information retrieval, spell-checking, and speech recognition software, where it is used to learn automatically the underlying structure of the language. But despite its usefulness, "its performance has remained something of an enigma itself," said Orlitsky, a professor in the Department of Electrical and Computer Engineering. While partial explanations had been offered for why Good-Turing works well, no objective evaluation of its performance, and no results establishing its optimality, had been produced. Scientists also observed that, although it worked well in many circumstances, at times its performance fell short.

Now, Orlitsky, Santhanam, and Zhang believe they have unraveled some of the mystery surrounding Good-Turing, and constructed a new estimator that, unlike the historic formula, is reliable under all conditions. Motivated by information-theoretic and machine-learning considerations, they propose a natural measure for the performance of an estimator. Called attenuation, it evaluates the highest possible ratio between the probability assigned to each symbol in a sequence by any distribution, and the corresponding probability assigned by the estimator.
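
One natural way to formalize that description, consistent with the wording above (a sketch, not necessarily the paper's exact definition), is as a worst-case per-symbol ratio in the limit of long sequences:

```latex
% Attenuation of an estimator q: the worst-case per-symbol factor by which q
% can understate the probability of a sequence x^n = x_1 ... x_n, relative to
% the best i.i.d. distribution p (a plausible formalization, stated as an assumption).
\[
  \operatorname{atten}(q) \;=\; \limsup_{n \to \infty}\; \max_{p}\; \max_{x^n}
  \left( \frac{p(x^n)}{q(x^n)} \right)^{1/n}
\]
```

Under this reading, the attenuation figures quoted below (1.39, 2, and 1) are per-symbol factors in the long-sequence limit.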

The UCSD researchers show that intuitive estimators, such as empirical frequency, can attenuate the probability of a symbol by an arbitrary amount. They also prove that Good-Turing performs well in general. While it can attenuate the probability of symbols by a factor of 1.39, it never attenuates by a factor of more than 2. Motivated by these observations, they derived an estimator whose attenuation is 1. This means that as the length of any sequence increases, the probability assigned to each symbol by the new estimator is as high as that assigned to it by any distribution.

"While there is a considerable amount of work to be done in simplifying and further improving the new estimator," concluded Orlitsky, "we hope that this new framework will eventually improve language modeling and hence lead to better speech recognition and data mining software."
-end-
* "Always Good-Turing: Asymptotically Optimal Probability Estimation," Science Magazine. http://www.sciencemag.org/

Media Contact: Doug Ramsey 858-822-5825 dramsey@ucsd.edu

University of California - San Diego
