Geoffrey E. Hinton named first recipient of the $100,000 David E. Rumelhart Prize for analysis of human cognition

May 03, 2001

PITTSBURGH--The Glushko-Samuelson Foundation and the Cognitive Science Society have announced that Geoffrey E. Hinton is the first recipient of the David E. Rumelhart Prize for contemporary contributions to the formal analysis of human cognition.

Hinton, the director of the Gatsby Computational Neuroscience Unit at University College London, was chosen from a large field of outstanding nominees because of his seminal contributions to the understanding of neural networks.

"Hinton's insights into the analysis of neural networks played a central role in launching the field in the mid-1980s," said Psychology Professor James McClelland of Carnegie Mellon University, who chairs the prize selection committee.

"Geoff also played a major role in conveying the relevance of neural networks to higher-level cognition." Lawrence Barsalou, a professor at Emory University and president of the Cognitive Science Society, agreed. "Hinton's contributions to cognitive science have been pivotal. As the first recipient he sets a great example for future awards."

Hinton will receive the prize, which includes a monetary award of $100,000, at the annual meeting of the Society in Edinburgh, Scotland, in early August, 2001.

The Rumelhart Prize acknowledges intellectual generosity and effective mentoring as well as scientific insight. "Dave Rumelhart gave away many scientific ideas, and made important contributions to the work of many of his students and co-workers," said Robert J. Glushko, president of the Glushko-Samuelson Foundation.

"Hinton stands out not only for his own contributions but for his exemplary record in mentoring young scientists," he added. Eighteen graduate students have received their doctoral degrees under Hinton's supervision.

In conjunction with naming Hinton as the first recipient of the David E. Rumelhart Prize, the Glushko-Samuelson Foundation announced that the prize will be awarded on an annual basis, instead of biennially.

"This change reflects the number of outstanding scientists who were nominated for the award," noted Glushko. "I am pleased that my foundation can play a role in honoring their contributions to cognitive science." The second recipient of the prize will be announced at the Edinburgh meeting of the Society, and will give the prize lecture at the next annual meeting, to be held at George Mason University in August 2002.
-end-
For further information, please visit the David E. Rumelhart Prize web site:

http://www.cnbc.cmu.edu/derprize/DerPrize2001.html

or contact:

Robert J. Glushko, 415-644-8731
James L. McClelland, 412-268-3157

Carnegie Mellon University
