
New learning procedure for neural networks

March 10, 2016

Rustling leaves, a creaking branch: to a mouse, these sensory impressions may at first seem harmless -- but not if a cat suddenly bursts out of the bush. In that case, they were clues to impending, life-threatening danger. Robert Gütig of the Max Planck Institute of Experimental Medicine in Göttingen has now found out how the brain can link sensory perceptions to events that occur only after a delay. In a computer model, he has developed a learning procedure in which model neurons learn to distinguish between many different stimuli by adjusting their activity to the frequency of the cues. The model works even when there is a time delay between the cue and the event it predicts. The kind of learning that Gütig's procedure describes is vital for the survival of every living creature, because it allows environmental stimuli to be filtered for those that matter; the procedure could also help solve a number of technological learning problems. One possible application is in the development of speech recognition programs.

In the animal world, dangers are frequently preceded by warning signs: telltale sounds, movements and odours may be clues to an imminent attack. If a mouse survives an attack by a cat, its future will be brighter if it learns from the failed attempt and reads the clues earlier next time round. However, mice are constantly bombarded with a vast number of sensory impressions, most of which are not associated with danger. So how do they know which sounds and odours from their environment presage a cat attack and which do not?

This poses a problem for the mouse's brain. In most cases, the crucial environmental stimuli are separated in time from the actual attack, so the brain must link a clue and the resulting event (e.g. a sound and an attack) even though there is a delay between them. Previous theories have not satisfactorily explained how the brain bridges the gap between a cue and the associated outcome. Robert Gütig of the Max Planck Institute of Experimental Medicine has discovered how the brain can solve this problem. On the computer, he programmed a neural network that reacts to stimuli in the same way as a cluster of biological cells. This network can learn to filter out the cues that predict a subsequent event.

It depends on the frequency

The network learns by strengthening or weakening specific synapses between the model neurons. The foundation of the computer model is a synaptic learning rule under which individual neurons can increase or decrease their activity in response to a simple learning signal. Gütig has used this learning rule to establish a new learning procedure. "This 'aggregate-label' learning procedure is built on the concept of setting the connections between cells in such a way that the resulting neural activity over a certain period is proportional to the number of cues," explains Gütig. In this way, if a learning signal reflects the occurrence and intensity of certain events in the mouse's environment, the neurons learn to react to the stimuli that predict those events.
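The following is a minimal Python sketch of this idea under strong simplifying assumptions: a single leaky integrate-and-fire model neuron is nudged so that its total number of output spikes per trial matches the number of cue occurrences in that trial. The plain spike-count error rule, the parameter values and all names are illustrative stand-ins, not Gütig's actual gradient-based synaptic rule.

import numpy as np

rng = np.random.default_rng(0)

N_IN = 50            # number of input synapses
T = 500              # trial length in time steps
TAU_M = 20.0         # membrane time constant (in time steps)
V_THRESH = 1.0       # spike threshold
ETA = 0.01           # learning rate

# a fixed "cue": a subset of inputs that fire together whenever the cue occurs
CUE_INPUTS = np.arange(10)


def make_trial(n_cues):
    """Background noise spikes plus n_cues insertions of the cue pattern."""
    spikes = (rng.random((N_IN, T)) < 0.002).astype(float)
    for t in rng.choice(T, size=n_cues, replace=False):
        spikes[CUE_INPUTS, t] = 1.0
    return spikes


def count_output_spikes(w, spikes):
    """Leaky integrate-and-fire neuron; returns its spike count for one trial."""
    v, n_spikes = 0.0, 0
    for t in range(T):
        v += -v / TAU_M + w @ spikes[:, t]
        if v >= V_THRESH:
            n_spikes += 1
            v = 0.0                          # reset after each output spike
    return n_spikes


def aggregate_label_step(w, spikes, target):
    """One learning step: move the trial's total spike count toward the target
    (the number of cues), crediting synapses by how active they were."""
    error = target - count_output_spikes(w, spikes)
    w += ETA * error * spikes.mean(axis=1)   # aggregate signal, no spike timing needed
    return error


w = rng.normal(0.0, 0.05, N_IN)
for trial in range(500):
    n_cues = int(rng.integers(0, 5))         # 0 to 4 cue occurrences per trial
    aggregate_label_step(w, make_trial(n_cues), target=n_cues)

The intent of such a rule is that the weights of the cue inputs grow until the neuron's activity over a trial tracks how often the cue appeared, regardless of when it appeared.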

However, Gütig's networks can learn to react to environmental stimuli even when no learning signals are available in the environment. They do this by interpreting the average neural activity within the network as a learning signal. Individual neurons learn to react to stimuli that occur about as often as those to which the other neurons in the network respond. This 'self-supervised' learning follows a principle different from the Hebbian learning that has frequently been applied in artificial neural networks. Hebbian networks learn by strengthening the synapses between neurons that spike at the same time or in quick succession. "In self-supervised learning, it is not necessary for the neural activity to be temporally aligned. The total number of spikes in a given period is the deciding factor for synaptic change," says Gütig. This means that such networks can link sensory clues of different types, e.g. visual, auditory and olfactory, even when there are significant delays between their respective neural representations.
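Continuing the simplified sketch above, the self-supervised variant could look roughly like this: a small population of such neurons runs on the same input, and each neuron uses the population's mean spike count, rather than an externally supplied target, as its learning signal. Again, the averaging scheme and all parameters are illustrative assumptions, not the rule from the publication.

import numpy as np

rng = np.random.default_rng(1)

N_IN, N_NEURONS, T = 50, 20, 500
TAU_M, V_THRESH, ETA = 20.0, 1.0, 0.01


def population_spike_counts(W, spikes):
    """Run N_NEURONS leaky integrate-and-fire neurons in parallel on one trial."""
    v = np.zeros(N_NEURONS)
    counts = np.zeros(N_NEURONS)
    for t in range(T):
        v += -v / TAU_M + W @ spikes[:, t]
        fired = v >= V_THRESH
        counts += fired
        v[fired] = 0.0                       # reset the neurons that spiked
    return counts


def self_supervised_step(W, spikes):
    """Each neuron takes the population's mean spike count as its learning signal,
    so no external label is required."""
    counts = population_spike_counts(W, spikes)
    target = counts.mean()                   # internally generated aggregate signal
    errors = target - counts                 # per-neuron mismatch with the population
    W += ETA * errors[:, None] * spikes.mean(axis=1)[None, :]
    return errors


W = rng.normal(0.0, 0.05, (N_NEURONS, N_IN))
trial_spikes = (rng.random((N_IN, T)) < 0.005).astype(float)   # toy input spike trains
self_supervised_step(W, trial_spikes)

Because only spike counts over the whole trial enter the update, inputs of different modalities can contribute even when their spikes arrive at very different times.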

Not only does Gütig's learning procedure explain biological processes; it could also pave the way for far-reaching improvements to technological applications such as automatic speech recognition. "That would considerably simplify the training requirements for computer-based speech recognition. Instead of laboriously segmented language databases or complex segmentation algorithms, aggregate-label learning could manage with just the subtitles from newscasts, for example," says Gütig.
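As a rough illustration of that last point, the aggregate label for a whole, unsegmented audio clip could simply be how often each word of interest occurs in its subtitle, with no information about when it is spoken. The function and word list below are hypothetical examples, not part of any existing speech-recognition pipeline.

from collections import Counter

def aggregate_labels(subtitle_text, keywords):
    """Turn an unsegmented subtitle into aggregate labels: how often each
    keyword occurs, with no timing information at all."""
    counts = Counter(subtitle_text.lower().split())
    return {word: counts[word] for word in keywords}

# the count for "election" could then serve as the target spike count of an
# "election" detector neuron processing the entire, unsegmented audio clip
print(aggregate_labels("The election results came in after the election night coverage",
                       ["election", "weather"]))
# {'election': 2, 'weather': 0}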
-end-
Original publication:

Robert Gütig: Spiking neurons can discover predictive features by aggregate-label learning. Science, March 4, 2016. DOI: 10.1126/science.aab4113

Max-Planck-Gesellschaft
