
New learning procedure for neural networks

March 10, 2016

Rustling leaves, a creaking branch: to a mouse, these sensory impressions may at first seem harmless -- but not if a cat suddenly bursts out of the bush. In that case, they were clues to impending, life-threatening danger. Robert Gütig of the Max Planck Institute of Experimental Medicine in Göttingen has now found how the brain can link sensory perceptions to events that occur only after a delay. In a computer model, he has developed a learning procedure in which the model neurons learn to distinguish between many different stimuli by adjusting their activity to the frequency of the cues. The model works even when there is a time delay between the cue and the event or outcome. Gütig's learning procedure is not only vital to the survival of every living creature, in that it enables them to filter environmental stimuli; it could also help solve a number of technological learning problems. One possible application is in the development of speech recognition programs.

In the animal world, dangers are frequently preceded by warning signs: telltale sounds, movements and odours may be clues to an imminent attack. If a mouse survives an attack by a cat, its future will be brighter if it learns from the failed attempt and reads the clues earlier next time round. However, mice are constantly bombarded with a vast number of sensory impressions, most of which are not associated with danger. So how do they know which sounds and odours from their environment presage a cat attack and which do not?

This poses a problem for the mouse's brain. In most cases, the crucial environmental stimuli are separated in time from the actual attack, so the brain must link a clue and the resulting event (e.g. a sound and an attack) even though there is a delay between them. Previous theories have not satisfactorily explained how the brain bridges the gap between a cue and the associated outcome. Robert Gütig of the Max Planck Institute of Experimental Medicine has discovered how the brain can solve this problem. On the computer, he programmed a neural network that reacts to stimuli in the same way as a cluster of biological cells. This network can learn to filter out the cues that predict a subsequent event.

It depends on the frequency

The network learns by strengthening or weakening specific synapses between the model neurons. The foundation of the computer model is a synaptic learning rule under which individual neurons can increase or decrease their activity in response to a simple learning signal. Gütig has used this learning rule to establish a new learning procedure. "This 'aggregate-label' learning procedure is built on the concept of setting the connections between cells in such a way that the resulting neural activity over a certain period is proportional to the number of cues," explains Gütig. In this way, if a learning signal reflects the occurrence and intensity of certain events in the mouse's environment, the neurons learn to react to the stimuli that predict those events.
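
The proportionality idea described above can be sketched in code. The snippet below is a toy illustration under stated assumptions, not the actual model from the paper (which uses a gradient-based "multi-spike tempotron" rule): it assumes a simplified leaky integrate-and-fire neuron, a hypothetical fixed cue pattern (the first ten synapses firing together), and a crude update that nudges synapses whenever the neuron's output spike count over a trial differs from the number of cues the trial contained.

```python
# Toy sketch of aggregate-label learning (illustrative only; the real
# model uses a gradient-based multi-spike tempotron rule).
import numpy as np

rng = np.random.default_rng(0)

N_SYN = 50          # number of input synapses
T = 500             # time steps per trial
TAU = 20.0          # membrane time constant (in time steps)
THRESH = 1.0        # spike threshold

def run_neuron(weights, spikes):
    """Simulate a leaky integrate-and-fire neuron; return its spike count."""
    v, count = 0.0, 0
    for t in range(T):
        v += -v / TAU + weights @ spikes[:, t]   # leak + synaptic input
        if v >= THRESH:
            count += 1
            v = 0.0                              # reset after a spike
    return count

def make_trial(n_cues):
    """Background Poisson-like noise plus n_cues injections of a fixed
    cue pattern: the first 10 synapses fire together at random times."""
    spikes = (rng.random((N_SYN, T)) < 0.01).astype(float)
    for start in rng.choice(T - 10, size=n_cues, replace=False):
        spikes[:10, start] = 1.0
    return spikes

weights = rng.normal(0.0, 0.02, N_SYN)
LR = 0.002

for epoch in range(200):
    n_cues = int(rng.integers(0, 4))             # 0-3 cues per trial
    trial = make_trial(n_cues)
    err = n_cues - run_neuron(weights, trial)    # aggregate-label error
    if err != 0:
        # Potentiate or depress each synapse in proportion to the input
        # it delivered, so the spike count drifts toward the cue count.
        weights += LR * err * trial.sum(axis=1)
```

Over repeated trials, the error term pushes the neuron's total spike count toward the number of embedded cues, which is the proportionality the learning rule aims for; no information about *when* within the trial the cues occurred is ever given to the learner.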

However, Gütig's networks can learn to react to environmental stimuli even when no learning signals are available in the environment. They do this by interpreting the average neural activity within the network as a learning signal. Individual neurons learn to react to stimuli that occur in the same numbers as those to which other neurons in the network react. This 'self-supervised' learning follows a principle different from the Hebbian theory that has frequently been applied in artificial neural networks. Hebbian networks learn by strengthening the synapses between neurons that spike at the same time or in quick succession. "In self-supervised learning, it is not necessary for the neural activity to be temporally aligned. The total number of spikes in a given period is the deciding factor for synaptic change," says Gütig. This means that such networks can link sensory clues of different types, e.g. visual, auditory and olfactory, even when there are significant delays between their respective neural representations.
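
The self-supervised variant can be sketched the same way. The rate-based toy below is again only an illustration, not Gütig's spiking implementation: each model neuron's trial "spike count" is approximated by a rectified linear drive, and the learning signal is internal, namely the population-average count, which each neuron moves its own count toward.

```python
# Toy sketch of self-supervised aggregate-label learning: the target for
# each neuron is the mean activity of the whole population, not an
# external label. Rate-based simplification, not a spiking model.
import numpy as np

rng = np.random.default_rng(1)

N_NEURONS, N_INPUTS = 8, 30
W = rng.normal(0.0, 0.1, (N_NEURONS, N_INPUTS))
LR = 0.01

def spike_counts(W, x):
    # Rectified linear drive standing in for each neuron's spike count
    # over one trial.
    return np.maximum(W @ x, 0.0)

for trial in range(300):
    x = (rng.random(N_INPUTS) < 0.2).astype(float)  # which inputs occurred
    counts = spike_counts(W, x)
    target = counts.mean()          # internal, self-generated learning signal
    err = target - counts           # per-neuron mismatch with the population
    # Each neuron adjusts the weights of the inputs that were active,
    # pulling its own count toward the population average.
    W += LR * np.outer(err, x)

final = spike_counts(W, (rng.random(N_INPUTS) < 0.2).astype(float))
```

Note that the update only compares totals per trial; nothing requires the neurons' activity to be temporally aligned, which is the point of contrast with Hebbian coincidence-based learning.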

Not only does Gütig's learning procedure explain biological processes; it could also pave the way for far-reaching improvements to technological applications such as automatic speech recognition. "That could considerably simplify the training requirements for computer-based speech recognition. Instead of laboriously segmented language databases or complex segmentation algorithms, aggregate-label learning could manage with just the subtitles from newscasts, for example," says Gütig.
Original publication:

Robert Gütig
Spiking neurons can discover predictive features by aggregate-label learning
Science March 4, 2016; DOI: 10.1126/science.aab4113

Max-Planck-Gesellschaft
