
Model sheds light on inhibitory neurons' computational role

January 09, 2017

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory have developed a new computational model of a neural circuit in the brain, which could shed light on the biological role of inhibitory neurons -- neurons that keep other neurons from firing.

The model describes a neural circuit consisting of an array of input neurons and an equivalent number of output neurons. The circuit performs what neuroscientists call a "winner-take-all" operation, in which signals from multiple input neurons induce a signal in just one output neuron.

Using the tools of theoretical computer science, the researchers prove that, within the context of their model, a certain configuration of inhibitory neurons provides the most efficient means of enacting a winner-take-all operation. Because the model makes empirical predictions about the behavior of inhibitory neurons in the brain, it offers a good example of the way in which computational analysis could aid neuroscience.

The researchers will present their results this week at the conference on Innovations in Theoretical Computer Science. Nancy Lynch, the NEC Professor of Software Science and Engineering at MIT, is the senior author on the paper. She's joined by Merav Parter, a postdoc in her group, and Cameron Musco, an MIT graduate student in electrical engineering and computer science.

For years, Lynch's group has studied communication and resource allocation in ad hoc networks -- networks whose members are continually leaving and rejoining. But recently, the team has begun using the tools of network analysis to investigate biological phenomena.

"There's a close correspondence between the behavior of networks of computers or other devices like mobile phones and that of biological systems," Lynch says. "We're trying to find problems that can benefit from this distributed-computing perspective, focusing on algorithms for which we can prove mathematical properties."

Artificial neurology

In recent years, artificial neural networks -- computer models roughly based on the structure of the brain -- have been responsible for some of the most rapid improvements in artificial-intelligence systems, from speech-transcription to face-recognition software.

An artificial neural network consists of "nodes" that, like individual neurons, have limited information-processing power but are densely interconnected. Data are fed into the first layer of nodes. If the data received by a given node meet some threshold criterion -- for instance, if they exceed a particular value -- the node "fires," or sends signals along all of its outgoing connections.

Each of those outgoing connections, however, has an associated "weight," which can augment or diminish a signal. Each node in the next layer of the network receives weighted signals from multiple nodes in the first layer; it adds them together, and again, if their sum exceeds some threshold, it fires. Its outgoing signals pass to the next layer, and so on.
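
A minimal Python sketch can make this arithmetic concrete. The layer sizes, the random weights, and the zero thresholds below are illustrative assumptions, not values from the paper:

# Two layers of threshold-firing nodes, as described above.
import numpy as np

def layer_output(inputs, weights, threshold):
    # Each node sums its weighted incoming signals and "fires" (outputs 1)
    # only if that sum exceeds its threshold.
    weighted_sums = weights @ inputs
    return (weighted_sums > threshold).astype(float)

rng = np.random.default_rng(0)
x = rng.random(4)                    # signals arriving at the first layer of nodes
w1 = rng.normal(size=(3, 4))         # weights into a 3-node second layer
w2 = rng.normal(size=(2, 3))         # weights into a 2-node output layer

hidden = layer_output(x, w1, threshold=0.0)
output = layer_output(hidden, w2, threshold=0.0)
print(hidden, output)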

In artificial-intelligence applications, a neural network is "trained" on sample data, constantly adjusting its weights and firing thresholds until the output of its final layer consistently represents the solution to some computational problem.
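
As a generic illustration of that idea only (neither the task nor the update rule comes from the MIT work), the short sketch below trains a single threshold node with a classic perceptron-style adjustment until it computes the logical AND of its two inputs:

# Repeatedly nudge the weights and firing threshold until the node's
# output matches the desired answer on every training example.
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # sample input data
y = np.array([0., 0., 0., 1.])                          # desired outputs (AND)
w = np.zeros(2)                                         # connection weights
b = 0.0                                                 # plays the role of the firing threshold

for epoch in range(25):
    for xi, target in zip(X, y):
        fired = float(w @ xi + b > 0)   # threshold firing, as described above
        error = target - fired
        w += 0.1 * error * xi           # adjust the weights...
        b += 0.1 * error                # ...and the threshold, toward the target

print([float(w @ xi + b > 0) for xi in X])   # matches y once training settles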

Biological plausibility

Lynch, Parter, and Musco made several modifications to this design to make it more biologically plausible. The first was the addition of inhibitory "neurons." In a standard artificial neural network, the values of the weights on the connections are usually positive or capable of being either positive or negative. But in the brain, some neurons appear to play a purely inhibitory role, preventing other neurons from firing. The MIT researchers modeled those neurons as nodes whose connections have only negative weights.

Many artificial-intelligence applications also use "feed-forward" networks, in which signals pass through the network in only one direction, from the first layer, which receives input data, to the last layer, which provides the result of a computation. But connections in the brain are much more complex. Lynch, Parter, and Musco's circuit thus includes feedback: Signals from the output neurons pass to the inhibitory neurons, whose output in turn passes back to the output neurons. The signaling of the output neurons also feeds back on itself, which proves essential to enacting the winner-take-all strategy.

Finally, the MIT researchers' network is probabilistic. In a typical artificial neural net, if a node's input values exceed some threshold, the node fires. But in the brain, increasing the strength of the signal traveling over an input neuron only increases the chances that an output neuron will fire. The same is true of the nodes in the researchers' model. Again, this modification is crucial to enacting the winner-take-all strategy.
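
These modifications can be sketched together in a few lines of Python: connections from inhibitory nodes carry only negative weights, and firing is probabilistic, with a stronger total input only raising the chance that a node fires. The sigmoid firing rule and the particular weights below are assumptions made for illustration, not the firing function or parameters analyzed in the paper:

import numpy as np

rng = np.random.default_rng(2)

def fire_probability(total_input):
    # A stronger total input only increases the chance of firing.
    return 1.0 / (1.0 + np.exp(-total_input))

def fires(total_input):
    # Sample the firing decision: 1 with the probability above, else 0.
    return float(rng.random() < fire_probability(total_input))

excitatory_weight = 1.0     # connection from an ordinary, excitatory node
inhibitory_weight = -2.0    # connection from an inhibitory node: negative only

drive = excitatory_weight * 1.0               # the excitatory node is firing
inhibited = drive + inhibitory_weight * 1.0   # the inhibitory node fires too

print(fire_probability(drive), fires(drive))          # likely, but not certain, to fire
print(fire_probability(inhibited), fires(inhibited))  # much less likely to fire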

In the researchers' model, the number of input and output neurons is fixed, and the execution of the winner-take-all computation is purely the work of a bank of auxiliary neurons. "We are trying to see the trade-off between the computational time to solve a given problem and the number of auxiliary neurons," Parter explains. "We consider neurons to be a resource; we don't want to spend too much of it."

Inhibition's virtues

Parter and her colleagues were able to show that with only one inhibitory neuron, it's impossible, in the context of their model, to enact the winner-take-all strategy. But two inhibitory neurons are sufficient. The trick is that one of the inhibitory neurons -- which the researchers call a convergence neuron -- sends a strong inhibitory signal if more than one output neuron is firing. The other inhibitory neuron -- the stability neuron -- sends a much weaker signal as long as any output neurons are firing.

The convergence neuron drives the circuit to select a single output neuron, at which point the convergence neuron stops firing; the stability neuron prevents a second output neuron from becoming active once the convergence neuron has been turned off. The self-feedback circuits from the output neurons enhance this effect. The longer an output neuron has been turned off, the more likely it is to remain off; the longer it's been on, the more likely it is to remain on. Once a single output neuron has been selected, its self-feedback circuit ensures that it can overcome the inhibition of the stability neuron.

Without randomness, however, the circuit won't converge to a single output neuron: Any setting of the inhibitory neurons' weights will affect all the output neurons equally. "You need randomness to break the symmetry," Parter explains.
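
A toy round-by-round simulation helps show how these pieces fit together. The Python sketch below assumes particular weights, a sigmoid firing rule, and inhibitors that fire deterministically when their condition holds; those choices are illustrative only and are not the constructions or parameters proved correct in the paper:

import numpy as np

rng = np.random.default_rng(3)
n = 8                     # output neurons; all input neurons assumed to be firing

DRIVE = 1.0               # excitatory drive from the input neurons
SELF = 2.0                # excitatory self-feedback weight on each output
STABILITY = -2.0          # weak inhibition, active while any output fires
CONVERGENCE = -1.0        # extra inhibition, active while more than one output fires

def fires(potential):
    # Probabilistic firing: higher potential only raises the firing chance.
    return rng.random(potential.shape) < 1.0 / (1.0 + np.exp(-5.0 * potential))

active = np.zeros(n, dtype=bool)    # no output neuron is firing at the start
last_winner, stable = None, 0
for round_no in range(300):
    k = int(active.sum())
    stability_on = k >= 1           # the stability neuron's firing rule
    convergence_on = k >= 2         # the convergence neuron's firing rule
    potential = (DRIVE
                 + SELF * active
                 + STABILITY * stability_on
                 + CONVERGENCE * convergence_on)
    active = fires(potential)
    if int(active.sum()) == 1:
        winner = int(active.argmax())
        stable = stable + 1 if winner == last_winner else 1
        last_winner = winner
    else:
        last_winner, stable = None, 0
    if stable >= 5:                 # the same single output has fired five rounds running
        print(f"converged by round {round_no}: output neuron {last_winner} wins")
        break

In this toy version, when several outputs are active both inhibitors fire and each active output survives a round with probability of roughly one half, so randomness whittles the active set down; once a single output remains, the convergence neuron goes quiet, and the weak stability inhibition, which the winner's self-feedback overcomes, keeps the other outputs off.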

The researchers were able to determine the minimum number of auxiliary neurons required to guarantee a particular convergence speed and the maximum convergence speed possible given a particular number of auxiliary neurons.

Adding more convergence neurons increases the convergence speed, but only up to a point. For instance, with 100 input neurons, two or three convergence neurons are all you need; adding a fourth doesn't improve efficiency. And just one stability neuron is already optimal.

But perhaps more intriguingly, the researchers showed that including excitatory neurons -- neurons that stimulate, rather than inhibit, other neurons' firing -- as well as inhibitory neurons among the auxiliary neurons cannot improve the efficiency of the circuit. Similarly, any arrangement of inhibitory neurons that doesn't observe the distinction between convergence and stability neurons will be less efficient than one that does.

Assuming, then, that evolution tends to find efficient solutions to engineering problems, the model suggests both an answer to the question of why inhibitory neurons are found in the brain and a tantalizing question for empirical research: Do real inhibitory neurons exhibit the same division between convergence neurons and stability neurons?
-end-
Additional background

PAPER: Computational tradeoffs in biological neural networks: Self-stabilizing winner-take-all networks https://arxiv.org/pdf/1610.02084v1.pdf

ARCHIVE: Exploring networks efficiently http://news.mit.edu/2016/ant-colony-behavior-better-algorithms-network-communication-0713

ARCHIVE: New frontier in error-correcting codes http://news.mit.edu/2014/interactive-error-correcting-code-1002

ARCHIVE: New approach to vertex connectivity could maximize networks' bandwidth http://news.mit.edu/2013/new-approach-to-vertex-connectivity-could-maximize-networks-bandwidth-1224

ARCHIVE: Reliable communication, unreliable networks http://news.mit.edu/2013/reliable-communication-unreliable-networks-0806

Massachusetts Institute of Technology
