New algorithm could explain human face recognition

December 01, 2016

MIT researchers and their colleagues have developed a new computational model of the human brain's face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed.

The researchers designed a machine-learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face's degree of rotation -- say, 45 degrees from center -- but not the direction -- left or right.

This property wasn't built into the system; it emerged spontaneously from the training process. But it duplicates an experimentally observed feature of the primate face-processing mechanism. The researchers consider this an indication that their system and the brain are doing something similar.

"This is not a proof that we understand what's going on," says Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the Center for Brains, Minds, and Machines (CBMM), a multi-institution research consortium funded by the National Science Foundation and headquartered at MIT. "Models are kind of cartoons of reality, especially in biology. So I would be surprised if things turn out to be this simple. But I think it's strong evidence that we are on the right track."

Indeed, the researchers' new paper includes a mathematical proof that the particular type of machine-learning system they use, which was intended to offer what Poggio calls a "biologically plausible" model of the nervous system, will inevitably yield intermediate representations that are indifferent to angle of rotation.
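The flavor of that proof can be checked numerically in a toy setting. The sketch below is not the paper's actual construction; the stimulus model (a 1-D "face" whose bump position encodes head angle) and all parameters are invented for illustration. The key property is that the set of views is closed under mirror reflection, so Hebbian-style learning, which converges to the principal components of its input, produces components whose tuning curves are even or odd in the angle, and whose squared responses are therefore mirror-symmetric.

```python
import numpy as np

# Illustrative toy stimulus (not the paper's construction): a 1-D "face"
# is a Gaussian bump whose position encodes the head rotation angle theta,
# so spatially flipping the image corresponds to theta -> -theta.
grid = np.linspace(-1, 1, 101)
thetas = np.linspace(-90, 90, 37)

def view(theta):
    return np.exp(-(grid - theta / 90 * 0.8) ** 2 / (2 * 0.05))

X = np.stack([view(t) for t in thetas])   # rows: views, columns: "pixels"

# Hebbian learning converges to the principal components of its input,
# so PCA (via the SVD) stands in for the trained synaptic weights.
X = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)

# Because the view set is closed under reflection, each component's tuning
# curve c_i(theta) is even or odd, so its squared response (a crude proxy
# for firing rate) is the same at +theta and -theta: mirror symmetry.
responses = X @ Vt[:3].T                  # tuning curves for top 3 components
for i in range(3):
    c2 = responses[:, i] ** 2
    assert np.allclose(c2, c2[::-1], atol=1e-6)
```

The assertion compares each squared tuning curve with its angle-reversed copy; the match follows from the reflection symmetry of the training set, not from anything face-specific.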

Poggio, who is also a principal investigator at MIT's McGovern Institute for Brain Research, is the senior author on a paper describing the new work, which appeared today in the journal Current Biology. He is joined on the paper by several other members of both the CBMM and the McGovern Institute: first author Joel Leibo, a researcher at Google DeepMind who earned his PhD in brain and cognitive sciences from MIT with Poggio as his advisor; Qianli Liao, an MIT graduate student in electrical engineering and computer science; Fabio Anselmi, a postdoc in the IIT@MIT Laboratory for Computational and Statistical Learning, a joint venture of MIT and the Italian Institute of Technology; and Winrich Freiwald, an associate professor at the Rockefeller University.

Emergent properties

The new paper is "a nice illustration of what we want to do in [CBMM], which is this integration of machine learning and computer science on one hand, neurophysiology on the other, and aspects of human behavior," Poggio says. "That means not only what algorithms does the brain use, but what are the circuits in the brain that implement these algorithms."

Poggio has long believed that the brain must produce "invariant" representations of faces and other objects, meaning representations that are indifferent to objects' orientation in space, their distance from the viewer, or their location in the visual field. Magnetic resonance scans of human and monkey brains suggested as much, but in 2010, Freiwald published a study describing the neuroanatomy of macaque monkeys' face-recognition mechanism in much greater detail.

Freiwald showed that information from the monkey's optic nerves passes through a series of brain locations, each of which is less sensitive to face orientation than the last. Neurons in the first region fire only in response to particular face orientations; neurons in the final region fire regardless of the face's orientation -- an invariant representation.

But neurons in an intermediate region appear to be "mirror symmetric": That is, they're sensitive to the angle of face rotation but not to its direction. In the first region, one cluster of neurons will fire if a face is rotated 45 degrees to the left, and a different cluster will fire if it's rotated 45 degrees to the right. In the final region, the same cluster of neurons will fire whether the face is rotated 30 degrees, 45 degrees, 90 degrees, or anywhere in between. But in the intermediate region, a particular cluster of neurons will fire if the face is rotated by 45 degrees in either direction, another if it's rotated 30 degrees, and so on.
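The three response profiles described above can be summarized as toy tuning curves. Everything here is hypothetical and for illustration only: the functional forms, preferred angles, and widths are invented, not fitted to Freiwald's data.

```python
import numpy as np

# Toy tuning curves (hypothetical, for illustration only) for the three
# stages, as functions of head rotation angle theta in degrees.
def view_specific(theta, preferred=45, width=15):
    # Early stage: fires only near one signed orientation, e.g. +45.
    return np.exp(-((theta - preferred) / width) ** 2)

def mirror_symmetric(theta, preferred=45, width=15):
    # Intermediate stage: same response at +preferred and -preferred.
    return view_specific(abs(theta), preferred, width)

def invariant(theta):
    # Final stage: fires regardless of orientation.
    return np.ones_like(np.asarray(theta, dtype=float))

# A view-specific cell distinguishes left from right; a mirror-symmetric
# cell does not.
assert view_specific(45) > view_specific(-45)
assert np.isclose(mirror_symmetric(45), mirror_symmetric(-45))
```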

This is the behavior that the researchers' machine-learning system reproduced. "It was not a model that was trying to explain mirror symmetry," Poggio says. "This model was trying to explain invariance, and in the process, there is this other property that pops out."

Neural training

The researchers' machine-learning system is a neural network, so called because it roughly approximates the architecture of the human brain. A neural network consists of very simple processing units, arranged into layers, that are densely connected to the processing units -- or nodes -- in the layers above and below. Data are fed into the bottom layer of the network, which processes them in some way and feeds them to the next layer, and so on. During training, the output of the top layer is correlated with some classification criterion -- say, correctly determining whether a given image depicts a particular person.
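The layered architecture described above can be sketched in a few lines. This is a generic feedforward network, not the authors' specific model; the layer sizes, random weights, and ReLU nonlinearity are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch of a layered network: each layer applies a weight matrix
# and a nonlinearity, then passes the result up to the next layer.
layer_sizes = [64, 32, 16, 1]   # input -> two hidden layers -> output
weights = [rng.normal(scale=0.1, size=(m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for W in weights[:-1]:
        x = np.maximum(W @ x, 0.0)   # simple ReLU nonlinearity
    # Top layer: a score that training would push toward the classification
    # criterion (e.g. "is this a particular person's face?").
    return weights[-1] @ x

score = forward(rng.normal(size=64))
assert score.shape == (1,)
```

During training, the weight matrices would be adjusted so that the top-layer score matches the desired classification; here they are left random since only the data flow is being illustrated.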

In earlier work, Poggio's group had trained neural networks to produce invariant representations by, essentially, memorizing a representative set of orientations for just a handful of faces, which Poggio calls "templates." When the network was presented with a new face, it would measure the new face's difference from these templates. That difference would be smallest for the templates whose orientations were the same as that of the new face, and the output of their associated nodes would end up dominating the information signal by the time it reached the top layer. The measured differences between the new face and the stored faces give the new face a kind of identifying signature.
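The template scheme can be sketched as follows. This is a loose illustration, not the group's actual model: faces are stand-in random feature vectors, and the signature here keeps, for each template face, only the distance to its best-matching stored orientation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a handful of "template" faces, each memorized at
# several orientations; a face image is reduced to a feature vector.
n_features, n_templates, n_orientations = 64, 5, 9
templates = rng.normal(size=(n_templates, n_orientations, n_features))

def signature(face):
    # Distance from the new face to every stored view of every template.
    dists = np.linalg.norm(templates - face, axis=2)  # (templates, orientations)
    # Keeping only each template's best-matching orientation yields a
    # per-template value that is (approximately) independent of the new
    # face's own orientation -- the face's identifying signature.
    return dists.min(axis=1)                          # (templates,)

probe = rng.normal(size=n_features)
sig = signature(probe)
assert sig.shape == (n_templates,)
```

The invariance comes from the minimum over orientations: rotating the probe face changes which stored view matches best, but not (much) the value of the best match.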

In experiments, this approach produced invariant representations: A face's signature turned out to be roughly the same no matter its orientation. But the mechanism -- memorizing templates -- was not, Poggio says, biologically plausible.

So instead, the new network uses a variation on Hebb's rule, which is often described in the neurological literature as "neurons that fire together wire together." That means that during training, as the weights of the connections between nodes are being adjusted to produce more accurate outputs, nodes that react in concert to particular stimuli end up contributing more to the final output than nodes that react independently (or not at all).
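A standard normalized variant of Hebb's rule is Oja's rule, shown below as a minimal sketch (the paper's actual learning rule may differ). The weight vector is nudged toward inputs that co-activate with the output ("fire together, wire together"), and it converges to the input distribution's top principal component, which is what links Hebbian training to the invariance argument above.

```python
import numpy as np

rng = np.random.default_rng(0)

def oja_train(X, lr=0.001, epochs=200):
    # Oja's rule: Hebbian growth (y * x) with a decay term (y^2 * w)
    # that keeps the weight vector from blowing up.
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                   # postsynaptic activity
            w += lr * y * (x - y * w)   # "fire together, wire together"
    return w

# Inputs with much more variance along the first axis: the learned weight
# aligns with that dominant direction (the top principal component).
X = rng.normal(size=(200, 2)) @ np.diag([3.0, 0.3])
w = oja_train(X)
assert abs(w[0]) / np.linalg.norm(w) > 0.9
```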

This approach, too, ended up yielding invariant representations. But the middle layers of the network also duplicated the mirror-symmetric responses of the intermediate visual-processing regions of the primate brain.
Massachusetts Institute of Technology
