
Brain learns to recognize familiar faces regardless of where they are in the visual field

November 08, 2018

A Dartmouth study finds that face recognition varies with where a face appears in the visual field, and that these biases are stable and idiosyncratic to each observer. Importantly, the biases shrink for more familiar identities, suggesting that the brain recognizes personally familiar faces more uniformly across the visual field. The findings suggest that repeated social interactions may tune populations of visual neurons in the face-processing network to enable consistent and rapid recognition of familiar faces. The study was published in eNeuro, an open-access journal of the Society for Neuroscience.

Prior research on human face recognition has often focused on how people perceive unfamiliar faces; this study is one of the first to examine how early visual processes may be tuned by regular social interactions to optimize the ability to recognize the faces of people who are important to us.

"For many of us, we spend most of our time with people we know, so understanding the underlying brain activity that enables us to recognize our friends, family, colleagues and peers, is essential to learning more about how we process relevant social stimuli," said senior author Maria Gobbini, an associate professor of psychological and brain sciences at Dartmouth.

To understand how the brain processes personally familiar faces across different retinal locations, study participants (graduate students) were shown photographs of two or three of their peers and asked to identify them. As participants stared at a central red dot on a computer screen, an image of a peer's face would flash briefly in one of eight peripheral locations. After the image disappeared, participants were prompted to identify which person they had seen. Following the experiment, they rated how well they knew each person in the images on a scale of one (not close) to seven (very close). The team then ran computational simulations to test the effect of learning in face-responsive cortical areas. The results of these simulations suggest that early areas in the face-processing pathway are more likely to show a visual-field bias that can be tuned by learning.
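For readers who want a concrete picture of the design, the short Python sketch below shows one way such a trial list could be built: identities crossed with eight peripheral positions around a central fixation point, then shuffled. The identity labels, number of repeats, and eccentricity value are illustrative assumptions, not details taken from the study.

# Minimal sketch of the trial structure described above -- not the authors' code.
# PEERS, N_REPEATS, and ECCENTRICITY_DEG are assumed, illustrative values.
import math
import random

PEERS = ["peer_A", "peer_B", "peer_C"]   # two or three familiar identities (assumed labels)
N_LOCATIONS = 8                          # eight peripheral positions around fixation
ECCENTRICITY_DEG = 7.0                   # assumed viewing eccentricity, in degrees

def peripheral_positions(n=N_LOCATIONS, ecc=ECCENTRICITY_DEG):
    """Evenly spaced positions on a circle around the central fixation dot."""
    return [(ecc * math.cos(2 * math.pi * k / n),
             ecc * math.sin(2 * math.pi * k / n)) for k in range(n)]

def build_trials(n_repeats=10, seed=0):
    """Cross identities with locations, repeat, and shuffle, as in a typical design."""
    rng = random.Random(seed)
    trials = [{"identity": p, "location": loc}
              for p in PEERS
              for loc in peripheral_positions()
              for _ in range(n_repeats)]
    rng.shuffle(trials)
    return trials

if __name__ == "__main__":
    trials = build_trials()
    print(f"{len(trials)} trials, e.g. {trials[0]}")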

Previous research on face recognition and identity has shown that the perception of gender and age also varies across retinal locations. For example, an androgynous face may appear female when shown in one visual location and male when shown in another. Gobbini and colleagues' new study found that the same type of variability across the visual field (idiosyncratic, retinotopic biases) occurs when identifying faces that are low in familiarity but is reduced for highly familiar faces. The results suggest that personally familiar faces may be detected in a prioritized way at an early stage of visual processing. "Much in the same way that the human language system is adapted and optimized to process an individual's native language, including auditory recognition of speech sounds, the face perception system is finely tuned in each individual for interaction with the people that play an important role in that person's life, and this tuning extends to learning at early stages of visual processing," concludes Gobbini.
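As a rough illustration of what "reduced variability across the visual field" means in practice, the sketch below computes identification accuracy separately for each screen location and uses the spread of those accuracies as a bias index. The trial data format and the choice of standard deviation as the index are assumptions made for illustration, not the published analysis.

# Illustrative sketch (not the paper's analysis): quantify a retinotopic
# identification bias as the spread of accuracy across the eight locations.
# The study's finding corresponds to this spread shrinking for identities
# rated as more familiar.
from collections import defaultdict
from statistics import mean, pstdev

def location_bias(trials):
    """trials: iterable of dicts with 'location' and 'correct' (bool) -- assumed format."""
    by_location = defaultdict(list)
    for t in trials:
        by_location[t["location"]].append(1.0 if t["correct"] else 0.0)
    accuracies = [mean(scores) for scores in by_location.values()]
    # Larger value = accuracy depends more strongly on visual-field position.
    return pstdev(accuracies)

# Usage idea: compare location_bias(low_familiarity_trials) with
# location_bias(high_familiarity_trials); a smaller value for the highly
# familiar identities would mirror the comparison reported in the study.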
-end-
Maria (Ida) Gobbini is available for comment at: Maria.I.Gobbini@dartmouth.edu.

Dartmouth College
