Despite decades of research, the mechanisms behind fast flashes of insight that change how a person perceives their world, termed “one-shot learning,” have remained unknown. One mysterious type of one-shot learning is perceptual learning, in which seeing something just once dramatically alters our ability to recognize it again.
Now a new study, led by researchers at NYU Langone Health, addresses the moments when we first recognize a blurry object, a primal ability that enabled our ancestors to avoid threats.
Published online February 4 in Nature Communications, the new work pinpoints for the first time the brain region called the high-level visual cortex (HLVC) as the place where “priors” — images seen in the past and stored — are accessed to enable one-shot perceptual learning.
“Our work revealed, not just where priors are stored, but also the brain computations involved,” said co-senior study author Biyu He, Ph.D., associate professor in the departments of Neurology, Neuroscience, and Radiology at NYU Grossman School of Medicine.
Importantly, past studies had shown that patients with schizophrenia and Parkinson’s disease have abnormal one-shot learning, such that previously stored priors overwhelm what a person is presently looking at to generate hallucinations.
“This study yielded a directly testable theory of how priors go awry during hallucinations, and we are now investigating the related brain mechanisms in patients with neurological disorders to reveal what goes wrong,” added Dr. He.
The research team is also looking into likely connections between the brain mechanisms behind visual perception and the better-known type of “aha moment” when we comprehend a new idea.
Sharper Image
For the study, Dr. He’s team explored changes in brain activity triggered when people are shown Mooney images — faded pictures of animals and objects. Specifically, study participants were shown blurred images of an object and then a clear version. In Dr. He’s 2018 study of this process, after seeing the clear version (and experiencing one-shot learning), subjects became twice as good at recognizing the blurred images because the experiment forced them to use their stored priors.
The researchers “took pictures” of brain activity during prior access using functional magnetic resonance imaging (fMRI), which measures brain cell activity by tracking blood flow to active cells. Signaling strength along nerve pathways (plasticity), however, is fine-tuned in the structural spaces (synapses) between brain cells, and fMRI can only measure activity within cells.
For that reason, the researchers combined fMRI with behavioral tests using Mooney images, electroencephalography (EEG) brain recordings, and a model based on machine learning — a form of AI — to locate priors in the HLVC.
To find the seat of one-shot perceptual learning, the research team first determined what kind of information is encoded in signaling changes as prior access improves image recognition. To do so, the team changed the size of images, their position on the page, and their orientation (by rotating them), and recorded the effect of each change on image recognition rates. This behavioral study revealed that changes in the image size did not change one-shot learning, while rotating an image or changing its position partially decreased learning. The results suggested that perceptual priors encode previously seen patterns but not more abstract concepts (e.g., the breed of a dog in an image).
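The three stimulus manipulations described above (size, position, and orientation) can be sketched in miniature with numpy. This is an illustrative approximation, not the study’s actual stimulus pipeline, and all function names are hypothetical:

```python
import numpy as np

def rescale(img, factor):
    # Change image size by an integer factor via pixel repetition.
    return np.kron(img, np.ones((factor, factor)))

def translate(img, dx, dy):
    # Change image position by shifting content, wrapping at the edges.
    return np.roll(img, shift=(dy, dx), axis=(0, 1))

def rotate90(img, quarter_turns):
    # Change image orientation in 90-degree steps.
    return np.rot90(img, quarter_turns)

# A stand-in for a two-tone, Mooney-style image.
mooney = (np.arange(64).reshape(8, 8) % 3 == 0).astype(float)
```

In the behavioral study, recognition rates were compared across such transformed versions: size changes left one-shot learning intact, while rotation and position changes partially reduced it.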
The team then created statistical models that captured brain cell activity patterns via fMRI during prior access, and found that only the known neural coding patterns in the high-level visual cortex matched the properties of the priors revealed by the behavioral study. The authors also probed the timing of these activity changes using intracranial electroencephalography (iEEG), asking patients already undergoing iEEG monitoring during neurosurgical treatment to perform a short perceptual task. iEEG collects readouts from electrodes placed on brain tissue to measure fast-changing signaling patterns that fMRI cannot capture. The HLVC showed the earliest changes in neural signaling strength just as prior-guided object recognition occurred.
As a final step, the research team built a vision transformer — an artificial intelligence model that finds patterns in image parts and fills in what is missing based on probabilities. Just as the HLVC was found to add prior weight to information coming in from the eyes, the AI model stored accumulated image information (priors) in one module, and then used the stored data to better recognize incoming imaging data in another module. Once trained on enough images, the neural network model achieved one-shot learning capability like that seen in humans, and better than other leading AI models without a comparable prior module.
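The two-module design described above can be illustrated with a toy sketch: a prior-storage module plus a recognition step that blends incoming features with the best-matching stored prior. This is plain numpy, not the authors’ vision transformer; the class, function names, and blending rule are hypothetical simplifications:

```python
import numpy as np

class PriorMemory:
    """Toy module that stores feature vectors of previously seen (clear) images."""
    def __init__(self):
        self.priors = []

    def store(self, feat):
        # Keep a normalized copy of the clear image's features.
        self.priors.append(feat / np.linalg.norm(feat))

    def retrieve(self, feat):
        # Return the stored prior most similar (by cosine) to the incoming features.
        f = feat / np.linalg.norm(feat)
        sims = [p @ f for p in self.priors]
        return self.priors[int(np.argmax(sims))]

def recognize(feat, memory, weight=0.5):
    # Second module: blend noisy incoming features with the retrieved prior,
    # mimicking how stored priors add weight to information from the eyes.
    prior = memory.retrieve(feat)
    f = feat / np.linalg.norm(feat)
    return weight * prior + (1 - weight) * f
```

In this miniature, a degraded version of a stored image retrieves its clear prior, and the blended representation lies closer to the clear version than the degraded input alone does, loosely mirroring prior-guided recognition.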
“Although AI has made great progress in object recognition over the past decade, no tool has yet been capable of one-shot learning like humans,” added co-senior author Eric Oermann, M.D., assistant professor in the Departments of Neurosurgery and Radiology at NYU Langone. “We now anticipate the development of AI models with human-like perceptual mechanisms that classify new objects or learn new tasks with few or no training examples. This is more evidence of a growing convergence between computational neuroscience and advances in AI.”
Along with Drs. He and Oermann, authors included first authors Ayaka Hachisuka and Jonathan Shor of the NYU Langone Institute for Translational Neuroscience, and first author Xujin Chris Liu of the NYU Tandon School of Engineering. Other authors from NYU Langone Health are Daniel Friedman, Patricia Dugan, and Orrin Devinsky in the Department of Neurology, and Werner Doyle in the Department of Neurosurgery. Author Yao Wang is in the Department of Electrical and Computer Engineering at the NYU Tandon School of Engineering. Authors from the Icahn School of Medicine at Mount Sinai are Ignacio Saez in the Department of Neuroscience and Fedor Panov in the Department of Neurosurgery.
This work was supported by a W.M. Keck Foundation medical research grant, National Science Foundation grant BCS-1926780, and the NYU Grossman School of Medicine. Oermann holds equity in Artisight Inc., Delvi Inc., and Eikon Therapeutics, and has consulting arrangements with Google Inc. and Sofinnova Partners. These relationships are managed in accordance with New York University policies.
About NYU Langone Health
NYU Langone Health is a fully integrated health system that consistently achieves the best patient outcomes through a rigorous focus on quality that has resulted in some of the lowest mortality rates in the nation. Vizient, Inc., has ranked NYU Langone No. 1 out of 118 comprehensive academic medical centers across the nation for four years in a row, and U.S. News & World Report recently ranked four of its clinical specialties No. 1 in the nation. NYU Langone offers a comprehensive range of medical services with one high standard of care across seven inpatient locations, its Perlmutter Cancer Center, and more than 320 outpatient locations in the New York area and Florida. The system also includes two tuition-free medical schools, in Manhattan and on Long Island, and a vast research enterprise.
Journal: Nature Communications
Method of Research: Experimental study
Subject of Research: People
Article Title: Neural and computational mechanisms underlying one-shot perceptual learning in humans
Article Publication Date: 4-Feb-2026