Penn and Rutgers psychologists increase understanding of how the brain perceives shades of gray

November 10, 2011

PHILADELPHIA -- Vision is amazing because it seems so mundane. People's eyes, nerves and brains translate light into electrochemical signals and then into an experience of the world around them. A close look at the physics of just the first part of this process shows that even seemingly simple tasks, like keeping a stable perception of an object's color in different lighting conditions or distinguishing black and white objects, are, in fact, very challenging.

University of Pennsylvania psychologists, by way of a novel experiment, have now provided new insight into how the brain tackles this problem.

The research was conducted by professor David H. Brainard and post-doctoral fellow Ana Radonjić, both of the Department of Psychology in Penn's School of Arts and Sciences. They collaborated with Sarah R. Allred and Alan L. Gilchrist of Rutgers University's Department of Psychology.

Their research will be published in the journal Current Biology.

The process of seeing an object begins when light reflected off that object hits the light-sensitive structures in the eye. For shades of gray, the perceived lightness of an object depends on the object's reflectance. Objects that appear lighter reflect a larger percentage of light than those that appear darker; a white sheet of paper might reflect 90 percent of the light that hits it, while a black sheet of paper might only reflect 3 percent.

Interestingly, due to differences in illumination across a scene, the intensity of the light that comes from a surface to an observer's eye does not tell the observer about the surface's lightness. Although it might seem counterintuitive, a black sheet of paper in direct sunlight might reflect thousands of times more light into a person's eyes than a white object in the shade. To determine the shade of gray of a paper, the brain must therefore do more than measure the absolute intensity of light entering the eye.
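The point that absolute intensity alone cannot reveal surface lightness can be sketched numerically. The reflectance values below come from the article's paper example; the illumination levels are invented for illustration (direct sunlight is in fact several orders of magnitude brighter than deep shade):

```python
# Luminance reaching the eye = illumination x surface reflectance.
# Illumination values are illustrative placeholders (arbitrary units).
sunlight = 100_000.0  # direct sunlight
shade = 100.0         # deep shade

white_reflectance = 0.90  # white paper reflects ~90% of incident light
black_reflectance = 0.03  # black paper reflects ~3%

black_in_sun = sunlight * black_reflectance   # 3000.0 units at the eye
white_in_shade = shade * white_reflectance    # 90.0 units at the eye

# The black paper in sunlight sends over 30x more light to the eye
# than the white paper in shade, despite being 30x less reflective.
print(black_in_sun / white_in_shade)
```

The raw intensity signal thus inverts the true lightness ordering, which is why the brain must use more than absolute intensity.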

"The amazing fact about our brains is that they deliver a perception of objects that is stable over the huge range of light that gets to our eyes. We want to know how the brain takes the amount of light that gets to the eye and turns it into a perception that depends on the object rather than on that total amount of light," Brainard said. "If the brain couldn't do that, objects wouldn't have a stable appearance, and it would be a disaster."

One of the puzzling aspects of this capability is that the range of the reflectance of objects is relatively small, especially compared to the range of light intensities in images coming from the world. In the earlier example, the white paper is only 30 times as reflective as the black paper, but the absolute amount of light that they actually reflect can vary by a much greater degree.

"Within one snapshot, the intensity of light coming of the brightest portion of the image could be a million times greater than that coming from the darkest portion," Radonjić said. "The question is how does the visual system map the huge range of intensities within a single image onto the much smaller but meaningful range of surface lightnesses."

Indeed, it is this mismatch of ranges that presents one of the fundamental perceptual challenges for the brain. If it labeled the lightest part of an image "white" and a part 30 times darker "black," preserving the reflectance range, a tremendous number of still darker shades would all be indistinguishable from one another.
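A minimal sketch of this collapse, using the article's figures (a million-to-one image range, a 30-to-1 reflectance range) and a naive anchoring rule invented here for illustration:

```python
# Anchor "white" at the brightest image intensity and "black" at 1/30th
# of it, preserving the ~30:1 reflectance range. Anything dimmer than
# the black anchor cannot be told apart.
max_intensity = 1_000_000.0
black_anchor = max_intensity / 30.0  # ~33,333

sample_intensities = [1_000_000, 100_000, 10_000, 1_000, 10, 1]

labels = ["distinguishable" if i >= black_anchor else "crushed to black"
          for i in sample_intensities]

for i, label in zip(sample_intensities, labels):
    print(f"{i:>9}: {label}")
```

Under this rule only the top sliver of the image's intensity range survives; the remaining four sample intensities, spanning a factor of a thousand, all map to the same "black."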

One hypothesis is that the brain works around this problem by segmenting the image into separate regions of illumination, thereby reducing the range of luminance it must compare.

"If you can get all of the surfaces to be in the same region of illumination, then the reflectance range and luminance range will match, allowing the visual system to use within-region ratios to estimate surface lightness," Radonjić said.

To test whether this is indeed the mechanism at work, the researchers decided to push the limits of the visual system. They conducted an experiment where participants viewed images that, similar to real world images, had a very large range of light intensities -- as large as 10,000 to 1. Unlike natural images, however, those images did not contain any cues that would allow the visual system to segment them into separate regions of illumination.

To perform the experiment, the research team built a custom high-dynamic-range display. Participants were then asked to look at a 5x5 checkerboard composed of grayscale squares with random intensities spanning the 10,000-to-1 range. The participants were asked to report what shade of gray a target square looked like by selecting a match from a standardized gray scale.

If the visual system relied only on ratios to determine surface lightness, then the intensities of the checkerboard squares that participants matched to white and to black should have stood in the same ratio as the reflectances of the white and black samples on the matching scale, about 100 to 1. Instead, however, the researchers found that this ratio could be as much as 50 times higher, more than 5,000 to 1.
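The gap between the ratio-based prediction and the observed behavior can be stated in two lines. Both numbers are taken from the article; the comparison itself is just arithmetic:

```python
# Ratio-based prediction: intensities matched to white and black should
# differ by the same factor as the scale's reflectances, ~100:1.
predicted_ratio = 100.0

# Observed: matched intensities could differ by more than 5,000:1.
observed_ratio = 5_000.0

# The observed range exceeds the prediction by a factor of ~50,
# ruling out within-image intensity ratios as the sole mechanism.
print(observed_ratio / predicted_ratio)
```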

"We're pushing this visual system beyond the limit we think it normally has to deal with, and because people can still make discernments in this situation it means that the ratio hypothesis is not the only one that's at work. Our experiment blows that out of the water," Brainard said. "What seems to happen instead is that the visual system takes that huge intensity range and maps it gracefully onto grayscale values in a way that preserves one's ability to discern between shades across high ranges of light intensities."

While the experiment doesn't reveal the actual mechanism behind the brain's ability to reconcile the mismatch in ranges, it does suggest new avenues of vision research in both psychology and biology. Further experiments may show how these discernments are made, why the eyes and brain are able to keep making them even in situations beyond what can be encountered in the real world, and how the phenomena demonstrated in this experiment operate along with other visual mechanisms for images that incorporate more of the richness of the real world.
The research was supported by the National Institutes of Health and the National Science Foundation.

University of Pennsylvania
