Brain's method of creating mental images differs with the source of cues received

November 21, 2002

Berkeley - When the human brain is presented with conflicting information about an object from different senses, it finds a remarkably efficient way to sort out the discrepancies, according to new research conducted at the University of California, Berkeley.

The researchers found that when sensory cues from the hands and eyes differ from one another, the brain effectively splits the difference to produce a single mental image. The researchers describe the middle ground as a "weighted average" because in any given individual, one sense may have more influence than the other. When the discrepancy is too large, however, the brain reverts to information from a single cue - from the eyes, for instance - to make a judgment about what is true.
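
Written as a formula, the rule takes a standard form (the notation below is ours, added for illustration; it is not an equation quoted from the paper):

```latex
\hat{S} = w_V\,\hat{S}_V + w_H\,\hat{S}_H, \qquad w_V + w_H = 1
```

Here \hat{S}_V and \hat{S}_H are the separate estimates from vision and touch, and each weight reflects how much a given individual's brain trusts the corresponding sense for the judgment at hand.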

The findings, reported Friday, Nov. 22, in the journal Science, could spur advances in virtual reality programs and remote surgery applications, which rely upon accurately mimicking visual and haptic (touch) cues.

In a series of experiments, the researchers divided 12 subjects into two groups. One group received two different types of visual cues, while the other received visual and haptic cues. The visual-haptic group assessed three horizontal bars: two appeared equally thick to both the eye and the hand in every trial, while the third, from trial to trial, appeared thicker to one sense and thinner to the other. The group with two visual inputs assessed surface orientation: two surfaces appeared equally slanted according to both visual cues, while a third appeared more slanted according to one cue and less slanted according to the other.

To manipulate the sensory cues, the researchers used force-feedback technology to simulate touch and shutter glasses to simulate 3-D visual stimuli. Participants in the visual-haptic group inserted their thumb and forefinger into the force-feedback device to "feel" an object projected onto a computer monitor. Through the devices, they could both see and feel the virtual object.

"We found that when subjects grasped an object that felt 54 millimeters thick but looked as if it were 56 millimeters thick, their brains interpreted the object as being somewhere in between," said Jamie M. Hillis, lead author of the study and a former graduate student in vision science at UC Berkeley. Hillis, now a post-doctoral researcher in psychology at the University of Pennsylvania, worked on the research with Martin S. Banks, professor of optometry and psychology at UC Berkeley.

"If the brain is taking in different sensory cues and combining them to create one representation, then there could be an infinite number of combinations that the brain is perceiving to be the same," said Banks. "The brain perceives a block to be three inches tall, but was it because the eyes saw something that looked four inches tall while the hands felt something to be two inches tall? Or, was it really simply three inches tall? We wanted to know how much we could push that."

What the researchers found was that pushing the discrepancies too far resulted in the brain defaulting to signals from either the hands or the eyes, depending upon which seemed more accurate. That means the brain maintains three separate representations of an object's property: one from the combined visual and haptic cues, a second from the visual cues alone, and a third from the haptic cues alone.
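
One way to picture that cross-modal behavior, offered as a hedged sketch rather than the study's actual model (the conflict threshold and reliabilities are assumptions chosen for illustration):

```python
# The brain keeps three estimates - combined, visual-only and
# haptic-only - fusing them for small conflicts but reverting to the
# more reliable single cue when the discrepancy grows too large.
# The threshold is an illustrative assumption, not a measured value.

def perceive_thickness(visual_mm, haptic_mm, visual_var=1.0,
                       haptic_var=1.0, conflict_threshold_mm=5.0):
    w_v = 1.0 / visual_var
    w_h = 1.0 / haptic_var
    combined = (w_v * visual_mm + w_h * haptic_mm) / (w_v + w_h)
    if abs(visual_mm - haptic_mm) <= conflict_threshold_mm:
        return combined  # small conflict: split the difference
    # Large conflict: default to whichever single sense is more reliable.
    return visual_mm if visual_var <= haptic_var else haptic_mm
```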

What surprised the researchers was that this rule did not hold when the brain received discrepant cues from the same sense. In tests where participants used only their eyes, the researchers presented conflicting visual cues about the degree of slant in the surfaces appearing before them. One cue - binocular disparity - made a surface appear to slant in one direction, while the other cue - the texture gradient - indicated a different slant. Participants consistently perceived the "weighted average" of the two visual signals, no matter how far the cues differed.

"If the discrepant cues were both visual, the brain essentially threw the two individual estimates away, keeping only the single representation of the object's property," said Hillis.

Why would the brain behave differently when receiving information from two senses instead of one? "We rely upon our senses to tell us about the surrounding environment, including an object's size, shape and location," Hillis explained. "But sensory measurements are subject to error, and frequently one sensory measurement will differ from another."

"There are many instances where a person will be looking at one thing and touching another, so it makes sense for the brain to keep the information from those two sensory cues separate," Banks added. "Because people can't look at two different objects at the same time, the brain can more safely discard information from individual visual cues after they've been combined into one representation. The brain is efficient in that it doesn't waste energy maintaining information that it will not likely need in real life."

Banks said that understanding how the brain perceives various sensory inputs is vital to the development of virtual reality applications, such as remote surgery technology, in which what the eyes see and what the hands feel must accurately reflect reality.

"Imagine a future where the surgeon is in San Francisco, and the patient is in Nevada," said Banks. "The surgeon is looking at a monitor to manipulate a robot arm with a surgical instrument cutting into the patient. The surgeon feels the contact with the patient with a force-feedback device like the one we used in our experiment. Knowing how the brain combines visual and haptic stimuli is the first step in helping researchers develop better programs that provide accurate touch feedback to the physician so he or she can actually feel what's going on inside the patient."
-end-
Other co-authors of the paper are Marc O. Ernst, research scientist at the Max Planck Institute for Biological Cybernetics in Germany, and Michael S. Landy, professor of psychology at New York University. Ernst conducted the visual-haptic experiments for this study while he was a post-doctoral researcher in vision science at UC Berkeley. The research was supported by the Air Force Office of Scientific Research, the National Institutes of Health, the Max Planck Society and Silicon Graphics.

The vision research contributes to UC Berkeley's Health Sciences Initiative, a major push to find innovative solutions to today's health problems through interdisciplinary collaboration.

University of California - Berkeley
