Experiment shows how our visual system avoids overload

February 09, 2021

Researchers from HSE University have tested the hypothesis that the visual system can categorize objects automatically, i.e., without requiring attention. The results of a simple and elegant experiment confirmed this assumption. The paper was published in the journal Scientific Reports. The study was supported by a Russian Science Foundation grant.

Humans receive a great deal of information about their environment through vision: every day, we face a constant flow of varied visual stimuli. Processing this information requires cognitive resources, and, like a computer processor, the human brain can process and store only a limited amount of data. One hypothesis holds that, to avoid overload, the visual system 'lowers the resolution' of the incoming data. As a result of this 'compression', instead of analysing each observed object in detail, the visual system categorizes objects by simple general attributes, such as size. This 'primary data' can later be used for a more thorough analysis.

The researchers sought to answer the following question: is the visual system capable of categorizing objects automatically, i.e., without attention? They also tried to determine the conditions under which such automatic categorization works. As a marker of automatic sensory discrimination, they used the visual mismatch negativity (vMMN) component measured by electroencephalography (EEG). The vMMN is the difference between the brain's responses to a standard (frequent) and a deviant (rare) stimulus; its presence shows that the visual system has detected a difference between the stimuli and, importantly, that it did so without requiring attention.
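To make the logic of this marker concrete, here is a minimal Python/NumPy sketch of how a vMMN-style difference wave is obtained from EEG epochs: the averaged response to standard stimuli is subtracted from the averaged response to deviants. The array shapes, sampling rate and single-electrode setup are illustrative assumptions, not the authors' actual analysis pipeline.

    import numpy as np

    # Minimal illustration of a vMMN-style difference wave (not the authors' pipeline).
    # Assume baseline-corrected epochs from one occipital electrode,
    # shape (n_trials, n_samples), plus a label per trial: 'standard' or 'deviant'.

    def difference_wave(epochs: np.ndarray, labels: np.ndarray) -> np.ndarray:
        """Average ERP to deviants minus average ERP to standards."""
        erp_standard = epochs[labels == "standard"].mean(axis=0)
        erp_deviant = epochs[labels == "deviant"].mean(axis=0)
        return erp_deviant - erp_standard

    # Toy example: 700 trials, 600 ms epochs sampled at 500 Hz (300 samples).
    rng = np.random.default_rng(0)
    labels = rng.choice(["standard", "deviant"], size=700, p=[0.9, 0.1])
    epochs = rng.normal(size=(700, 300))    # stand-in for real EEG data
    vmmn = difference_wave(epochs, labels)  # in real data, a negative deflection is expected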

'We are fascinated and amazed by the human visual system's ability to categorize large numbers of objects. For example, when humans look at an apple tree, they immediately distinguish the apples from the leaves. This study shows that such quick categorization can be performed automatically, based on information about the differences between objects,' says Vladislav Khvostov, Junior Research Fellow at the HSE Laboratory for Cognitive Research, School of Psychology, and one of the paper's authors.

To study the automatic grouping of objects using vMMN, the researchers conducted a simple experiment with a filler task. Participants were asked to look at a small asymmetrical cross in the centre of the screen and press a button each time the cross changed its orientation; this kept their attention focused on the centre of the field. The cross was surrounded by rows of lines of varying length and orientation, and the combination of these parameters differed across experimental blocks. While the participants' attention was focused on the central figure, the researchers used EEG to record brain activity in response to the background visual stimulation. In each block, participants were shown 700 visual stimuli, each presented on the screen for 200 ms and followed by 400 ms of blank screen. Most stimuli had a fixed combination of line length and orientation (for example, long lines were steep and short lines were flat), but in 10% of cases the combination was reversed.
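As a rough illustration of the stimulation protocol described above, the following Python sketch builds one block of such an oddball sequence: 700 stimuli, each notionally shown for 200 ms and followed by 400 ms of blank screen, with roughly 10% deviants in which the length/orientation pairing is swapped. The function and parameter names are hypothetical and the concrete pairings follow the example in the text; this is not the authors' experimental code.

    import random

    STIM_DURATION_MS = 200
    BLANK_DURATION_MS = 400
    N_STIMULI = 700
    P_DEVIANT = 0.10

    STANDARD_PAIRING = {"long": "steep", "short": "flat"}
    DEVIANT_PAIRING = {"long": "flat", "short": "steep"}

    def build_block(seed: int = 0) -> list[dict]:
        """Return one block of stimuli with onset times and standard/deviant labels."""
        rng = random.Random(seed)
        block = []
        for i in range(N_STIMULI):
            is_deviant = rng.random() < P_DEVIANT
            block.append({
                "index": i,
                "type": "deviant" if is_deviant else "standard",
                "pairing": DEVIANT_PAIRING if is_deviant else STANDARD_PAIRING,
                "onset_ms": i * (STIM_DURATION_MS + BLANK_DURATION_MS),
            })
        return block

    block = build_block()
    print(sum(s["type"] == "deviant" for s in block), "deviants out of", len(block))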

As Vladislav Khvostov explains, the participants' only task was to press a button when the central cross rotated. Alongside the cross, they saw background visual stimulation consisting of lines of different lengths and orientations. In most cases (standard stimuli) the combination of length and orientation was fixed, for example long lines were flat and short ones were steep; in rare cases (deviant stimuli) the combination was reversed, so that long lines became steep and short ones flat. The participants paid no attention to these changes, but analysis of the EEG recordings showed that the visual system tracked them nonetheless.

The researchers were interested in the brain's reaction when a standard stimulus was replaced with a deviant one. A feature was called 'segmentable' if it took only two clearly distinct values (short/long for length; vertical/horizontal for orientation) and 'non-segmentable' if it also took intermediate values.
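This distinction can be illustrated with a small Python sketch, under the assumption that a segmentable feature takes only two clearly separated values while a non-segmentable feature also takes intermediate values spanning the range between them. The concrete numbers below are invented for illustration and are not those used in the study.

    import numpy as np

    rng = np.random.default_rng(1)

    # Segmentable length distribution: lines are either short (~1 deg) or long (~3 deg).
    segmentable_lengths = rng.choice([1.0, 3.0], size=100)

    # Non-segmentable length distribution: lengths spread evenly between the extremes.
    non_segmentable_lengths = rng.uniform(1.0, 3.0, size=100)

    print(np.unique(segmentable_lengths))        # only two distinct values
    print(non_segmentable_lengths.min(), non_segmentable_lengths.max())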

The researchers found a considerable visual mismatch negativity in response to deviant stimuli when both features were segmentable, or when only length was. Because the distributions of lengths and orientations were the same for all stimuli within each block, the categorization could not have been based on a single simple feature: the visual system must have categorized the lines by the combination of length and orientation. The experiment thus contradicts the assumption that the visual system categorizes objects only by individual simple features; it can solve a less trivial version of the task and use combinations of features.
-end-


National Research University Higher School of Economics
