Improving the vision of self-driving vehicles

March 06, 2020

There may be a better way for autonomous vehicles to learn how to drive themselves: by watching humans. With the help of an improved vision system, self-driving cars could learn just by observing human operators complete the same task.

Researchers from Deakin University in Australia published their results in IEEE/CAA Journal of Automatica Sinica, a joint publication of the Institute of Electrical and Electronics Engineers (IEEE) and the Chinese Association of Automation.

The team implemented imitation learning, also called learning from demonstration. A human operator drives a vehicle outfitted with three cameras that observe the environment from the front and from each side of the car. The footage is then processed through a neural network -- a computer system modeled on how the brain's neurons interact to process information -- that allows the vehicle to make decisions based on what it has learned from watching the human make similar decisions.
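At its core, this kind of imitation learning is a supervised regression problem: each recorded camera frame is paired with the steering command the human issued at that moment, and the network is trained to reproduce that mapping. The PyTorch sketch below is a minimal illustration of the idea; the data, image size and small network are stand-ins, not the architecture the Deakin team actually used.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in demonstration data: RGB camera frames, each paired with
# the steering angle the human driver applied on that frame.
frames = torch.randn(512, 3, 66, 200)   # 512 fake 66x200 frames
angles = torch.randn(512, 1)            # 512 fake steering labels
loader = DataLoader(TensorDataset(frames, angles), batch_size=64)

# A small image-to-steering regressor (illustrative, not the paper's).
model = nn.Sequential(
    nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(36 * 14 * 47, 1),          # 14x47 feature grid -> angle
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Behavioral cloning: shrink the gap between the model's command and
# the human's command on the same frame.
for batch_frames, batch_angles in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(batch_frames), batch_angles)
    loss.backward()
    optimizer.step()
```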

"The expectation of this process is to generate a model solely from the images taken by the cameras," said paper author Saeid Nahavandi, Alfred Deakin Professor, pro vice-chancellor, chair of engineering and director for the Institute for Intelligent Systems Research and Innovation at Deakin University. "The generated model is then expected to drive the car autonomously."

The processing system is specifically a convolutional neural network, which is modeled on the brain's visual cortex. The network has an input layer, an output layer and any number of processing layers between them. The input layer condenses visual information into a compact grid of dots, which is continuously compared against the grids produced from earlier frames as new visual information comes in. By reducing the visual information this way, the network can quickly register changes in the environment: a shift of dots appearing ahead could indicate an obstacle in the road. This, combined with the knowledge gained from observing the human operator, means the algorithm knows that a sudden obstacle in the road should trigger the vehicle to come to a full stop to avoid an accident.
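To make the "grid of dots" description concrete, the toy snippet below (again hypothetical, in PyTorch) passes two successive frames through a single convolution layer and compares the resulting coarse activation grids; a large change in the grid is the kind of signal that could flag a sudden obstacle.

```python
import torch
from torch import nn

# One convolution layer condenses a full camera frame into a coarse
# grid of activations -- the "dots" described above.
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=5, stride=4)

frame_t0 = torch.randn(1, 3, 66, 200)   # stand-in frame at time t
frame_t1 = torch.randn(1, 3, 66, 200)   # stand-in frame at time t+1

dots_t0 = conv(frame_t0)                # shape (1, 8, 16, 49)
dots_t1 = conv(frame_t1)

# The condensed grids are far smaller than the raw pixels, so
# comparing successive frames is cheap; a large shift in the region
# ahead of the car could indicate a new obstacle.
change = (dots_t1 - dots_t0).abs().mean()
print(f"mean activation change between frames: {change.item():.3f}")
```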

"Having a reliable and robust vision is a mandatory requirement in autonomous vehicles, and convolutional neural networks are one of the most successful deep neural networks for image processing applications," Nahavandi said.

He noted a couple of drawbacks, however. One is a tension between the two techniques: imitation learning speeds up the training process and reduces the amount of training data required to produce a good model, yet convolutional neural networks still need a significant amount of training data to find an optimal configuration of layers and filters -- the components that organize the data -- and so produce a model capable of driving an autonomous vehicle.

"For example, we found that increasing the number of filters does not necessarily result in a better performance," Nahavandi said. "The optimal selection of parameters of the network and training procedure is still an open question that researchers are actively investigating worldwide." Next, the researchers plan to study more intelligent and efficient techniques, including genetic and evolutionary algorithms to obtain the optimum set of parameters to better produce a self-learning, self-driving vehicle.
-end-
Other contributors include Parham Kebria, Abbas Khosravi and Syed Moshfeq Salaken, all of whom are with the Institute for Intelligent Systems Research and Innovation at Deakin University in Australia.

The full text of the paper is available at: http://www.ieee-jas.org/en/article/doi/10.1109/JAS.2019.1911825

IEEE/CAA Journal of Automatica Sinica aims to publish high-quality, high-interest, far-reaching research achievements globally, and to provide an international forum for the presentation of original ideas and recent results related to all aspects of automation. Researchers (including globally highly cited scholars) from institutions all over the world, such as MIT, Yale University, Stanford University, the University of Cambridge and Princeton University, choose to share their research with a large audience through JAS.

IEEE/CAA Journal of Automatica Sinica is indexed in SCIE, EI, Scopus and elsewhere. The latest CiteScore is 5.31, ranked in the top 9% (22/232) of the "Control and Systems Engineering" category and in the top 10% of both the "Information Systems" (27/269) and "Artificial Intelligence" (20/189) categories. JAS is in the first quartile (Q1) of all three categories to which it belongs.

JAS papers can be found at http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6570654 or http://www.ieee-jas.org

Chinese Association of Automation
