Researchers study why neural networks are efficient in their predictions

September 04, 2020

Artificial intelligence, machine learning and neural networks are terms that appear ever more often in daily life. Face recognition, object detection, and person classification and segmentation are common tasks for the machine learning algorithms now in widespread use. Underlying all of these applications is machine learning, which means that computers can capture the essential properties, or key characteristics, of processes in which the relationships between objects are highly complex. The learning process relies on good and bad examples alone, with no prior knowledge of the objects involved or of the underlying laws of physics.

However, because it is a blind optimization process, machine learning behaves like a black box: a computer makes decisions it regards as valid, but it is not understood why one decision is made rather than another, so the internal mechanism of the method remains unclear. As a result, predictions made by machine learning in critical situations are risky and far from reliable, because the results can be deceptive.

In this study, the research group made up of Vladimir Baulin, from the URV's Department of Chemical Engineering, Marc Werner (Leibniz Institute of Polymer Research in Dresden) and Yachong Guo (Nanjing University, China) tested the predictions of a neural network to check whether they coincide with actual results. To this end, they chose a well-defined practical example: the neural network had to design a polymer molecule that would cross the lipid membrane in as short a time as possible. The lipid membrane is a natural barrier that protects cells from damage and external components.

To check the neural network's prediction, the researchers developed a novel numerical method based on exhaustive enumeration, which determines all possible polymer compositions by programming high-performance graphics cards directly for parallel calculation. "The traditional processor of a computer contains at most 12-24 cores for calculations, but graphics cards are designed to process image and video pixels in parallel, and they have thousands of computing cores optimized for parallel calculations," explains Vladimir Baulin. This enormous computational power makes it possible to evaluate billions of polymer combinations in just seconds or minutes. In this way a map containing all the possible combinations can be generated, and how the neural network arrives at the correct result can be monitored.
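As a rough illustration of this kind of exhaustive GPU enumeration (a minimal sketch, not the authors' actual code): with a two-type monomer alphabet, a chain of N monomers has 2^N possible compositions, so each GPU thread can decode its own index as one candidate sequence and score it. The CUDA sketch below uses a toy scoring function (counting hydrophobic monomers) purely as a placeholder; in the study the quantity of interest would be the membrane-crossing time computed from the physical model.

// Minimal CUDA sketch of exhaustive enumeration (illustrative only).
// Each thread interprets its global index as one candidate polymer:
// bit i of the index says whether monomer i is hydrophobic (1) or
// hydrophilic (0). With N = 24 there are 2^24 candidates, one per thread.
#include <cstdio>
#include <cstdint>

constexpr int N = 24;                 // monomers per chain
constexpr uint32_t TOTAL = 1u << N;   // 2^24 = 16,777,216 candidate compositions

// Toy stand-in for the real objective: the number of hydrophobic monomers
// (set bits). The study instead scored each composition by how quickly it
// crosses the lipid membrane.
__device__ int score(uint32_t composition) {
    return __popc(composition);
}

// One thread per composition: evaluate it and tally how many candidates
// reach a given score threshold.
__global__ void enumerate(int threshold, unsigned long long *hits) {
    uint32_t idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= TOTAL) return;
    if (score(idx) >= threshold)
        atomicAdd(hits, 1ULL);
}

int main() {
    unsigned long long *hits;
    cudaMallocManaged(&hits, sizeof *hits);  // memory visible to CPU and GPU
    *hits = 0;
    const int threadsPerBlock = 256;
    const int blocks = (TOTAL + threadsPerBlock - 1) / threadsPerBlock;
    enumerate<<<blocks, threadsPerBlock>>>(18, hits);
    cudaDeviceSynchronize();
    printf("%llu of %u compositions reach the threshold\n", *hits, TOTAL);
    cudaFree(hits);
    return 0;
}

Because every candidate is evaluated, the output is a complete map of the composition space, so the polymer proposed by the neural network can simply be looked up in that map rather than taken on trust.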

"What is surprising is that such a simple, minimum network as the neural network can find the composition of a molecule," Baulin points out. "This is probably due to the fact that physical systems obey the laws of nature, which are intrinsically symmetrical and self-similar. This drastically reduces the number of possible parameter combinations that are then captured by the neural networks."

Therefore, comparing the neural network's result with the actual result not only makes it possible to check the prediction but also shows how the predictions evolve when the task is changed. And this, in turn, reveals how neural networks make decisions and how they "think".
-end-


Universitat Rovira i Virgili
