
Researchers study why neural networks are efficient in their predictions

September 04, 2020

Artificial intelligence, machine learning and neural networks are terms that increasingly appear in daily life. Face recognition, object detection, and person classification and segmentation are common tasks for machine-learning algorithms that are now in widespread use. Underlying all of these applications is machine learning: computers learn to capture the essential properties of processes in which the relationships between objects are highly complex. The training process relies only on good and bad examples, with no prior knowledge of the objects or of the underlying laws of physics.

However, because it is a blind optimization process, machine learning behaves like a black box: computers make decisions they regard as valid, but it is not understood why one decision is taken rather than another, so the internal mechanism of the method remains unclear. As a result, predictions made by machine learning in critical situations are risky and not fully reliable, because the results can be deceptive.

In this study, a research group made up of Vladimir Baulin, from the URV's Department of Chemical Engineering, Marc Werner (Leibniz Institute of Polymer Research in Dresden) and Yachong Guo (Nanjing University, China) tested the predictions of a neural network to check whether they coincide with actual results. To this end, they chose a well-defined practical example: the neural network had to design a polymer molecule that would cross a lipid membrane in as short a time as possible. The lipid membrane is a natural barrier that protects cells from damage and external components. To verify the neural network's predictions, the researchers developed a novel numerical method based on exhaustive enumeration: it evaluates every possible polymer composition by programming high-performance graphics cards directly for parallel calculation. "The traditional processor of a computer can contain a maximum of 12-24 cores for calculations, but graphics cards are designed for parallel calculations on image and video pixels, and they have thousands of computing cores optimized for parallel work," explains Vladimir Baulin. This enormous computational power evaluates billions of polymer combinations in just seconds or minutes. In this way, a map containing all possible combinations can be generated, and how the neural network arrives at the correct result can be monitored.
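The exhaustive-enumeration idea can be sketched in a few lines of Python. This is a hedged illustration, not the authors' GPU code: the chain length, the two monomer types, and the `score` function are all stand-ins, whereas the real study scores each candidate by its simulated membrane-translocation time on graphics cards.

```python
import itertools

# Enumerate every composition of a short polymer built from two
# monomer types: 'H' (hydrophobic) and 'P' (polar). For a chain of
# length n there are 2**n candidate compositions.
CHAIN_LENGTH = 10

def score(sequence):
    """Stand-in fitness function (hypothetical): count adjacent
    monomer changes. The actual study instead evaluates the
    translocation time of each candidate across a lipid membrane."""
    return sum(1 for a, b in zip(sequence, sequence[1:]) if a != b)

# Brute-force map of the entire design space: composition -> score.
landscape = {
    seq: score(seq)
    for seq in itertools.product("HP", repeat=CHAIN_LENGTH)
}

best = max(landscape, key=landscape.get)
print("candidates evaluated:", len(landscape))  # 1024 for n = 10
print("best composition:", "".join(best))
```

On a GPU, each of the 2**n candidates would be handed to its own thread and scored in parallel; this sequential CPU version only makes the brute-force map explicit. With the full landscape in hand, the network's chosen design can be looked up and compared against the true optimum.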

"What is surprising is that such a simple, minimum network as the neural network can find the composition of a molecule," Baulin points out. "This is probably due to the fact that physical systems obey the laws of nature, which are intrinsically symmetrical and self-similar. This drastically reduces the number of possible parameter combinations that are then captured by the neural networks."

Therefore, comparing the neural network's output with the actual result not only makes it possible to verify the prediction but also shows how the predictions evolve when the task is changed. This, in turn, reveals how neural networks make decisions and how they "think".

Universitat Rovira i Virgili
