Brain-inspired electronic system could vastly reduce AI's carbon footprint

August 27, 2020

Extremely energy-efficient artificial intelligence is now closer to reality after a study by UCL researchers found a way to improve the accuracy of a brain-inspired computing system.

The system, which uses memristors to create artificial neural networks, is at least 1,000 times more energy efficient than conventional transistor-based AI hardware, but has until now been more prone to error.

Existing AI is extremely energy-intensive - training one AI model can generate 284 tonnes of carbon dioxide, equivalent to the lifetime emissions of five cars. Replacing the transistors that make up all digital devices with memristors, a novel electronic device first built in 2008, could reduce this to a fraction of a tonne of carbon dioxide - equivalent to emissions generated in an afternoon's drive.

Since memristors are so much more energy-efficient than existing computing systems, they can potentially pack huge amounts of computing power into hand-held devices, removing the need to be connected to the Internet.

This is especially important as over-reliance on the Internet is expected to become problematic in future due to ever-increasing data demands and the difficulties of increasing data transmission capacity past a certain point.

In the new study, published in Nature Communications, engineers at UCL found that accuracy could be greatly improved by getting memristors to work together in several sub-groups of neural networks and averaging their calculations, meaning that flaws in each of the networks could be cancelled out.
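
The committee idea can be illustrated in a few lines of code. The sketch below is not the paper's implementation: it stands in for device variability with simple Gaussian noise, and the task, output scores and committee size are illustrative assumptions. It shows only the general principle that averaging the outputs of several independently imperfect networks cancels much of their error.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Ideal "software" outputs for a 10-class task: the true class gets a
# boosted score, so a noise-free network classifies almost all samples
# correctly.
n_samples, n_classes = 1000, 10
labels = rng.integers(0, n_classes, size=n_samples)
ideal = rng.normal(0.0, 1.0, size=(n_samples, n_classes))
ideal[np.arange(n_samples), labels] += 3.0

def noisy_network(outputs, noise_std=2.0):
    # Stand-in for one memristive network: ideal outputs corrupted by
    # device variability, modelled here as additive Gaussian noise.
    return outputs + rng.normal(0.0, noise_std, size=outputs.shape)

def accuracy(outputs):
    return float(np.mean(outputs.argmax(axis=1) == labels))

single = noisy_network(ideal)
committee = np.mean([noisy_network(ideal) for _ in range(9)], axis=0)

print(f"one noisy network:     {accuracy(single):.3f}")
print(f"committee of 9 (mean): {accuracy(committee):.3f}")
```

Because the nine copies' errors are independent, averaging shrinks their standard deviation by a factor of three (the square root of the committee size), which is why the committee recovers most of the noise-free accuracy.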

Memristors are described as "resistors with memory" because they remember the amount of electric charge that has flowed through them even after being powered off. When first built over a decade ago, they were considered revolutionary: the "missing link" in electronics to supplement the resistor, capacitor and inductor. They have since been manufactured commercially in memory devices, but the research team say they could be used to develop AI systems within the next three years.

Memristors offer vastly improved efficiency because they operate not just in a binary code of ones and zeros, but at multiple levels between zero and one, meaning more information can be packed into each device.
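
In information terms, a device that reliably distinguishes K levels stores log2(K) bits, compared with one bit for a binary cell. A brief illustration (the quantize helper below is hypothetical and only sketches a mapping of weights onto discrete levels; real programming schemes are device-specific):

```python
import numpy as np

# A binary cell resolves 2 states (1 bit); a memristor that reliably
# distinguishes K conductance levels stores log2(K) bits per device.
for levels in (2, 4, 16, 64):
    print(f"{levels:>2} levels -> {np.log2(levels):.0f} bits per device")

# Hypothetical helper: map a real-valued weight onto K discrete levels.
def quantize(w, levels, lo=-1.0, hi=1.0):
    step = (hi - lo) / (levels - 1)
    return lo + step * np.round((np.clip(w, lo, hi) - lo) / step)

print(quantize(np.array([-0.83, 0.10, 0.42]), levels=16))
```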

Moreover, memristor-based systems are often described as a neuromorphic (brain-inspired) form of computing because, as in the brain, processing and memory are implemented in the same adaptive building blocks, in contrast to current computer systems, which waste a lot of energy moving data between separate processors and memory.
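
This is the textbook picture of in-memory computing with a memristor crossbar: weights sit in place as device conductances, and applying input voltages yields output currents that, by Ohm's and Kirchhoff's laws, equal the matrix-vector product at the heart of a neural-network layer. The sketch below is idealised, ignoring wire resistance, sneak paths and device noise:

```python
import numpy as np

# Weights stored in place as conductances G (3 input rows x 2 output columns).
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.6, 0.1]])

# Inputs applied as row voltages; each column wire sums its devices'
# currents (Kirchhoff), each device contributing G * V (Ohm).
V = np.array([0.3, 0.7, 0.1])
I = G.T @ V  # output currents = the layer's matrix-vector product

print(I)  # [0.5  0.72]
```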

In the study, Dr Adnan Mehonic, PhD student Dovydas Joksas (both UCL Electronic & Electrical Engineering), and colleagues from the UK and the US tested the new approach on several different types of memristors and found that it improved the accuracy of all of them, regardless of material or particular memristor technology. It also proved robust against a number of different issues that can degrade memristors' accuracy.

The researchers found that their approach increased the accuracy of memristive neural networks on typical AI tasks to a level comparable with software tools run on conventional digital hardware.

Dr Mehonic, director of the study, said: "We hoped that there might be more generic approaches that improve not the device-level, but the system-level behaviour, and we believe we found one. Our approach shows that, when it comes to memristors, several heads are better than one. Arranging the neural network into several smaller networks rather than one big network led to greater accuracy overall."

Dovydas Joksas further explained: "We borrowed a popular technique from computer science and applied it in the context of memristors. And it worked! Using preliminary simulations, we found that even simple averaging could significantly increase the accuracy of memristive neural networks."

Professor Tony Kenyon (UCL Electronic & Electrical Engineering), a co-author on the study, added: "We believe now is the time for memristors, on which we have been working for several years, to take a leading role in a more energy-sustainable era of IoT devices and edge computing."
-end-


University College London
