Accelerating AI computing to the speed of light

January 08, 2021

Artificial intelligence and machine learning are already an integral part of our everyday lives online. For example, search engines such as Google use intelligent ranking algorithms, and video streaming services such as Netflix use machine learning to personalize movie recommendations.

As the demands for AI online continue to grow, so does the need to speed up AI performance and find ways to reduce its energy consumption.

Now a University of Washington-led team has come up with a system that could help: an optical computing core prototype that uses phase-change material. This system is fast, energy efficient and capable of accelerating the neural networks used in AI and machine learning. The technology is also scalable and directly applicable to cloud computing.

The team published these findings Jan. 4 in Nature Communications.

"The hardware we developed is optimized to run algorithms of an artificial neural network, which is really a backbone algorithm for AI and machine learning," said senior author Mo Li, a UW associate professor of both electrical and computer engineering and physics. "This research advance will make AI centers and cloud computing more energy efficient and run much faster."

The team is among the first in the world to use phase-change material in optical computing to enable image recognition by an artificial neural network. Recognizing an image in a photo is easy for humans but computationally demanding for AI; because it is so computation-heavy, image recognition is a standard benchmark of a neural network's computing speed and precision. The team demonstrated that their optical computing core, running an artificial neural network, could easily pass this test.
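The operation that dominates this kind of neural-network inference, and that photonic hardware like the team's optical core carries out in the analog domain, is the matrix-vector multiplication inside each layer. As a rough software sketch only (toy values in NumPy, not the authors' implementation), a single layer looks like this:

```python
import numpy as np

# One dense neural-network layer reduces to a matrix-vector multiply
# followed by a nonlinearity: y = activation(W @ x + b).
# Weights and input below are arbitrary toy values for illustration.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # 4 neurons, each weighting 8 inputs
b = np.zeros(4)               # bias terms
x = rng.normal(size=8)        # e.g. a tiny flattened image patch

y = np.maximum(W @ x + b, 0)  # ReLU activation
print(y.shape)                # prints (4,)
```

An optical accelerator performs the `W @ x` step with light instead of digital arithmetic, which is why speeding up this one operation speeds up the whole network.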

"Optical computing first appeared as a concept in the 1980s, but then it faded in the shadow of microelectronics," said lead author Changming Wu, a UW electrical and computer engineering graduate student. "Now, because of the end of Moore's law, advances in integrated photonics and the demands of AI computing, it has been revamped. That's very exciting."
[Note: Moore's law is the observation that the number of transistors in a dense, integrated circuit doubles about every two years.]

Other co-authors are Seokhyeong Lee and Ruoming Peng at the UW, and Heshan Yu and Ichiro Takeuchi at the University of Maryland. This research was funded by the Office of Naval Research Multidisciplinary University Research Initiative. Some of this work was conducted at the Washington Nanofabrication Facility/Molecular Analysis Facility at the UW.

For more information, contact Li at

Grant numbers: N00014-17-1-2661

University of Washington
