Swinburne-led research team demonstrates world's fastest optical neuromorphic processor

January 06, 2021

An international team of researchers led by Swinburne University of Technology has demonstrated the world's fastest and most powerful optical neuromorphic processor for artificial intelligence (AI), which operates at more than 10 trillion operations per second (10 TeraOPs/s) and is capable of processing ultra-large-scale data.

Published in the prestigious journal Nature, this breakthrough represents an enormous leap forward for neural networks and neuromorphic processing in general.

Artificial neural networks, a key form of AI, can 'learn' and perform complex operations with wide applications to computer vision, natural language processing, facial recognition, speech translation, playing strategy games, medical diagnosis and many other areas. Inspired by the biological structure of the brain's visual cortex, artificial neural networks extract key features of raw data to predict properties and behaviour with unprecedented accuracy and simplicity.

Led by Swinburne's Professor David Moss, Dr Xingyuan (Mike) Xu (Swinburne, Monash University) and Distinguished Professor Arnan Mitchell from RMIT University, the team achieved an exceptional feat in optical neural networks: dramatically accelerating their computing speed and boosting their processing power.

The team demonstrated an optical neuromorphic processor operating more than 1000 times faster than any previous processor, with the system also processing record-sized, ultra-large-scale images: enough to achieve full facial image recognition, something that other optical processors have been unable to accomplish.

"This breakthrough was achieved with 'optical micro-combs', as was our world-record internet data speed reported in May 2020," says Professor Moss, Director of Swinburne's Optical Sciences Centre and recently named one of Australia's top research leaders in physics and mathematics in the field of optics and photonics by The Australian.

While state-of-the-art electronic processors such as the Google TPU can operate beyond 100 TeraOPs/s, they do so with tens of thousands of parallel processors. In contrast, the optical system demonstrated by the team uses a single processor, achieving its speed with a new technique that simultaneously interleaves the data in time, wavelength and spatial dimensions through an integrated micro-comb source.
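To make the interleaving technique concrete, here is a minimal numerical sketch in Python of the principle commonly described for photonic convolutional processing: each comb wavelength carries a weighted, time-delayed copy of the serialized input, and summing the channels at a photodetector produces a convolution. All names and sizes below are illustrative assumptions, not values from the paper.

    import numpy as np

    # Toy model of time-wavelength interleaved convolution. Each kernel
    # tap stands in for one micro-comb wavelength: the serialized input
    # is copied onto that wavelength, scaled by the tap weight, delayed
    # by one symbol per wavelength (the role dispersion plays), and all
    # wavelengths are summed at the photodetector.
    def interleaved_convolution(signal, kernel):
        out = np.zeros(len(signal) + len(kernel) - 1)
        for tap, weight in enumerate(kernel):
            out[tap:tap + len(signal)] += weight * signal  # one channel
        return out

    x = np.random.default_rng(0).random(16)  # serialized input, e.g. image pixels
    k = np.array([0.25, 0.5, 0.25])          # convolution kernel weights

    # The summed channels reproduce an ordinary digital convolution.
    assert np.allclose(interleaved_convolution(x, k), np.convolve(x, k))

In the device itself the weighted, delayed copies are formed and summed in analogue optics at the line rate, rather than step by step in digital logic, which is where the speed advantage comes from.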

Micro-combs are relatively new devices that act like a rainbow made up of hundreds of high-quality infrared lasers on a single chip. They are much faster, smaller, lighter and cheaper than any other optical source.
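A rough back-of-envelope calculation, with placeholder figures rather than the paper's operating point, shows why hundreds of comb lines matter: counting one multiply and one add per symbol per parallel channel, a modest single-channel symbol rate multiplied by a large optical channel count reaches the TeraOPs/s scale.

    # Back-of-envelope throughput with assumed, illustrative numbers
    # (not figures from the paper): each symbol on each parallel
    # channel contributes one multiply and one add, i.e. two operations.
    baud_rate = 60e9        # symbols per second on the input stream
    comb_channels = 90      # effective parallel wavelength channels
    ops_per_second = 2 * baud_rate * comb_channels
    print(f"{ops_per_second / 1e12:.1f} TeraOPs/s")  # prints: 10.8 TeraOPs/s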

"In the 10 years since I co-invented them, integrated micro-comb chips have become enormously important and it is truly exciting to see them enabling these huge advances in information communication and processing. Micro-combs offer enormous promise for us to meet the world's insatiable need for information," Professor Moss says.

"This processor can serve as a universal ultrahigh bandwidth front end for any neuromorphic hardware --optical or electronic based -- bringing massive-data machine learning for real-time ultrahigh bandwidth data within reach," says co-lead author of the study, Dr Xu, Swinburne alum and postdoctoral fellow with the Electrical and Computer Systems Engineering Department at Monash University.

"We're currently getting a sneak-peak of how the processors of the future will look. It's really showing us how dramatically we can scale the power of our processors through the innovative use of microcombs," Dr Xu explains.

RMIT's Professor Mitchell adds, "This technology is applicable to all forms of processing and communications, so it will have a huge impact. Long term, we hope to realise fully integrated systems on a chip, greatly reducing cost and energy consumption."

"Convolutional neural networks have been central to the artificial intelligence revolution, but existing silicon technology increasingly presents a bottleneck in processing speed and energy efficiency," says key supporter of the research team, Professor Damien Hicks, from Swinburne and the Walter and Elizabeth Hall Institute.

He adds, "This breakthrough shows how a new optical technology makes such networks faster and more efficient. It is a profound demonstration of the benefits of cross-disciplinary thinking: having the inspiration and courage to take an idea from one field and use it to solve a fundamental problem in another."
-end-
The international research collaboration was led by Professor David Moss (Swinburne), Dr Xingyuan (Mike) Xu (Swinburne and Monash) and Distinguished Professor Arnan Mitchell (RMIT), with key support from Mengxi Tan and Dr Jiayang Wu (Swinburne); Professor Damien Hicks (Swinburne and the Walter and Eliza Hall Institute (WEHI)); Andreas Boes and Thach G Nguyen (RMIT); Dr Bill Corcoran (Monash); Sai T Chu (City University of Hong Kong); Brent Little (Xi'an Institute of Optics and Precision Mechanics); and Roberto Morandotti (INRS in Montreal, Canada).

Media contact

Cherish Philip George, Communications and Media
Swinburne University of Technology
M +61 410 276 413 | E: cphilipgeorge@swin.edu.au
