Rice engineers offer smart, timely ideas for AI bottlenecks

June 11, 2020

HOUSTON -- (June 11, 2020) -- Rice University researchers have demonstrated methods for both designing innovative data-centric computing hardware and co-designing hardware with machine-learning algorithms that together can improve energy efficiency by as much as two orders of magnitude.

Advances in machine learning, the form of artificial intelligence behind self-driving cars and many other high-tech applications, have ushered in a new era of computing -- the data-centric era -- and are forcing engineers to rethink aspects of computing architecture that have gone mostly unchallenged for 75 years.

"The problem is that for large-scale deep neural networks, which are state-of-the-art for machine learning today, more than 90% of the electricity needed to run the entire system is consumed in moving data between the memory and processor," said Yingyan Lin, an assistant professor of electrical and computer engineering.

Lin and collaborators proposed two complementary methods for optimizing data-centric processing, both of which were presented June 3 at the International Symposium on Computer Architecture (ISCA), one of the premier conferences for new ideas and research in computer architecture.

The drive for data-centric architecture is related to a problem called the von Neumann bottleneck, an inefficiency that stems from the separation of memory and processing in the computing architecture that has reigned supreme since mathematician John von Neumann described it in 1945. By separating memory from programs and data, von Neumann architecture allows a single computer to be incredibly versatile; depending upon which stored program is loaded from its memory, a computer can be used to make a video call, prepare a spreadsheet or simulate the weather on Mars.

But separating memory from processing also means that even simple operations, like adding 2 plus 2, require the computer's processor to access the memory multiple times. This memory bottleneck is made worse by the massive number of operations in deep neural networks, systems that learn to make humanlike decisions by "studying" large numbers of previous examples. The larger the network, the more difficult the task it can master, and the more examples the network is shown, the better it performs. Training a deep neural network can require banks of specialized processors that run around the clock for more than a week. Performing tasks with the trained networks -- a process known as inference -- on a smartphone can drain its battery in less than an hour.
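The gap between compute energy and data-movement energy can be sketched with a back-of-envelope calculation. The per-operation energy figures below are illustrative assumptions (rough published estimates for older chip technologies), not numbers from the Rice papers:

```python
# Back-of-envelope sketch of why data movement dominates.
# Energy figures are illustrative assumptions, not measurements
# from the TIMELY or SmartExchange papers.
E_MAC_PJ = 3.0      # one multiply-accumulate operation, picojoules (assumed)
E_DRAM_PJ = 640.0   # one 32-bit main-memory (DRAM) access, picojoules (assumed)

def layer_energy_pj(n_macs, n_dram_accesses):
    """Split a layer's energy between arithmetic and off-chip data movement."""
    compute = n_macs * E_MAC_PJ
    movement = n_dram_accesses * E_DRAM_PJ
    return compute, movement

# A fully connected layer with 1,000 x 1,000 weights, naively
# fetching every weight from DRAM once:
compute, movement = layer_energy_pj(n_macs=1_000_000,
                                    n_dram_accesses=1_000_000)
print(f"compute: {compute/1e6:.1f} uJ, movement: {movement/1e6:.1f} uJ")
# -> compute: 3.0 uJ, movement: 640.0 uJ
print(f"movement / compute = {movement/compute:.0f}x")
# -> movement / compute = 213x
```

Under these assumed figures, moving the data costs roughly 200 times more than computing with it, which is the imbalance both Rice designs attack.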

"It has been commonly recognized that for the data-centric algorithms of the machine-learning era, we need innovative data-centric hardware architecture," said Lin, the director of Rice's Efficient and Intelligent Computing (EIC) Lab. "But what is the optimal hardware architecture for machine learning?

"There are no one-size-fits-all answers, as different applications require machine-learning algorithms that might differ a lot in terms of algorithm structure and complexity, while having different task accuracy and resource consumption -- like energy cost, latency and throughput -- tradeoff requirements," she said. "Many researchers are working on this, and big companies like Intel, IBM and Google all have their own designs."

One of the presentations from Lin's group at ISCA 2020 offered results on TIMELY, an innovative architecture she and her students developed for "processing in-memory" (PIM), a non-von Neumann approach that brings processing into memory arrays. A promising PIM platform is "resistive random access memory" (ReRAM), a nonvolatile memory similar to flash. While other ReRAM PIM accelerator architectures have been proposed, Lin said experiments run on more than 10 deep neural network models found TIMELY was 18 times more energy efficient and delivered more than 30 times the computational density of the most competitive state-of-the-art ReRAM PIM accelerator.

TIMELY, which stands for "Time-domain, In-Memory Execution, LocalitY," achieves its performance by eliminating the major contributors to inefficiency: frequent accesses to main memory to handle intermediate inputs and outputs, and the data conversions required at the interface between local and main memories.

In the main memory, data is stored digitally, but it must be converted to analog when it is brought into the local memory for processing in-memory. In prior ReRAM PIM accelerators, the resulting values are converted from analog to digital and sent back to the main memory. If they are called from the main memory to local ReRAM for subsequent operations, they are converted to analog yet again, and so on.

TIMELY avoids paying overhead for both unnecessary accesses to the main memory and interfacing data conversions by using analog-format buffers within the local memory. In this way, TIMELY mostly keeps the required data within local memory arrays, greatly enhancing efficiency.
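A toy accounting model makes the savings concrete. This is a hypothetical sketch, not the actual TIMELY circuitry: it simply counts how many digital-to-analog and analog-to-digital conversions a chain of in-memory operations pays under each flow described above:

```python
# Toy model of interface-conversion overhead in ReRAM
# processing-in-memory. Hypothetical accounting only -- it counts
# conversions, it does not model the TIMELY circuits themselves.

def conversions_baseline(n_ops):
    """Prior PIM flow: each operation converts digital->analog on the
    way into local memory and analog->digital on the way back out."""
    return 2 * n_ops

def conversions_timely(n_ops):
    """TIMELY-style flow: analog buffers keep intermediate results
    local, so only the first input and the final output cross the
    digital/analog interface."""
    return 2  # one D->A at the start, one A->D at the end

for n in (1, 10, 100):
    print(f"{n:>3} ops: baseline {conversions_baseline(n):>3} "
          f"conversions, analog-buffered {conversions_timely(n)}")
```

In this sketch the baseline's conversion count grows linearly with the length of the operation chain, while the analog-buffered flow pays a constant cost, which is one reason keeping data in local analog form pays off for deep networks with many layers.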

The group's second proposal at ISCA 2020 was for SmartExchange, a design that marries algorithmic and accelerator hardware innovations to save energy.

"It can cost about 200 times more energy to access the main memory -- the DRAM -- than to perform a computation, so the key idea for SmartExchange is enforcing structures within the algorithm that allow us to trade higher-cost memory for much-lower-cost computation," Lin said.

"For example, let's say our algorithm has 1,000 parameters," she added. "In a conventional approach, we will store all 1,000 in DRAM and access them as needed for computation. With SmartExchange, we search to find some structure within this 1,000. We then need to store only 10, because if we know the relationship between these 10 and the remaining 990, we can compute any of the 990 rather than calling them up from DRAM.

"We call these 10 the 'basis' subset, and the idea is to store these locally, close to the processor to avoid or aggressively reduce having to pay costs for accessing DRAM," she said.
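The trade Lin describes can be sketched in a few lines. The linear-combination structure below is an illustrative assumption, not the actual SmartExchange algorithm, which enforces its own structure on a network's weights:

```python
# Sketch of the SmartExchange idea: keep a small "basis" subset near
# the processor and recompute the other parameters instead of
# fetching each one from DRAM. Illustrative only -- the real
# algorithm learns the structure rather than assuming it.

def reconstruct(basis, coeffs):
    """Rebuild a full parameter vector as linear combinations of the
    few locally stored basis values."""
    return [sum(c * b for c, b in zip(row, basis)) for row in coeffs]

# 10 stored basis values stand in for 1,000 parameters. In practice
# the coefficients are kept sparse and low-precision, so storing
# them is far cheaper than storing full-precision weights.
basis = [float(i) for i in range(1, 11)]    # kept close to the processor
coeffs = [[(i + j) % 3 for j in range(10)]  # cheap-to-store structure
          for i in range(1000)]

params = reconstruct(basis, coeffs)
print(len(params))  # 1000 parameters recovered without 1,000 DRAM reads
```

Each reconstruction costs a handful of multiply-adds, which, as the quote above notes, can be far cheaper than the DRAM accesses it replaces.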

The researchers used the SmartExchange algorithm and their custom hardware accelerator to experiment on seven benchmark deep neural network models and three benchmark datasets. They found the combination reduced latency by as much as 19 times compared to state-of-the-art deep neural network accelerators.

TIMELY study co-authors include Weitao Li, Pengfei Xu and Yang Zhao, all of Rice, Haitong Li of Stanford University and Yuan Xie of the University of California, Santa Barbara (UCSB). SmartExchange study co-authors include Zhao, Yue Wang, Chaojian Li, Haoran You and Yonggan Fu, all of Rice, Xie, and Xiaohan Chen and Zhangyang Wang of Texas A&M University.

The research was supported by the National Science Foundation (937592 and 1937588) and the National Institutes of Health (R01HL144683).

Both papers are available at: https://eiclab.net/publications/.

IMAGES are available for download at:


CAPTION: Yingyan Lin (Photo courtesy of Rice University)


CAPTION: Rice University researchers have demonstrated methods for both designing data-centric computing hardware and co-designing hardware with machine-learning algorithms that together can improve energy efficiency in artificial intelligence hardware by as much as two orders of magnitude.

This release can be found online at https://news.rice.edu/2020/06/11/rice-engineers-offer-smart-timely-ideas-for-ai-bottlenecks/

Follow Rice News and Media Relations via Twitter @RiceUNews.

Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation's top 20 universities by U.S. News & World Report. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy. With 3,962 undergraduates and 3,027 graduate students, Rice's undergraduate student-to-faculty ratio is just under 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for lots of race/class interaction and No. 4 for quality of life by the Princeton Review. Rice is also rated as a best value among private universities by Kiplinger's Personal Finance.

Jeff Falk

Mike Williams

Rice University
