System opens up high-performance programming to nonexperts

November 07, 2016

Dynamic programming is a technique that can yield relatively efficient solutions to computational problems in economics, genomic analysis, and other fields. But adapting it to computer chips with multiple "cores," or processing units, requires a level of programming expertise that few economists and biologists have.

Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stony Brook University aim to change that, with a new system that allows users to describe what they want their programs to do in very general terms. It then automatically produces versions of those programs that are optimized to run on multicore chips. It also guarantees that the new versions will yield exactly the same results that the single-core versions would, albeit much faster.

In experiments, the researchers used the system to "parallelize" several algorithms that used dynamic programming, splitting them up so that they would run on multicore chips. The resulting programs were between three and 11 times as fast as those produced by earlier techniques for automatic parallelization, and they were generally as efficient as those that were hand-parallelized by computer scientists.

The researchers presented their new system last week at the Association for Computing Machinery's conference on Systems, Programming, Languages and Applications: Software for Humanity.

Dynamic programming offers exponential speedups on a certain class of problems because it stores and reuses the results of computations, rather than recomputing them every time they're required.

"But you need more memory, because you store the results of intermediate computations," says Shachar Itzhaky, first author on the new paper and a postdoc in the group of Armando Solar-Lezama, an associate professor of electrical engineering and computer science at MIT. "When you come to implement it, you realize that you don't get as much speedup as you thought you would, because the memory is slow. When you store and fetch, of course, it's still faster than redoing the computation, but it's not as fast as it could have been."

Outsourcing complexity

Computer scientists avoid this problem by reordering computations so that those requiring a particular stored value are executed in sequence, minimizing the number of times that the value has to be recalled from memory. That's relatively easy to do on a single-core computer, but on multicore computers, where multiple cores share data stored at multiple locations, memory management becomes much more complex. A hand-optimized, parallel version of a dynamic-programming algorithm is typically 10 times as long as the single-core version, and the individual lines of code are more complex, to boot.
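
Loop tiling is a standard way to picture that reordering. The sketch below (an illustration using matrix multiplication as a stand-in computation, not Bellmania's output) groups together all the work that touches a small block of the data, so each value fetched from slow memory is reused many times before the computation moves on.

```python
def matmul_tiled(A, B, n, tile=64):
    # Multiply two n-by-n matrices block by block. All the work that
    # touches a given tile of A and B happens together, so a value
    # fetched from slow memory is reused many times while it is still
    # in the cache -- the reordering idea described above.
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C
```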

The CSAIL researchers' new system -- dubbed Bellmania, after Richard Bellman, the applied mathematician who pioneered dynamic programming -- adopts a parallelization strategy called recursive divide-and-conquer. Suppose that the task of a parallel algorithm is to perform a sequence of computations on a grid of numbers, known as a matrix. Its first task might be to divide the grid into four parts, each to be processed separately.

But then it might divide each of those four parts into four parts, and each of those into another four parts, and so on. Because this approach -- recursion -- involves breaking a problem into smaller subproblems, it naturally lends itself to parallelization.
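
In code, the recursive skeleton looks roughly like the following Python sketch. It is illustrative only: in real dynamic-programming problems the quadrants usually depend on one another, and ordering those dependencies correctly is exactly the scheduling problem Bellmania solves.

```python
def solve(matrix, top, left, size, base=64):
    # For simplicity, assume size is a power of two.
    if size <= base:
        # Small enough: process the tile directly.
        for i in range(top, top + size):
            for j in range(left, left + size):
                matrix[i][j] += 1.0  # stand-in for the real per-cell work
        return
    # Otherwise split the square region into four quadrants and recurse.
    # Quadrants that do not depend on one another can be handed to
    # different cores.
    half = size // 2
    solve(matrix, top,        left,        half, base)
    solve(matrix, top,        left + half, half, base)
    solve(matrix, top + half, left,        half, base)
    solve(matrix, top + half, left + half, half, base)
```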

Joining Itzhaky on the new paper are Solar-Lezama; Charles Leiserson, the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science; Rohit Singh and Kuat Yessenov, who were both MIT graduate students in electrical engineering and computer science when the work was done; Yongquan Lu, an MIT undergraduate who participated in the project through MIT's Undergraduate Research Opportunities Program; and Rezaul Chowdhury, an assistant professor of computer science at Stony Brook, who was formerly a research affiliate in Leiserson's group.

Leiserson's group specializes in divide-and-conquer parallelization techniques; Solar-Lezama's specializes in program synthesis, or automatically generating code from high-level specifications. With Bellmania, the user simply has to describe the first step of the process -- the division of the matrix and the procedures to be applied to the resulting segments. Bellmania then determines how to continue subdividing the problem so as to use memory efficiently.

Rapid search

At each level of recursion -- with each successively smaller subdivision of the matrix -- a program generated by Bellmania will typically perform some operation on some segment of the matrix and farm the rest out to subroutines, which can be performed in parallel. Each of those subroutines, in turn, will perform some operation on some segment of the data and farm the rest out to further subroutines, and so on.
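
A drastically simplified sketch of that fork-join pattern, one level deep, might look like this in Python (the segment splitting and the per-segment work here are invented for illustration; Bellmania's generated code handles dependent segments and many levels of recursion):

```python
from concurrent.futures import ProcessPoolExecutor

def process_segment(rows):
    # Stand-in subroutine: whatever per-segment work the algorithm needs.
    return [[x + 1.0 for x in row] for row in rows]

def solve_parallel(matrix, workers=4):
    # The pattern described above, one level deep: split the data into
    # independent segments and farm them out to parallel subroutines.
    n = len(matrix)
    chunk = (n + workers - 1) // workers
    segments = [matrix[i:i + chunk] for i in range(0, n, chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(process_segment, segments)
    return [row for segment in results for row in segment]

if __name__ == "__main__":
    out = solve_parallel([[0.0] * 8 for _ in range(8)])
```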

Bellmania determines how much data should be processed at each level and which subroutines should handle the rest. "The goal is to arrange the memory accesses such that when you read a cell [of the matrix], you do as much computation as you can with it, so that you will not have to read it again later," Itzhaky says.

Finding the optimal division of tasks requires canvassing a wide range of possibilities. Solar-Lezama's group has developed a suite of tools to make that type of search more efficient; even so, Bellmania takes about 15 minutes to parallelize a typical dynamic-programming algorithm. Still, that's much faster than a human programmer could do the same job. And the result is guaranteed to be correct; hand-optimized code is so complex that it's easy for errors to creep in.


Massachusetts Institute of Technology
