Novel software to balance data processing load in supercomputers to be presented

April 30, 2019

The modern adage "work smarter, not harder" stresses that productivity depends not only on effort, but also on making efficient use of resources.

Supercomputers, however, do not always do this well, especially when it comes to managing huge amounts of data.

But a team of researchers in the Department of Computer Science in Virginia Tech's College of Engineering is helping supercomputers to work more efficiently in a novel way, using machine learning to properly distribute, or load balance, data processing tasks across the thousands of servers that comprise a supercomputer.

By incorporating machine learning to predict not only tasks but types of tasks, researchers found that load on various servers can be kept balanced throughout the entire system. The team will present its research in Rio de Janeiro, Brazil, at the 33rd International Parallel and Distributed Processing Symposium on May 22, 2019.

Current data management systems in supercomputing rely on approaches that assign tasks in a round-robin manner to servers without regard to the kind of task or amount of data it will burden the server with. When load on servers is not balanced, systems get bogged down by stragglers, and performance is severely degraded.
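The imbalance problem is easy to make concrete. The sketch below contrasts round-robin assignment with a simple load-aware greedy policy; the greedy heuristic is only an illustration of the idea of considering load, not the authors' algorithm:

```python
# Illustrative sketch (not the authors' code): round-robin assignment
# ignores task size, so a few servers can end up as overloaded stragglers.
from itertools import cycle

def round_robin_assign(task_sizes, num_servers):
    """Assign each task to the next server in turn, ignoring its size."""
    loads = [0] * num_servers
    server = cycle(range(num_servers))
    for size in task_sizes:
        loads[next(server)] += size
    return loads

def load_aware_assign(task_sizes, num_servers):
    """Greedy alternative: send each task to the least-loaded server."""
    loads = [0] * num_servers
    for size in sorted(task_sizes, reverse=True):
        loads[loads.index(min(loads))] += size
    return loads

tasks = [100, 1, 1, 100, 1, 1, 100, 1, 1]  # mixed large and small tasks
print(round_robin_assign(tasks, 3))  # one server receives all three large tasks
print(load_aware_assign(tasks, 3))   # work is spread evenly
```

With the task pattern above, round-robin sends every large task to the same server while the other two sit nearly idle, which is exactly the straggler effect described here.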

"Supercomputing systems are harbingers of American competitiveness in high-performance computing," said Ali R. Butt, professor of computer science. "They are crucial to not only achieving scientific breakthroughs but maintaining the efficacy of systems that allow us to conduct the business of our everyday lives, from using streaming services to watch movies to processing online financial transactions to forecasting weather systems using weather modeling."

In order to implement a system to use machine learning, the team built a novel end-to-end control plane that combined the application-centric strengths of client-side approaches with the system-centric strengths of server-side approaches.

"This study was a giant leap in managing supercomputing systems. What we've done has given supercomputing a performance boost and proven these systems can be managed smartly in a cost-effective way through machine learning," said Bharti Wadhwa, first author on the paper and a Ph.D. candidate in the Department of Computer Science. "We have given users the capability of designing systems without incurring a lot of cost."

The novel technique gave the team "eyes" to monitor the system and allowed the data storage system to learn and predict when larger loads might be coming down the pike or when the load on one server became too great. The system also provided real-time information in an application-agnostic way, creating a global view of what was happening in the system. Previously, servers couldn't learn, and software applications weren't nimble enough to be customized without major redesign.

"The algorithm predicted the future requests of applications via a time-series model," said Arnab K. Paul, second author and Ph.D. candidate also in the Department of Computer Science. "This ability to learn from data gave us a unique opportunity to see how we could place future requests in a load balanced manner."

The end-to-end system also gave users an unprecedented ability to benefit from the load-balanced setup without changing their source code. In traditional supercomputing systems this is a costly procedure, as it requires altering the foundation of the application code.

"It was a privilege to contribute to the field of supercomputing with this team," said Sarah Neuwirth, a postdoctoral researcher from the University of Heidelberg's Institute of Computer Engineering. "For supercomputing to evolve and meet the challenges of a 21st-century society, we will need to lead international efforts such as this. My own work with commonly used supercomputing systems benefited greatly from this project."

The end-to-end control plane worked by having storage servers post their usage information to the metadata server. An autoregressive integrated moving average (ARIMA) time-series model predicted future requests with approximately 99 percent accuracy; these predictions were sent to the metadata server, which mapped them to storage servers using a minimum-cost maximum-flow graph algorithm.
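As a rough sketch of that mapping step, the snippet below solves a toy minimum-cost maximum-flow instance using successive shortest paths. The graph layout (a source, one node per storage server, a sink) and the per-unit costs are illustrative assumptions, not the paper's actual formulation:

```python
# Hedged sketch: a small minimum-cost maximum-flow solver (successive
# shortest paths with Bellman-Ford) spreading predicted request units
# across storage servers. Edge costs favor lightly loaded servers.

class MinCostFlow:
    def __init__(self, n):
        self.n = n
        self.graph = [[] for _ in range(n)]  # adjacency lists of edges

    def add_edge(self, u, v, cap, cost):
        # forward edge and zero-capacity residual back edge
        self.graph[u].append([v, cap, cost, len(self.graph[v])])
        self.graph[v].append([u, 0, -cost, len(self.graph[u]) - 1])

    def flow(self, s, t, maxf):
        total_flow, total_cost = 0, 0
        while total_flow < maxf:
            # Bellman-Ford style shortest path by cost in the residual graph
            dist = [float('inf')] * self.n
            in_queue = [False] * self.n
            prev = [None] * self.n  # (node, edge index) on the best path
            dist[s] = 0
            queue = [s]
            while queue:
                u = queue.pop(0)
                in_queue[u] = False
                for i, (v, cap, cost, _) in enumerate(self.graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, i)
                        if not in_queue[v]:
                            in_queue[v] = True
                            queue.append(v)
            if dist[t] == float('inf'):
                break  # no more augmenting paths
            # find the bottleneck capacity along the path, then augment
            push = maxf - total_flow
            v = t
            while v != s:
                u, i = prev[v]
                push = min(push, self.graph[u][i][1])
                v = u
            v = t
            while v != s:
                u, i = prev[v]
                self.graph[u][i][1] -= push
                self.graph[v][self.graph[u][i][3]][1] += push
                v = u
            total_flow += push
            total_cost += push * dist[t]
        return total_flow, total_cost

# Toy instance: 20 units of predicted requests, three servers with free
# capacity 10 each and per-unit cost equal to their current load.
S, A, B, C, T = 0, 1, 2, 3, 4
mcf = MinCostFlow(5)
for server, load in [(A, 5), (B, 1), (C, 9)]:
    mcf.add_edge(S, server, 10, load)  # cost steers flow to light servers
    mcf.add_edge(server, T, 10, 0)
flow, cost = mcf.flow(S, T, 20)
print(flow, cost)  # the solver fills server B first, then A, avoiding C
```

The solver routes all 20 units through the two least-loaded servers, which is the load-balancing behavior the min-cost max-flow formulation is meant to produce.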

This research is funded by the National Science Foundation and conducted in collaboration with the National Leadership Computing Facility at Oak Ridge National Laboratory.

Virginia Tech
