A new parallel strategy for tackling turbulence on Summit

November 13, 2019

Turbulence, the state of disorderly fluid motion, is a scientific puzzle of great complexity. Turbulence permeates many applications in science and engineering, including combustion, pollutant transport, weather forecasting, astrophysics, and more. One of the challenges facing scientists who simulate turbulence lies in the wide range of scales they must capture to accurately understand the phenomenon. These scales can span several orders of magnitude and can be difficult to capture within the constraints of the available computing resources.

High-performance computing can stand up to this challenge when paired with the right scientific code, but simulating turbulent flows at problem sizes beyond the current state of the art requires new thinking in concert with top-of-the-line heterogeneous platforms.

A team led by P. K. Yeung, professor of aerospace engineering and mechanical engineering at the Georgia Institute of Technology, performs direct numerical simulations (DNS) of turbulence using the team's new code, GPUs for Extreme-Scale Turbulence Simulations (GESTS). DNS can accurately capture the details that arise from a wide range of scales. Earlier this year, the team developed a new algorithm optimized for the IBM AC922 Summit supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). With the new algorithm, the team reached a performance of less than 15 seconds of wall-clock time per time step for more than 6 trillion grid points in space--a world record that surpasses the prior state of the art for a problem of this size.

The simulations the team conducts on Summit are expected to clarify important issues regarding rapidly churning turbulent fluid flows, which will have a direct impact on the modeling of reacting flows in engines and other types of propulsion systems.

GESTS is a computational fluid dynamics code in the Center for Accelerated Application Readiness at the OLCF, a US Department of Energy (DOE) Office of Science User Facility at DOE's Oak Ridge National Laboratory. At the heart of GESTS is a basic math algorithm that computes large-scale, distributed fast Fourier transforms (FFTs) in three spatial directions.

An FFT is a math algorithm that converts a signal (or a field) from its original time or space domain to a representation in the frequency (or wave-number) domain--and vice versa for the inverse transform. Yeung's team applies a large number of FFTs to accurately solve the fundamental partial differential equation of fluid dynamics, the Navier-Stokes equation, using an approach known in mathematics and scientific computing as "pseudospectral methods."
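As a minimal illustration of the forward and inverse transforms described above, here is a one-dimensional sketch using NumPy (the grid size and test signal are arbitrary choices for demonstration, not the team's actual setup):

```python
import numpy as np

# Sample a signal made of two sine waves on a periodic domain.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
signal = np.sin(3.0 * x) + 0.5 * np.sin(7.0 * x)

# Forward FFT: space domain -> wave-number domain.
spectrum = np.fft.fft(signal)

# The two sine waves show up as spectral peaks at wave numbers 3 and 7.
peaks = sorted(np.argsort(np.abs(spectrum[: n // 2]))[-2:])

# Inverse FFT recovers the original signal (up to round-off error).
recovered = np.fft.ifft(spectrum).real
```

In a pseudospectral solver, derivatives are computed by multiplying each spectral coefficient by its wave number, which is what makes fast, accurate FFTs so central to the method.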

Most simulations using massive CPU-based parallelism will partition a 3D solution domain, or the volume of space where a fluid flow is computed, along two directions into many long "data boxes," or "pencils." However, when Yeung's team met at an OLCF GPU Hackathon in late 2017 with mentor David Appelhans, a research staff member at IBM, the group conceived of an innovative idea. They would combine two different approaches to tackle the problem. They would first partition the 3D domain in one direction, forming a number of data "slabs" on Summit's large-memory CPUs, then further parallelize within each slab using Summit's GPUs.
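The difference between the two decompositions can be sketched with simple shape arithmetic. The grid size, node count, and slab count below are illustrative stand-ins, not the actual run configuration:

```python
n = 768                 # grid points per direction (illustrative)
nodes = 8               # hypothetical node count
gpus_per_node = 6       # as on Summit

# Pencil decomposition: partition along two directions across all
# processes, leaving many long, thin "data boxes."
procs_y, procs_z = nodes, gpus_per_node
pencil_shape = (n, n // procs_y, n // procs_z)

# Slab decomposition: partition along one direction only, so each slab
# fits in a single node's large CPU memory (up to two slabs per node).
slabs = nodes * 2
slab_shape = (n // slabs, n, n)

# Within a node, each slab is further split across the six GPUs.
gpu_piece = (n // slabs, n // gpus_per_node, n)

# Sanity check: the slabs together cover the full grid.
assert slabs * slab_shape[0] * slab_shape[1] * slab_shape[2] == n ** 3
```

The payoff of the slab layout is locality: the data a slab needs for an FFT in two of the three directions already resides on its own node.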

The team identified the most time-intensive parts of a base CPU code and set out to design a new algorithm that would reduce the cost of these operations, push the limits of the largest problem size possible, and take advantage of the unique data-centric characteristics of Summit, the world's most powerful and smartest supercomputer for open science.

"We designed this algorithm to be one of hierarchical parallelism to ensure that it would work well on a hierarchical system," Appelhans said. "We put up to two slabs on a node, but because each node has 6 GPUs, we broke each slab up and put those individual pieces on different GPUs."

In the past, pencils may have been distributed among many nodes, but the team's method makes use of Summit's on-node communication and its large amount of CPU memory to fit entire data slabs on single nodes.

"We were originally planning on running the code with the memory residing on the GPU, which would have limited us to smaller problem sizes," Yeung said. "However, at the OLCF GPU Hackathon, we realized that the NVLink connection between the CPU and the GPU is so fast that we could actually maximize the use of the 512 gigabytes of CPU memory per node."

The realization drove the team to adapt some of the main pieces of the code (kernels) for GPU data movement and asynchronous processing, which allows computation and data movement to occur simultaneously. These innovative kernels transformed the code, allowing the team to solve much larger problems at a much faster rate than ever before.
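A CPU-only sketch of the overlap idea, using double buffering with a thread pool to stand in for GPU copy streams (the chunked FFT and all sizes here are purely illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def transfer(chunk):
    # Stand-in for an asynchronous host-to-device copy.
    return chunk.copy()

def compute(chunk):
    # Stand-in for a GPU kernel; here, a 1D FFT on the chunk.
    return np.fft.fft(chunk)

rng = np.random.default_rng(0)
chunks = [rng.random(1024) for _ in range(4)]
results = []

with ThreadPoolExecutor(max_workers=1) as pool:
    # Kick off the first transfer, then overlap each compute
    # with the transfer of the next chunk.
    pending = pool.submit(transfer, chunks[0])
    for nxt in chunks[1:]:
        ready = pending.result()
        pending = pool.submit(transfer, nxt)   # next chunk moves while...
        results.append(compute(ready))         # ...the current one is processed
    results.append(compute(pending.result()))
```

When transfer and compute take comparable time, this pipelining can hide most of the data-movement cost, which is the same effect the asynchronous GPU kernels exploit.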

The team's success proved that even large, communication-dominated applications can benefit greatly from the world's most powerful supercomputer when code developers integrate the heterogeneous architecture into the algorithm design.

Coalescing into success

One of the key ingredients to the team's success was a perfect fit between the Georgia Tech team's long-held domain science expertise and Appelhans' innovative thinking and deep knowledge of the machine.

Also crucial to the achievement were the OLCF's early-access Ascent and Summitdev systems, a million-node-hour allocation on Summit provided by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, jointly managed by the Argonne and Oak Ridge Leadership Computing Facilities, and the Summit Early Science Program in 2019.

Oscar Hernandez, tools developer at the OLCF, helped the team navigate challenges throughout the project. One such challenge was figuring out how to run each single parallel process (which obeys the message passing interface [MPI] standard) on the CPU in conjunction with multiple GPUs. Typically, one or more MPI processes are tied to a single GPU, but the team found that using multiple GPUs per MPI process allows the MPI processes to send and receive a smaller number of larger messages than the team originally planned. Using the OpenMP programming model, Hernandez helped the team reduce the number of MPI tasks, improving the code's communication performance and thereby leading to further speedups.
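The effect on message counts can be seen with simple arithmetic: in an all-to-all exchange among P ranks, each rank sends a message to every other rank, so shrinking the rank count reduces the message count roughly quadratically while each message grows. The node and rank counts below are hypothetical, for illustration only:

```python
nodes = 1024            # hypothetical node count
gpus_per_node = 6       # as on Summit

def all_to_all_messages(ranks):
    # Each rank exchanges one message with every other rank.
    return ranks * (ranks - 1)

# One MPI rank per GPU: many ranks, many small messages.
small_scheme = all_to_all_messages(nodes * gpus_per_node)

# One MPI rank per node driving all six GPUs: ~36x fewer, larger messages.
large_scheme = all_to_all_messages(nodes)

assert large_scheme < small_scheme
```

Fewer, larger messages amortize per-message latency and software overhead, which is why reducing the MPI task count improved communication performance.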

Kiran Ravikumar, a Georgia Tech doctoral student on the project, will present details of the algorithm within the technical program of the 2019 Supercomputing Conference, SC19.

The team plans to use the code to make further inroads into the mysteries of turbulence; they will also introduce other physical phenomena such as oceanic mixing and electromagnetic fields into the code in the future.

"This code, and its future versions, will provide exciting opportunities for major advances in the science of turbulence, with insights of generality bearing upon turbulent mixing in many natural and engineered environments," Yeung said.

Related Publication: K. Ravikumar, D. Appelhans, and P. K. Yeung, "GPU Acceleration of Extreme Scale Pseudo-Spectral Simulations of Turbulence using Asynchronism." Paper to be presented at the 2019 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC19), Denver, CO, November 17-22, 2019.

DOE/Oak Ridge National Laboratory
