
Big PanDA tackles big data for physics and other future extreme scale scientific applications

August 16, 2016

UPTON, NY -- A billion times per second, particles zooming through the Large Hadron Collider (LHC) at CERN, the European Organization for Nuclear Research, smash into one another at nearly the speed of light, emitting subatomic debris that could help unravel the secrets of the universe. Collecting the data from those collisions and making it accessible to more than 6,000 scientists in 45 countries, each potentially wanting to slice and analyze it in their own unique ways, is a monumental challenge that pushes the limits of the Worldwide LHC Computing Grid (WLCG), the current infrastructure for handling the LHC's computing needs. With the move to higher collision energies at the LHC, the demand just keeps growing.

To help meet this unprecedented demand and supplement the WLCG, a group of scientists working at U.S. Department of Energy (DOE) national laboratories and collaborating universities has developed a way to fit some of the LHC simulations that demand high computing power into untapped pockets of available computing time on one of the nation's most powerful supercomputers -- similar to the way tiny pebbles can fill the empty spaces between larger rocks in a jar. The group -- from DOE's Brookhaven National Laboratory, Oak Ridge National Laboratory (ORNL), the University of Texas at Arlington, Rutgers University, and the University of Tennessee, Knoxville -- just received $2.1 million in funding for 2016-2017 from DOE's Advanced Scientific Computing Research (ASCR) program to enhance this "workload management system," known as Big PanDA, so it can help handle the LHC data demands and be used as a general workload management service at DOE's Oak Ridge Leadership Computing Facility (OLCF, https://www.olcf.ornl.gov/), a DOE Office of Science User Facility at ORNL.

"The implementation of these ideas in an operational-scale demonstration project at OLCF could potentially increase the use of available resources at this Leadership Computing Facility by five to ten percent," said Brookhaven physicist Alexei Klimentov, a leader on the project. "Mobilizing these previously unusable supercomputing capabilities, valued at millions of dollars per year, could quickly and effectively enable cutting-edge science in many data-intensive fields."

Proof-of-concept tests using the Titan supercomputer at Oak Ridge National Laboratory have been highly successful. This Leadership Computing Facility typically handles large jobs that are fit together to maximize its use. But even when fully subscribed, some 10 percent of Titan's computing capacity might be sitting idle -- too small to take on another substantial "leadership class" job, but just right for handling smaller chunks of number crunching. The Big PanDA (for Production and Distributed Analysis) system takes advantage of these unused pockets by breaking up complex data analysis jobs and simulations for the LHC's ATLAS and ALICE experiments and "feeding" them into the "spaces" between the leadership computing jobs. When enough capacity is available to run a new big job, the smaller chunks get kicked out and reinserted to fill in any remaining idle time.
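The "pebbles between rocks" idea can be pictured with a short, simplified sketch. The Python below is purely illustrative -- the Chunk and backfill names, core counts, and job sizes are assumptions made for this article, not PanDA's actual interfaces -- but it captures the basic decision: a simulation chunk is submitted only if it fits within the cores and time left in an idle gap.

    # Illustrative only: a toy backfill decision in the spirit of Big PanDA.
    # Names (Chunk, backfill) and numbers are hypothetical, not PanDA's real API.
    from dataclasses import dataclass

    @dataclass
    class Chunk:
        name: str     # a small, self-contained piece of a simulation job
        cores: int    # cores the chunk needs
        hours: float  # wall-clock hours the chunk needs

    def backfill(idle_cores, window_hours, queue):
        """Pick chunks that fit the idle gap between two leadership-class jobs."""
        scheduled = []
        for chunk in queue:
            # Accept a chunk only if it fits both the free cores and the time
            # remaining before the next big job is due to start.
            if chunk.cores <= idle_cores and chunk.hours <= window_hours:
                scheduled.append(chunk)
                idle_cores -= chunk.cores
        return scheduled

    # Example: 20,000 cores are idle for the next 2 hours.
    queue = [Chunk("atlas-sim-001", 4096, 1.5), Chunk("atlas-sim-002", 16384, 3.0)]
    print([c.name for c in backfill(20000, 2.0, queue)])  # only the chunk that fits is run

In the real system, chunks displaced when a large job arrives are simply resubmitted to the next available gap, so the supercomputer's primary workload is never delayed.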

"Our team has managed to access opportunistic cycles available on Titan with no measurable negative effect on the supercomputer's ability to handle its usual workload," Klimentov said. He and his collaborators estimate that up to 30 million core hours or more per month may be harvested using the Big PanDA approach. From January through July of 2016, ATLAS detector simulation jobs ran for 32.7 million core hours on Titan, using only opportunistic, backfill resources. The results of the supercomputing calculations are shipped to and stored at the RHIC & ATLAS Computing Facility, a Tier 1 center for the WLCG located at Brookhaven Lab, so they can be made available to ATLAS researchers across the U.S. and around the globe.

The goal now is to translate the success of the Big PanDA project into operational advances that will enhance how the OLCF handles all of its data-intensive computing jobs. This approach will provide an important model for future exascale computing, increasing the coherence between the technology base used for high-performance, scalable modeling and simulation and that used for data-analytic computing.

"This is a novel and unique approach to workload management that could run on all current and future leadership computing facilities," Klimentov said.

Specifically, the new funding will help the team develop a production-scale operational demonstration of the PanDA workflow within the OLCF computational and data resources; integrate OLCF and other leadership facilities with the Grid and Clouds; and help high-energy and nuclear physicists at ATLAS and ALICE -- experiments that expect to collect 10 to 100 times more data during the next 3 to 5 years -- achieve scientific breakthroughs at times of peak LHC demand.

As a unifying workload management system, Big PanDA will also help integrate Grid, leadership-class supercomputers, and Cloud computing into a heterogeneous computing architecture accessible to scientists all over the world as a step toward a global cyberinfrastructure.

"The integration of heterogeneous computing centers into a single federated distributed cyberinfrastructure will allow more efficient utilization of computing and disk resources for a wide range of scientific applications," said Klimentov, noting how the idea mirrors Aristotle's assertion that "the whole is greater than the sum of its parts."

-end-

This project is supported by the DOE Office of Science.

Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. Brookhaven is operated and managed for DOE's Office of Science by Brookhaven Science Associates, a limited-liability company founded by the Research Foundation for the State University of New York on behalf of Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit applied science and technology organization.

Media contacts: Karen McNulty Walsh, (631) 344-8350, kmcnulty@bnl.gov, or Peter Genzer, (631) 344-3174, genzer@bnl.gov

DOE/Brookhaven National Laboratory
