New method to ensure reproducibility in computational experiments

April 25, 2017

Research reproducibility is crucial for science to move forward. Unfortunately, according to recent studies and surveys, the number of irreproducible experiments is increasing, and reproducibility is now recognized as one of the major challenges that scientists, institutions, funders and journals must address for science to remain credible and to keep progressing.

To make sense of genomic data, scientists increasingly rely on combinations of computer programs known as pipelines. These pipelines process raw data and deliver analytical results, such as estimates of genetic risk. Unfortunately, the results of these pipelines are not always reproducible. In the era of precision medicine, this limited reproducibility can have important implications for our health.

Now, a team of researchers at the Centre for Genomic Regulation (CRG) in Barcelona, Spain, led by Cedric Notredame, has developed a workflow management system that ensures reproducibility in computational experiments. The system, named Nextflow, is described in the current issue of Nature Biotechnology. "When doing computational analysis, tiny variations across computational platforms can induce numerical instability that results in irreproducibility. Nextflow allows scientists to avoid these variations and contributes to standardizing good practice in computational experiments," explains Cedric Notredame, lead author of the paper.

"A small variation may not seem to be a problem when using genomic data in a particular research project but, even the smallest variations may be crucial if we are using these conclusions to take a decision, for instance on a precision medicine treatment." adds Paolo Di Tommaso, first author of the paper. "Irreproducibility will be a major issue in precision medicine" he concludes.

Containing irreproducibility

The main reason for irreproducibility is the complexity of modern computers. With all the libraries and software they contain, computers are like machines made of billions of moving parts. Even when using exactly the same pipeline and the same data, slight variations across computers can lead to irreproducible results. The solution is to provide not only the data and the software, but also the complete pre-configured execution environment, packaged with a new generation of virtualization technology known as containers. The CRG team built Nextflow as a tool that manages a computational workflow together with its dependencies by using these containers. "It is like freezing the experiment, so everyone aiming to reproduce it can do so in the same way without having to manually re-introduce complex configurations. This guarantees that the same dataset will produce the same results anywhere," the authors explain.
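
As an illustration only (this sketch is not taken from the paper; the step, file names and container image are hypothetical), a Nextflow process can pin its software to a specific container image, so the same execution environment is recreated wherever the pipeline runs:

    #!/usr/bin/env nextflow

    // Illustrative input data; the path is an assumption, not from the paper.
    params.reads = 'data/sample.fastq'
    reads_ch = Channel.fromPath(params.reads)

    process qualityCheck {
        // The whole execution environment comes from this container image
        // (an example image name), so every user runs the same software stack.
        container 'quay.io/biocontainers/fastqc:0.11.9--0'

        input:
        file reads from reads_ch

        output:
        file 'fastqc_output' into results_ch

        """
        mkdir fastqc_output
        fastqc $reads -o fastqc_output
        """
    }

Because the container image, not the host machine, supplies the libraries and tools, rerunning the pipeline elsewhere does not require re-creating any local configuration by hand.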

Nextflow helps integrate the most sophisticated resources for reproducibility: Zenodo for data, GitHub and Docker for software, and the cloud for computation. It marks a turning point for good practice in the computational processing of large datasets. The CRG is now committed to helping promote this important aspect of modern biology by making the new resource available not only for academic research but also for clinical and commercial production. It is also organizing a series of courses and workshops dedicated to the use of Nextflow and its uptake by the community.
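
For instance (again a hypothetical sketch rather than an excerpt from the paper), a single setting in a pipeline's nextflow.config enables container execution, and a pipeline hosted on GitHub can be fetched and run in one command:

    // nextflow.config
    docker.enabled = true   // run each process inside its declared container

    // Fetch and run a GitHub-hosted pipeline, e.g. the public 'hello' example:
    //   nextflow run nextflow-io/hello -with-docker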
-end-


Centre for Genomic Regulation
